Nov 23 23:00:03.795911 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 23 23:00:03.795935 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Nov 23 20:53:53 -00 2025
Nov 23 23:00:03.795959 kernel: KASLR enabled
Nov 23 23:00:03.795966 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Nov 23 23:00:03.795972 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Nov 23 23:00:03.795977 kernel: random: crng init done
Nov 23 23:00:03.795984 kernel: secureboot: Secure boot disabled
Nov 23 23:00:03.795990 kernel: ACPI: Early table checksum verification disabled
Nov 23 23:00:03.795996 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Nov 23 23:00:03.796002 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Nov 23 23:00:03.796010 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:00:03.796016 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:00:03.796021 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:00:03.796027 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:00:03.796035 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:00:03.796043 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:00:03.796049 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:00:03.796055 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:00:03.796062 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:00:03.796068 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 23 23:00:03.796074 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Nov 23 23:00:03.796080 kernel: ACPI: Use ACPI SPCR as default console: No
Nov 23 23:00:03.796086 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Nov 23 23:00:03.796092 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff]
Nov 23 23:00:03.796098 kernel: Zone ranges:
Nov 23 23:00:03.796104 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Nov 23 23:00:03.796111 kernel: DMA32 empty
Nov 23 23:00:03.796117 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Nov 23 23:00:03.796123 kernel: Device empty
Nov 23 23:00:03.796129 kernel: Movable zone start for each node
Nov 23 23:00:03.796135 kernel: Early memory node ranges
Nov 23 23:00:03.796141 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Nov 23 23:00:03.796147 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Nov 23 23:00:03.796153 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Nov 23 23:00:03.796159 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Nov 23 23:00:03.796165 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Nov 23 23:00:03.796171 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Nov 23 23:00:03.796177 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Nov 23 23:00:03.796184 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Nov 23 23:00:03.796190 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Nov 23 23:00:03.796199 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Nov 23 23:00:03.796206 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Nov 23 23:00:03.796212 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Nov 23 23:00:03.796220 kernel: psci: probing for conduit method from ACPI.
Nov 23 23:00:03.796226 kernel: psci: PSCIv1.1 detected in firmware.
Nov 23 23:00:03.796233 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 23 23:00:03.796239 kernel: psci: Trusted OS migration not required
Nov 23 23:00:03.796245 kernel: psci: SMC Calling Convention v1.1
Nov 23 23:00:03.796252 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Nov 23 23:00:03.796258 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Nov 23 23:00:03.796265 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Nov 23 23:00:03.796271 kernel: pcpu-alloc: [0] 0 [0] 1
Nov 23 23:00:03.796277 kernel: Detected PIPT I-cache on CPU0
Nov 23 23:00:03.796284 kernel: CPU features: detected: GIC system register CPU interface
Nov 23 23:00:03.796291 kernel: CPU features: detected: Spectre-v4
Nov 23 23:00:03.796298 kernel: CPU features: detected: Spectre-BHB
Nov 23 23:00:03.796304 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 23 23:00:03.796311 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 23 23:00:03.796317 kernel: CPU features: detected: ARM erratum 1418040
Nov 23 23:00:03.796324 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 23 23:00:03.799424 kernel: alternatives: applying boot alternatives
Nov 23 23:00:03.799439 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=4db094b704dd398addf25219e01d6d8f197b31dbf6377199102cc61dad0e4bb2
Nov 23 23:00:03.799447 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 23 23:00:03.799454 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 23 23:00:03.799461 kernel: Fallback order for Node 0: 0
Nov 23 23:00:03.799474 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000
Nov 23 23:00:03.799480 kernel: Policy zone: Normal
Nov 23 23:00:03.799487 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 23 23:00:03.799493 kernel: software IO TLB: area num 2.
Nov 23 23:00:03.799500 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Nov 23 23:00:03.799506 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 23 23:00:03.799513 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 23 23:00:03.799520 kernel: rcu: RCU event tracing is enabled.
Nov 23 23:00:03.799527 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 23 23:00:03.799534 kernel: Trampoline variant of Tasks RCU enabled.
Nov 23 23:00:03.799540 kernel: Tracing variant of Tasks RCU enabled.
Nov 23 23:00:03.799547 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 23 23:00:03.799555 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 23 23:00:03.799562 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 23 23:00:03.799569 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 23 23:00:03.799575 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 23 23:00:03.799581 kernel: GICv3: 256 SPIs implemented
Nov 23 23:00:03.799588 kernel: GICv3: 0 Extended SPIs implemented
Nov 23 23:00:03.799595 kernel: Root IRQ handler: gic_handle_irq
Nov 23 23:00:03.799601 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Nov 23 23:00:03.799608 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Nov 23 23:00:03.799614 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Nov 23 23:00:03.799620 kernel: ITS [mem 0x08080000-0x0809ffff]
Nov 23 23:00:03.799629 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1)
Nov 23 23:00:03.799636 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1)
Nov 23 23:00:03.799643 kernel: GICv3: using LPI property table @0x0000000100120000
Nov 23 23:00:03.799649 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000
Nov 23 23:00:03.799655 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 23 23:00:03.799662 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 23 23:00:03.799668 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 23 23:00:03.799675 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 23 23:00:03.799682 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 23 23:00:03.799688 kernel: Console: colour dummy device 80x25
Nov 23 23:00:03.799695 kernel: ACPI: Core revision 20240827
Nov 23 23:00:03.799704 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 23 23:00:03.799711 kernel: pid_max: default: 32768 minimum: 301
Nov 23 23:00:03.799718 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 23 23:00:03.799725 kernel: landlock: Up and running.
Nov 23 23:00:03.799731 kernel: SELinux: Initializing.
Nov 23 23:00:03.799738 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 23 23:00:03.799745 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 23 23:00:03.799752 kernel: rcu: Hierarchical SRCU implementation.
Nov 23 23:00:03.799758 kernel: rcu: Max phase no-delay instances is 400.
Nov 23 23:00:03.799767 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 23 23:00:03.799773 kernel: Remapping and enabling EFI services.
Nov 23 23:00:03.799780 kernel: smp: Bringing up secondary CPUs ...
Nov 23 23:00:03.799786 kernel: Detected PIPT I-cache on CPU1
Nov 23 23:00:03.799793 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Nov 23 23:00:03.799800 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000
Nov 23 23:00:03.799807 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 23 23:00:03.799813 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 23 23:00:03.799820 kernel: smp: Brought up 1 node, 2 CPUs
Nov 23 23:00:03.799827 kernel: SMP: Total of 2 processors activated.
Nov 23 23:00:03.799840 kernel: CPU: All CPU(s) started at EL1
Nov 23 23:00:03.799847 kernel: CPU features: detected: 32-bit EL0 Support
Nov 23 23:00:03.799856 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 23 23:00:03.799863 kernel: CPU features: detected: Common not Private translations
Nov 23 23:00:03.799871 kernel: CPU features: detected: CRC32 instructions
Nov 23 23:00:03.799878 kernel: CPU features: detected: Enhanced Virtualization Traps
Nov 23 23:00:03.799885 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 23 23:00:03.799894 kernel: CPU features: detected: LSE atomic instructions
Nov 23 23:00:03.799901 kernel: CPU features: detected: Privileged Access Never
Nov 23 23:00:03.799908 kernel: CPU features: detected: RAS Extension Support
Nov 23 23:00:03.799915 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 23 23:00:03.799923 kernel: alternatives: applying system-wide alternatives
Nov 23 23:00:03.799930 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Nov 23 23:00:03.799938 kernel: Memory: 3858852K/4096000K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 215668K reserved, 16384K cma-reserved)
Nov 23 23:00:03.799987 kernel: devtmpfs: initialized
Nov 23 23:00:03.799995 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 23 23:00:03.800006 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 23 23:00:03.800013 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 23 23:00:03.800020 kernel: 0 pages in range for non-PLT usage
Nov 23 23:00:03.800028 kernel: 508400 pages in range for PLT usage
Nov 23 23:00:03.800035 kernel: pinctrl core: initialized pinctrl subsystem
Nov 23 23:00:03.800042 kernel: SMBIOS 3.0.0 present.
Nov 23 23:00:03.800049 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Nov 23 23:00:03.800056 kernel: DMI: Memory slots populated: 1/1
Nov 23 23:00:03.800063 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 23 23:00:03.800072 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 23 23:00:03.800080 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 23 23:00:03.800087 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 23 23:00:03.800094 kernel: audit: initializing netlink subsys (disabled)
Nov 23 23:00:03.800101 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Nov 23 23:00:03.800108 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 23 23:00:03.800115 kernel: cpuidle: using governor menu
Nov 23 23:00:03.800122 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 23 23:00:03.800129 kernel: ASID allocator initialised with 32768 entries
Nov 23 23:00:03.800137 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 23 23:00:03.800144 kernel: Serial: AMBA PL011 UART driver
Nov 23 23:00:03.800151 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 23 23:00:03.800158 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 23 23:00:03.800165 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 23 23:00:03.800173 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 23 23:00:03.800180 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 23 23:00:03.800187 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 23 23:00:03.800194 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 23 23:00:03.800202 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 23 23:00:03.800209 kernel: ACPI: Added _OSI(Module Device)
Nov 23 23:00:03.800216 kernel: ACPI: Added _OSI(Processor Device)
Nov 23 23:00:03.800223 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 23 23:00:03.800230 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 23 23:00:03.800237 kernel: ACPI: Interpreter enabled
Nov 23 23:00:03.800244 kernel: ACPI: Using GIC for interrupt routing
Nov 23 23:00:03.800251 kernel: ACPI: MCFG table detected, 1 entries
Nov 23 23:00:03.800258 kernel: ACPI: CPU0 has been hot-added
Nov 23 23:00:03.800267 kernel: ACPI: CPU1 has been hot-added
Nov 23 23:00:03.800274 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Nov 23 23:00:03.800281 kernel: printk: legacy console [ttyAMA0] enabled
Nov 23 23:00:03.800288 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 23 23:00:03.800487 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 23 23:00:03.800556 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 23 23:00:03.800616 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 23 23:00:03.800678 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Nov 23 23:00:03.800735 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Nov 23 23:00:03.800744 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Nov 23 23:00:03.800751 kernel: PCI host bridge to bus 0000:00
Nov 23 23:00:03.800819 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Nov 23 23:00:03.800874 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 23 23:00:03.800927 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Nov 23 23:00:03.800999 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 23 23:00:03.801093 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Nov 23 23:00:03.801167 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint
Nov 23 23:00:03.801229 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff]
Nov 23 23:00:03.801288 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]
Nov 23 23:00:03.803524 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 23 23:00:03.803619 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff]
Nov 23 23:00:03.803692 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 23 23:00:03.803754 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]
Nov 23 23:00:03.803817 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Nov 23 23:00:03.803887 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 23 23:00:03.803963 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff]
Nov 23 23:00:03.804028 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 23 23:00:03.804087 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff]
Nov 23 23:00:03.804165 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 23 23:00:03.804225 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff]
Nov 23 23:00:03.804285 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 23 23:00:03.804371 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff]
Nov 23 23:00:03.804433 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Nov 23 23:00:03.804507 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 23 23:00:03.804567 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff]
Nov 23 23:00:03.804631 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 23 23:00:03.804693 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff]
Nov 23 23:00:03.804752 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Nov 23 23:00:03.804819 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 23 23:00:03.804881 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff]
Nov 23 23:00:03.804942 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 23 23:00:03.805022 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Nov 23 23:00:03.805087 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Nov 23 23:00:03.805157 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 23 23:00:03.805218 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff]
Nov 23 23:00:03.805277 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 23 23:00:03.806466 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff]
Nov 23 23:00:03.806588 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Nov 23 23:00:03.806663 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 23 23:00:03.806733 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff]
Nov 23 23:00:03.806792 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 23 23:00:03.806852 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff]
Nov 23 23:00:03.806913 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref]
Nov 23 23:00:03.807017 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 23 23:00:03.807091 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff]
Nov 23 23:00:03.807168 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 23 23:00:03.807238 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff]
Nov 23 23:00:03.807318 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 23 23:00:03.808493 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff]
Nov 23 23:00:03.808563 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 23 23:00:03.809432 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff]
Nov 23 23:00:03.809526 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 conventional PCI endpoint
Nov 23 23:00:03.809596 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007]
Nov 23 23:00:03.809674 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Nov 23 23:00:03.809737 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff]
Nov 23 23:00:03.809801 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 23 23:00:03.809862 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Nov 23 23:00:03.809936 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Nov 23 23:00:03.810017 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit]
Nov 23 23:00:03.810093 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Nov 23 23:00:03.810157 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff]
Nov 23 23:00:03.810226 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Nov 23 23:00:03.810297 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Nov 23 23:00:03.810461 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Nov 23 23:00:03.810543 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Nov 23 23:00:03.810612 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]
Nov 23 23:00:03.810676 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Nov 23 23:00:03.810757 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Nov 23 23:00:03.810824 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff]
Nov 23 23:00:03.810890 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Nov 23 23:00:03.810984 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Nov 23 23:00:03.811055 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff]
Nov 23 23:00:03.811122 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref]
Nov 23 23:00:03.811185 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Nov 23 23:00:03.811250 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Nov 23 23:00:03.811312 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Nov 23 23:00:03.811424 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Nov 23 23:00:03.811494 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Nov 23 23:00:03.811556 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Nov 23 23:00:03.811621 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Nov 23 23:00:03.811685 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Nov 23 23:00:03.811748 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Nov 23 23:00:03.811809 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Nov 23 23:00:03.811874 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Nov 23 23:00:03.811935 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Nov 23 23:00:03.812052 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Nov 23 23:00:03.812119 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Nov 23 23:00:03.812179 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Nov 23 23:00:03.812238 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Nov 23 23:00:03.812301 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Nov 23 23:00:03.812399 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Nov 23 23:00:03.812463 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Nov 23 23:00:03.812534 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Nov 23 23:00:03.812595 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Nov 23 23:00:03.812653 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Nov 23 23:00:03.812717 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Nov 23 23:00:03.812778 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Nov 23 23:00:03.812837 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Nov 23 23:00:03.812898 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Nov 23 23:00:03.812976 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Nov 23 23:00:03.813038 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Nov 23 23:00:03.813100 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned
Nov 23 23:00:03.813160 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned
Nov 23 23:00:03.813220 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned
Nov 23 23:00:03.813286 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned
Nov 23 23:00:03.813396 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned
Nov 23 23:00:03.813473 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned
Nov 23 23:00:03.813537 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned
Nov 23 23:00:03.813595 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned
Nov 23 23:00:03.813656 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned
Nov 23 23:00:03.813716 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned
Nov 23 23:00:03.813776 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned
Nov 23 23:00:03.813835 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned
Nov 23 23:00:03.813895 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned
Nov 23 23:00:03.813974 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned
Nov 23 23:00:03.814042 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned
Nov 23 23:00:03.814102 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned
Nov 23 23:00:03.814164 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned
Nov 23 23:00:03.814224 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned
Nov 23 23:00:03.814289 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned
Nov 23 23:00:03.819467 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned
Nov 23 23:00:03.819571 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned
Nov 23 23:00:03.819644 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Nov 23 23:00:03.819710 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned
Nov 23 23:00:03.819772 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Nov 23 23:00:03.819838 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned
Nov 23 23:00:03.819904 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Nov 23 23:00:03.820029 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned
Nov 23 23:00:03.820097 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Nov 23 23:00:03.820163 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned
Nov 23 23:00:03.820223 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Nov 23 23:00:03.820286 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned
Nov 23 23:00:03.821477 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Nov 23 23:00:03.821573 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned
Nov 23 23:00:03.821643 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Nov 23 23:00:03.821707 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned
Nov 23 23:00:03.821767 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Nov 23 23:00:03.821830 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned
Nov 23 23:00:03.821889 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned
Nov 23 23:00:03.821971 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned
Nov 23 23:00:03.822045 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned
Nov 23 23:00:03.822107 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Nov 23 23:00:03.822171 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned
Nov 23 23:00:03.822233 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 23 23:00:03.822292 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Nov 23 23:00:03.823491 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Nov 23 23:00:03.823574 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Nov 23 23:00:03.823645 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned
Nov 23 23:00:03.823708 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 23 23:00:03.823777 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Nov 23 23:00:03.823835 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Nov 23 23:00:03.823895 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Nov 23 23:00:03.823984 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned
Nov 23 23:00:03.824053 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned
Nov 23 23:00:03.824118 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 23 23:00:03.824180 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Nov 23 23:00:03.824245 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Nov 23 23:00:03.824306 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Nov 23 23:00:03.825817 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned
Nov 23 23:00:03.825901 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 23 23:00:03.825987 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Nov 23 23:00:03.826053 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Nov 23 23:00:03.826112 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Nov 23 23:00:03.826191 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned
Nov 23 23:00:03.826254 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned
Nov 23 23:00:03.826317 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 23 23:00:03.826400 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Nov 23 23:00:03.826462 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Nov 23 23:00:03.826524 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Nov 23 23:00:03.826594 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned
Nov 23 23:00:03.826660 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned
Nov 23 23:00:03.826725 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 23 23:00:03.826798 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Nov 23 23:00:03.826859 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Nov 23 23:00:03.826918 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Nov 23 23:00:03.827011 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned
Nov 23 23:00:03.827091 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned
Nov 23 23:00:03.827160 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned
Nov 23 23:00:03.827224 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 23 23:00:03.827289 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Nov 23 23:00:03.827416 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Nov 23 23:00:03.827486 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Nov 23 23:00:03.827549 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 23 23:00:03.827610 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Nov 23 23:00:03.827669 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Nov 23 23:00:03.827728 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Nov 23 23:00:03.827791 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 23 23:00:03.827852 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Nov 23
23:00:03.827914 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Nov 23 23:00:03.828022 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Nov 23 23:00:03.828092 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Nov 23 23:00:03.828147 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 23 23:00:03.828201 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Nov 23 23:00:03.828269 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Nov 23 23:00:03.828326 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Nov 23 23:00:03.828401 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Nov 23 23:00:03.828467 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Nov 23 23:00:03.828523 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Nov 23 23:00:03.828580 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Nov 23 23:00:03.828646 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Nov 23 23:00:03.828702 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Nov 23 23:00:03.828760 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Nov 23 23:00:03.828827 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Nov 23 23:00:03.828882 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Nov 23 23:00:03.828938 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Nov 23 23:00:03.829022 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Nov 23 23:00:03.829080 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Nov 23 23:00:03.829134 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Nov 23 23:00:03.829199 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Nov 23 23:00:03.829255 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Nov 23 23:00:03.829310 kernel: 
pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Nov 23 23:00:03.829425 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Nov 23 23:00:03.829485 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Nov 23 23:00:03.829549 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Nov 23 23:00:03.829616 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Nov 23 23:00:03.829674 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Nov 23 23:00:03.829728 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Nov 23 23:00:03.829792 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Nov 23 23:00:03.829853 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Nov 23 23:00:03.829911 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Nov 23 23:00:03.829921 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 23 23:00:03.829929 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 23 23:00:03.829939 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 23 23:00:03.829982 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 23 23:00:03.829991 kernel: iommu: Default domain type: Translated Nov 23 23:00:03.829999 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 23 23:00:03.830007 kernel: efivars: Registered efivars operations Nov 23 23:00:03.830015 kernel: vgaarb: loaded Nov 23 23:00:03.830023 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 23 23:00:03.830031 kernel: VFS: Disk quotas dquot_6.6.0 Nov 23 23:00:03.830038 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 23 23:00:03.830049 kernel: pnp: PnP ACPI init Nov 23 23:00:03.830140 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Nov 23 23:00:03.830152 kernel: pnp: PnP ACPI: found 1 devices Nov 23 23:00:03.830159 kernel: NET: Registered PF_INET 
protocol family Nov 23 23:00:03.830167 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 23 23:00:03.830175 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 23 23:00:03.830183 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 23 23:00:03.830191 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 23 23:00:03.830200 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 23 23:00:03.830208 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 23 23:00:03.830216 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 23 23:00:03.830224 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 23 23:00:03.830231 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 23 23:00:03.830299 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Nov 23 23:00:03.830311 kernel: PCI: CLS 0 bytes, default 64 Nov 23 23:00:03.830319 kernel: kvm [1]: HYP mode not available Nov 23 23:00:03.830340 kernel: Initialise system trusted keyrings Nov 23 23:00:03.830352 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 23 23:00:03.830359 kernel: Key type asymmetric registered Nov 23 23:00:03.830366 kernel: Asymmetric key parser 'x509' registered Nov 23 23:00:03.830374 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 23 23:00:03.830381 kernel: io scheduler mq-deadline registered Nov 23 23:00:03.830389 kernel: io scheduler kyber registered Nov 23 23:00:03.830397 kernel: io scheduler bfq registered Nov 23 23:00:03.830406 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Nov 23 23:00:03.830477 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Nov 23 23:00:03.830543 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Nov 23 23:00:03.830602 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ 
MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 23:00:03.830665 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Nov 23 23:00:03.830728 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Nov 23 23:00:03.830788 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 23:00:03.830853 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Nov 23 23:00:03.830940 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Nov 23 23:00:03.831019 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 23:00:03.831088 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Nov 23 23:00:03.831149 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Nov 23 23:00:03.831207 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 23:00:03.831270 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Nov 23 23:00:03.831394 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Nov 23 23:00:03.831466 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 23:00:03.831530 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Nov 23 23:00:03.831596 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Nov 23 23:00:03.831655 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 23:00:03.831720 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Nov 23 23:00:03.831780 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Nov 23 23:00:03.831840 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 23:00:03.831902 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Nov 23 23:00:03.831977 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Nov 23 23:00:03.832038 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 23:00:03.832052 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Nov 23 23:00:03.832113 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Nov 23 23:00:03.832171 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Nov 23 23:00:03.832229 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 23:00:03.832239 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 23 23:00:03.832247 kernel: ACPI: button: Power Button [PWRB] Nov 23 23:00:03.832254 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 23 23:00:03.832318 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Nov 23 23:00:03.832424 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Nov 23 23:00:03.832436 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 23 23:00:03.832444 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Nov 23 23:00:03.832506 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Nov 23 23:00:03.832517 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Nov 23 23:00:03.832524 kernel: thunder_xcv, ver 1.0 Nov 23 23:00:03.832532 kernel: thunder_bgx, ver 1.0 Nov 23 23:00:03.832539 kernel: nicpf, ver 1.0 Nov 23 23:00:03.832546 kernel: nicvf, ver 1.0 Nov 23 23:00:03.832625 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 23 23:00:03.832686 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-23T23:00:03 UTC (1763938803) Nov 23 23:00:03.832696 kernel: hid: raw HID events 
driver (C) Jiri Kosina Nov 23 23:00:03.832704 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Nov 23 23:00:03.832712 kernel: watchdog: NMI not fully supported Nov 23 23:00:03.832719 kernel: watchdog: Hard watchdog permanently disabled Nov 23 23:00:03.832727 kernel: NET: Registered PF_INET6 protocol family Nov 23 23:00:03.832734 kernel: Segment Routing with IPv6 Nov 23 23:00:03.832744 kernel: In-situ OAM (IOAM) with IPv6 Nov 23 23:00:03.832751 kernel: NET: Registered PF_PACKET protocol family Nov 23 23:00:03.832759 kernel: Key type dns_resolver registered Nov 23 23:00:03.832766 kernel: registered taskstats version 1 Nov 23 23:00:03.832774 kernel: Loading compiled-in X.509 certificates Nov 23 23:00:03.832781 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 00c36da29593053a7da9cd3c5945ae69451ce339' Nov 23 23:00:03.832788 kernel: Demotion targets for Node 0: null Nov 23 23:00:03.832796 kernel: Key type .fscrypt registered Nov 23 23:00:03.832803 kernel: Key type fscrypt-provisioning registered Nov 23 23:00:03.832810 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 23 23:00:03.832819 kernel: ima: Allocated hash algorithm: sha1 Nov 23 23:00:03.832826 kernel: ima: No architecture policies found Nov 23 23:00:03.832834 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 23 23:00:03.832841 kernel: clk: Disabling unused clocks Nov 23 23:00:03.832848 kernel: PM: genpd: Disabling unused power domains Nov 23 23:00:03.832856 kernel: Warning: unable to open an initial console. Nov 23 23:00:03.832863 kernel: Freeing unused kernel memory: 39552K Nov 23 23:00:03.832871 kernel: Run /init as init process Nov 23 23:00:03.832878 kernel: with arguments: Nov 23 23:00:03.832891 kernel: /init Nov 23 23:00:03.832899 kernel: with environment: Nov 23 23:00:03.832907 kernel: HOME=/ Nov 23 23:00:03.832915 kernel: TERM=linux Nov 23 23:00:03.832925 systemd[1]: Successfully made /usr/ read-only. 
Nov 23 23:00:03.832936 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 23 23:00:03.832979 systemd[1]: Detected virtualization kvm. Nov 23 23:00:03.832991 systemd[1]: Detected architecture arm64. Nov 23 23:00:03.832999 systemd[1]: Running in initrd. Nov 23 23:00:03.833007 systemd[1]: No hostname configured, using default hostname. Nov 23 23:00:03.833015 systemd[1]: Hostname set to . Nov 23 23:00:03.833022 systemd[1]: Initializing machine ID from VM UUID. Nov 23 23:00:03.833030 systemd[1]: Queued start job for default target initrd.target. Nov 23 23:00:03.833038 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 23:00:03.833046 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 23:00:03.833056 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 23 23:00:03.833064 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 23 23:00:03.833072 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 23 23:00:03.833081 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 23 23:00:03.833090 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 23 23:00:03.833098 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 23 23:00:03.833106 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Nov 23 23:00:03.833116 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 23 23:00:03.833126 systemd[1]: Reached target paths.target - Path Units. Nov 23 23:00:03.833134 systemd[1]: Reached target slices.target - Slice Units. Nov 23 23:00:03.833141 systemd[1]: Reached target swap.target - Swaps. Nov 23 23:00:03.833150 systemd[1]: Reached target timers.target - Timer Units. Nov 23 23:00:03.833158 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 23 23:00:03.833166 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 23 23:00:03.833174 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 23 23:00:03.833182 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 23 23:00:03.833191 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 23 23:00:03.833200 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 23 23:00:03.833208 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 23:00:03.833216 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 23:00:03.833223 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 23 23:00:03.833231 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 23 23:00:03.833239 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 23 23:00:03.833247 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 23 23:00:03.833257 systemd[1]: Starting systemd-fsck-usr.service... Nov 23 23:00:03.833265 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 23 23:00:03.833273 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Nov 23 23:00:03.833281 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:00:03.833289 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 23 23:00:03.833297 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 23:00:03.833349 systemd-journald[246]: Collecting audit messages is disabled. Nov 23 23:00:03.833387 systemd[1]: Finished systemd-fsck-usr.service. Nov 23 23:00:03.833399 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 23 23:00:03.833407 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 23 23:00:03.833416 kernel: Bridge firewalling registered Nov 23 23:00:03.833424 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 23 23:00:03.833432 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 23 23:00:03.833440 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:00:03.833448 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 23 23:00:03.833458 systemd-journald[246]: Journal started Nov 23 23:00:03.833479 systemd-journald[246]: Runtime Journal (/run/log/journal/5a9165bfac794460845ebb34253aa225) is 8M, max 76.5M, 68.5M free. Nov 23 23:00:03.790436 systemd-modules-load[247]: Inserted module 'overlay' Nov 23 23:00:03.815793 systemd-modules-load[247]: Inserted module 'br_netfilter' Nov 23 23:00:03.836738 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 23 23:00:03.841367 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 23 23:00:03.844015 systemd[1]: Started systemd-journald.service - Journal Service. 
Nov 23 23:00:03.848020 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 23 23:00:03.852050 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 23 23:00:03.858409 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 23:00:03.871612 systemd-tmpfiles[271]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 23 23:00:03.872622 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 23 23:00:03.874529 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 23 23:00:03.879468 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 23:00:03.884580 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 23 23:00:03.910491 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=4db094b704dd398addf25219e01d6d8f197b31dbf6377199102cc61dad0e4bb2 Nov 23 23:00:03.931498 systemd-resolved[285]: Positive Trust Anchors: Nov 23 23:00:03.931516 systemd-resolved[285]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 23:00:03.931549 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 23:00:03.946939 systemd-resolved[285]: Defaulting to hostname 'linux'. Nov 23 23:00:03.948525 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 23 23:00:03.949229 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 23 23:00:04.020385 kernel: SCSI subsystem initialized Nov 23 23:00:04.024382 kernel: Loading iSCSI transport class v2.0-870. Nov 23 23:00:04.032543 kernel: iscsi: registered transport (tcp) Nov 23 23:00:04.045428 kernel: iscsi: registered transport (qla4xxx) Nov 23 23:00:04.045535 kernel: QLogic iSCSI HBA Driver Nov 23 23:00:04.069487 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 23 23:00:04.093801 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 23:00:04.099774 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 23 23:00:04.146910 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 23 23:00:04.149703 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Nov 23 23:00:04.210380 kernel: raid6: neonx8 gen() 15670 MB/s Nov 23 23:00:04.227406 kernel: raid6: neonx4 gen() 15735 MB/s Nov 23 23:00:04.244428 kernel: raid6: neonx2 gen() 13160 MB/s Nov 23 23:00:04.261394 kernel: raid6: neonx1 gen() 10395 MB/s Nov 23 23:00:04.278392 kernel: raid6: int64x8 gen() 6870 MB/s Nov 23 23:00:04.295386 kernel: raid6: int64x4 gen() 7312 MB/s Nov 23 23:00:04.312385 kernel: raid6: int64x2 gen() 6077 MB/s Nov 23 23:00:04.329405 kernel: raid6: int64x1 gen() 5025 MB/s Nov 23 23:00:04.329496 kernel: raid6: using algorithm neonx4 gen() 15735 MB/s Nov 23 23:00:04.346403 kernel: raid6: .... xor() 12287 MB/s, rmw enabled Nov 23 23:00:04.346485 kernel: raid6: using neon recovery algorithm Nov 23 23:00:04.351375 kernel: xor: measuring software checksum speed Nov 23 23:00:04.351438 kernel: 8regs : 20733 MB/sec Nov 23 23:00:04.352552 kernel: 32regs : 19487 MB/sec Nov 23 23:00:04.352619 kernel: arm64_neon : 28003 MB/sec Nov 23 23:00:04.352637 kernel: xor: using function: arm64_neon (28003 MB/sec) Nov 23 23:00:04.406400 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 23 23:00:04.416491 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 23 23:00:04.419576 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 23:00:04.457539 systemd-udevd[494]: Using default interface naming scheme 'v255'. Nov 23 23:00:04.462911 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 23:00:04.471522 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 23 23:00:04.497417 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation Nov 23 23:00:04.529780 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 23:00:04.534524 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 23 23:00:04.601193 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Nov 23 23:00:04.605989 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 23 23:00:04.699350 kernel: ACPI: bus type USB registered Nov 23 23:00:04.699402 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Nov 23 23:00:04.701524 kernel: usbcore: registered new interface driver usbfs Nov 23 23:00:04.701567 kernel: usbcore: registered new interface driver hub Nov 23 23:00:04.702360 kernel: scsi host0: Virtio SCSI HBA Nov 23 23:00:04.704673 kernel: usbcore: registered new device driver usb Nov 23 23:00:04.707346 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 23 23:00:04.707419 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Nov 23 23:00:04.745358 kernel: sr 0:0:0:0: Power-on or device reset occurred Nov 23 23:00:04.747361 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Nov 23 23:00:04.747552 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 23 23:00:04.750354 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Nov 23 23:00:04.756407 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 23:00:04.757149 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:00:04.759197 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:00:04.763561 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:00:04.766552 kernel: sd 0:0:0:1: Power-on or device reset occurred Nov 23 23:00:04.770366 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Nov 23 23:00:04.770533 kernel: sd 0:0:0:1: [sda] Write Protect is off Nov 23 23:00:04.770629 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Nov 23 23:00:04.770706 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 23 23:00:04.779603 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Nov 23 23:00:04.779688 kernel: GPT:17805311 != 80003071 Nov 23 23:00:04.779712 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 23 23:00:04.779733 kernel: GPT:17805311 != 80003071 Nov 23 23:00:04.780453 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 23 23:00:04.780510 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 23 23:00:04.781814 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Nov 23 23:00:04.781722 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 23 23:00:04.786349 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Nov 23 23:00:04.786548 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Nov 23 23:00:04.787391 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 23 23:00:04.792000 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Nov 23 23:00:04.792219 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Nov 23 23:00:04.793736 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Nov 23 23:00:04.797819 kernel: hub 1-0:1.0: USB hub found Nov 23 23:00:04.798011 kernel: hub 1-0:1.0: 4 ports detected Nov 23 23:00:04.800743 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Nov 23 23:00:04.800964 kernel: hub 2-0:1.0: USB hub found Nov 23 23:00:04.801064 kernel: hub 2-0:1.0: 4 ports detected Nov 23 23:00:04.809383 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:00:04.868818 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Nov 23 23:00:04.887474 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Nov 23 23:00:04.895176 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
Nov 23 23:00:04.896036 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Nov 23 23:00:04.898151 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 23 23:00:04.908705 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 23 23:00:04.911770 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 23:00:04.912511 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 23:00:04.913930 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 23:00:04.917493 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 23 23:00:04.920498 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 23 23:00:04.936123 disk-uuid[602]: Primary Header is updated. Nov 23 23:00:04.936123 disk-uuid[602]: Secondary Entries is updated. Nov 23 23:00:04.936123 disk-uuid[602]: Secondary Header is updated. Nov 23 23:00:04.941897 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Nov 23 23:00:04.947359 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 23 23:00:05.035359 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 23 23:00:05.173682 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Nov 23 23:00:05.173762 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Nov 23 23:00:05.174007 kernel: usbcore: registered new interface driver usbhid Nov 23 23:00:05.174562 kernel: usbhid: USB HID core driver Nov 23 23:00:05.276396 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Nov 23 23:00:05.401375 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Nov 23 23:00:05.454381 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Nov 23 23:00:05.966413 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 23 23:00:05.967603 disk-uuid[605]: The operation has completed successfully. Nov 23 23:00:06.020445 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 23 23:00:06.020561 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 23 23:00:06.049253 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 23 23:00:06.066478 sh[626]: Success Nov 23 23:00:06.083083 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 23 23:00:06.083156 kernel: device-mapper: uevent: version 1.0.3 Nov 23 23:00:06.083178 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 23 23:00:06.094362 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Nov 23 23:00:06.145356 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 23 23:00:06.146887 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 23 23:00:06.161159 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 23 23:00:06.174353 kernel: BTRFS: device fsid 5fd06d80-8dd4-4ca0-aa0c-93ddab5f4498 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (639) Nov 23 23:00:06.175770 kernel: BTRFS info (device dm-0): first mount of filesystem 5fd06d80-8dd4-4ca0-aa0c-93ddab5f4498 Nov 23 23:00:06.175979 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:00:06.184373 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 23 23:00:06.184451 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 23 23:00:06.184476 kernel: BTRFS info (device dm-0): enabling free space tree Nov 23 23:00:06.186711 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 23 23:00:06.187400 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 23 23:00:06.189422 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 23 23:00:06.190606 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 23 23:00:06.192636 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Nov 23 23:00:06.226396 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (674)
Nov 23 23:00:06.227888 kernel: BTRFS info (device sda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3
Nov 23 23:00:06.228006 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 23 23:00:06.232626 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 23 23:00:06.232684 kernel: BTRFS info (device sda6): turning on async discard
Nov 23 23:00:06.232695 kernel: BTRFS info (device sda6): enabling free space tree
Nov 23 23:00:06.240389 kernel: BTRFS info (device sda6): last unmount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3
Nov 23 23:00:06.240824 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 23 23:00:06.243264 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 23 23:00:06.351677 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 23 23:00:06.354198 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 23 23:00:06.388675 ignition[719]: Ignition 2.22.0
Nov 23 23:00:06.389368 ignition[719]: Stage: fetch-offline
Nov 23 23:00:06.389412 ignition[719]: no configs at "/usr/lib/ignition/base.d"
Nov 23 23:00:06.389421 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 23 23:00:06.389507 ignition[719]: parsed url from cmdline: ""
Nov 23 23:00:06.393269 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 23 23:00:06.389511 ignition[719]: no config URL provided
Nov 23 23:00:06.389516 ignition[719]: reading system config file "/usr/lib/ignition/user.ign"
Nov 23 23:00:06.389522 ignition[719]: no config at "/usr/lib/ignition/user.ign"
Nov 23 23:00:06.389528 ignition[719]: failed to fetch config: resource requires networking
Nov 23 23:00:06.389682 ignition[719]: Ignition finished successfully
Nov 23 23:00:06.398907 systemd-networkd[815]: lo: Link UP
Nov 23 23:00:06.398911 systemd-networkd[815]: lo: Gained carrier
Nov 23 23:00:06.401230 systemd-networkd[815]: Enumeration completed
Nov 23 23:00:06.401693 systemd-networkd[815]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:00:06.401697 systemd-networkd[815]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 23 23:00:06.402264 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 23 23:00:06.405295 systemd[1]: Reached target network.target - Network.
Nov 23 23:00:06.406481 systemd-networkd[815]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:00:06.406485 systemd-networkd[815]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 23 23:00:06.407089 systemd-networkd[815]: eth0: Link UP
Nov 23 23:00:06.407232 systemd-networkd[815]: eth1: Link UP
Nov 23 23:00:06.407826 systemd-networkd[815]: eth0: Gained carrier
Nov 23 23:00:06.407838 systemd-networkd[815]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:00:06.409837 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 23 23:00:06.417152 systemd-networkd[815]: eth1: Gained carrier
Nov 23 23:00:06.417174 systemd-networkd[815]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:00:06.443523 systemd-networkd[815]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Nov 23 23:00:06.450504 ignition[819]: Ignition 2.22.0
Nov 23 23:00:06.450529 ignition[819]: Stage: fetch
Nov 23 23:00:06.450743 ignition[819]: no configs at "/usr/lib/ignition/base.d"
Nov 23 23:00:06.450761 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 23 23:00:06.450885 ignition[819]: parsed url from cmdline: ""
Nov 23 23:00:06.450893 ignition[819]: no config URL provided
Nov 23 23:00:06.450902 ignition[819]: reading system config file "/usr/lib/ignition/user.ign"
Nov 23 23:00:06.450914 ignition[819]: no config at "/usr/lib/ignition/user.ign"
Nov 23 23:00:06.450999 ignition[819]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Nov 23 23:00:06.451694 ignition[819]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 23 23:00:06.477478 systemd-networkd[815]: eth0: DHCPv4 address 159.69.184.20/32, gateway 172.31.1.1 acquired from 172.31.1.1
Nov 23 23:00:06.652474 ignition[819]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Nov 23 23:00:06.659388 ignition[819]: GET result: OK
Nov 23 23:00:06.660805 ignition[819]: parsing config with SHA512: 5bfd88c0f2fbdc99c39b26fafb152d1b225dae523acde4c256c8b8dbe61691facd28f1e0828e2ec22568711c19b911b363b2ab8f51d858fdc4fa3eb043a08868
Nov 23 23:00:06.669524 unknown[819]: fetched base config from "system"
Nov 23 23:00:06.670044 ignition[819]: fetch: fetch complete
Nov 23 23:00:06.669535 unknown[819]: fetched base config from "system"
Nov 23 23:00:06.670049 ignition[819]: fetch: fetch passed
Nov 23 23:00:06.669544 unknown[819]: fetched user config from "hetzner"
Nov 23 23:00:06.670113 ignition[819]: Ignition finished successfully
Nov 23 23:00:06.674220 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 23 23:00:06.677514 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 23 23:00:06.710419 ignition[827]: Ignition 2.22.0
Nov 23 23:00:06.711075 ignition[827]: Stage: kargs
Nov 23 23:00:06.711304 ignition[827]: no configs at "/usr/lib/ignition/base.d"
Nov 23 23:00:06.711314 ignition[827]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 23 23:00:06.712212 ignition[827]: kargs: kargs passed
Nov 23 23:00:06.712269 ignition[827]: Ignition finished successfully
Nov 23 23:00:06.714693 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 23 23:00:06.717475 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 23 23:00:06.749528 ignition[833]: Ignition 2.22.0
Nov 23 23:00:06.750241 ignition[833]: Stage: disks
Nov 23 23:00:06.750503 ignition[833]: no configs at "/usr/lib/ignition/base.d"
Nov 23 23:00:06.750514 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 23 23:00:06.753118 ignition[833]: disks: disks passed
Nov 23 23:00:06.753602 ignition[833]: Ignition finished successfully
Nov 23 23:00:06.756891 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 23 23:00:06.758148 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 23 23:00:06.759545 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 23 23:00:06.760672 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 23 23:00:06.761167 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 23 23:00:06.762453 systemd[1]: Reached target basic.target - Basic System.
Nov 23 23:00:06.764174 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 23 23:00:06.806210 systemd-fsck[841]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Nov 23 23:00:06.811541 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 23 23:00:06.816788 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 23 23:00:06.894368 kernel: EXT4-fs (sda9): mounted filesystem fa3f8731-d4e3-4e51-b6db-fa404206cf07 r/w with ordered data mode. Quota mode: none.
Nov 23 23:00:06.895901 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 23 23:00:06.898306 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 23 23:00:06.903440 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 23 23:00:06.906551 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 23 23:00:06.910351 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 23 23:00:06.910967 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 23 23:00:06.910998 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 23 23:00:06.922623 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 23 23:00:06.924647 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 23 23:00:06.939355 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (849)
Nov 23 23:00:06.947119 kernel: BTRFS info (device sda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3
Nov 23 23:00:06.947218 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 23 23:00:06.955850 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 23 23:00:06.955915 kernel: BTRFS info (device sda6): turning on async discard
Nov 23 23:00:06.955941 kernel: BTRFS info (device sda6): enabling free space tree
Nov 23 23:00:06.960885 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 23 23:00:06.977383 coreos-metadata[851]: Nov 23 23:00:06.976 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Nov 23 23:00:06.979018 coreos-metadata[851]: Nov 23 23:00:06.978 INFO Fetch successful
Nov 23 23:00:06.982445 coreos-metadata[851]: Nov 23 23:00:06.980 INFO wrote hostname ci-4459-2-1-9-52b78fad11 to /sysroot/etc/hostname
Nov 23 23:00:06.987164 initrd-setup-root[876]: cut: /sysroot/etc/passwd: No such file or directory
Nov 23 23:00:06.988915 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 23 23:00:06.996146 initrd-setup-root[884]: cut: /sysroot/etc/group: No such file or directory
Nov 23 23:00:07.001309 initrd-setup-root[891]: cut: /sysroot/etc/shadow: No such file or directory
Nov 23 23:00:07.005508 initrd-setup-root[898]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 23 23:00:07.115433 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 23 23:00:07.117278 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 23 23:00:07.121026 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 23 23:00:07.139364 kernel: BTRFS info (device sda6): last unmount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3
Nov 23 23:00:07.158627 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 23 23:00:07.172737 ignition[967]: INFO : Ignition 2.22.0
Nov 23 23:00:07.172737 ignition[967]: INFO : Stage: mount
Nov 23 23:00:07.174664 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 23 23:00:07.174664 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 23 23:00:07.174664 ignition[967]: INFO : mount: mount passed
Nov 23 23:00:07.174664 ignition[967]: INFO : Ignition finished successfully
Nov 23 23:00:07.177437 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 23 23:00:07.177845 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 23 23:00:07.180517 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 23 23:00:07.203545 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 23 23:00:07.228394 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (978)
Nov 23 23:00:07.229948 kernel: BTRFS info (device sda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3
Nov 23 23:00:07.230030 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 23 23:00:07.236769 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 23 23:00:07.236855 kernel: BTRFS info (device sda6): turning on async discard
Nov 23 23:00:07.236885 kernel: BTRFS info (device sda6): enabling free space tree
Nov 23 23:00:07.239732 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 23 23:00:07.272138 ignition[995]: INFO : Ignition 2.22.0
Nov 23 23:00:07.272892 ignition[995]: INFO : Stage: files
Nov 23 23:00:07.273488 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 23 23:00:07.274068 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 23 23:00:07.275775 ignition[995]: DEBUG : files: compiled without relabeling support, skipping
Nov 23 23:00:07.277681 ignition[995]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 23 23:00:07.278422 ignition[995]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 23 23:00:07.283233 ignition[995]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 23 23:00:07.284671 ignition[995]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 23 23:00:07.287258 unknown[995]: wrote ssh authorized keys file for user: core
Nov 23 23:00:07.289086 ignition[995]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 23 23:00:07.293207 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 23 23:00:07.294633 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Nov 23 23:00:07.398746 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 23 23:00:07.482732 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Nov 23 23:00:07.482732 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 23 23:00:07.485463 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 23 23:00:07.485463 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 23 23:00:07.485463 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 23 23:00:07.485463 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 23 23:00:07.485463 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 23 23:00:07.485463 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 23 23:00:07.485463 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 23 23:00:07.493600 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 23 23:00:07.493600 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 23 23:00:07.493600 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 23 23:00:07.500079 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 23 23:00:07.500079 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 23 23:00:07.500079 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Nov 23 23:00:07.617523 systemd-networkd[815]: eth0: Gained IPv6LL
Nov 23 23:00:07.687883 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 23 23:00:08.001589 systemd-networkd[815]: eth1: Gained IPv6LL
Nov 23 23:00:08.844564 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Nov 23 23:00:08.844564 ignition[995]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 23 23:00:08.848225 ignition[995]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 23 23:00:08.852383 ignition[995]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 23 23:00:08.852383 ignition[995]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 23 23:00:08.852383 ignition[995]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 23 23:00:08.852383 ignition[995]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 23 23:00:08.852383 ignition[995]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 23 23:00:08.852383 ignition[995]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 23 23:00:08.852383 ignition[995]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Nov 23 23:00:08.852383 ignition[995]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Nov 23 23:00:08.864165 ignition[995]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 23 23:00:08.864165 ignition[995]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 23 23:00:08.864165 ignition[995]: INFO : files: files passed
Nov 23 23:00:08.864165 ignition[995]: INFO : Ignition finished successfully
Nov 23 23:00:08.856366 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 23 23:00:08.858904 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 23 23:00:08.864544 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 23 23:00:08.882665 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 23 23:00:08.882794 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 23 23:00:08.890378 initrd-setup-root-after-ignition[1025]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 23 23:00:08.890378 initrd-setup-root-after-ignition[1025]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 23 23:00:08.892457 initrd-setup-root-after-ignition[1029]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 23 23:00:08.896370 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 23 23:00:08.897514 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 23 23:00:08.899316 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 23 23:00:08.957941 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 23 23:00:08.958096 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 23 23:00:08.959952 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 23 23:00:08.961634 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 23 23:00:08.963663 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 23 23:00:08.964730 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 23 23:00:09.005420 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 23 23:00:09.010965 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 23 23:00:09.034167 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 23 23:00:09.035181 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 23 23:00:09.036845 systemd[1]: Stopped target timers.target - Timer Units.
Nov 23 23:00:09.037943 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 23 23:00:09.038066 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 23 23:00:09.039612 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 23 23:00:09.040241 systemd[1]: Stopped target basic.target - Basic System.
Nov 23 23:00:09.041341 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 23 23:00:09.042465 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 23 23:00:09.043479 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 23 23:00:09.044659 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 23 23:00:09.045970 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 23 23:00:09.047054 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 23 23:00:09.048243 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 23 23:00:09.049301 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 23 23:00:09.050504 systemd[1]: Stopped target swap.target - Swaps.
Nov 23 23:00:09.051457 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 23 23:00:09.051586 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 23 23:00:09.052943 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 23 23:00:09.054078 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 23 23:00:09.055146 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 23 23:00:09.058414 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 23 23:00:09.060303 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 23 23:00:09.060601 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 23 23:00:09.063186 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 23 23:00:09.063429 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 23 23:00:09.065214 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 23 23:00:09.065339 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 23 23:00:09.066506 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 23 23:00:09.066601 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 23 23:00:09.068440 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 23 23:00:09.072294 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 23 23:00:09.074482 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 23 23:00:09.074638 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 23 23:00:09.078235 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 23 23:00:09.078369 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 23 23:00:09.089445 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 23 23:00:09.090547 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 23 23:00:09.098755 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 23 23:00:09.102860 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 23 23:00:09.103570 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 23 23:00:09.104965 ignition[1049]: INFO : Ignition 2.22.0
Nov 23 23:00:09.104965 ignition[1049]: INFO : Stage: umount
Nov 23 23:00:09.104965 ignition[1049]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 23 23:00:09.104965 ignition[1049]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 23 23:00:09.104965 ignition[1049]: INFO : umount: umount passed
Nov 23 23:00:09.104965 ignition[1049]: INFO : Ignition finished successfully
Nov 23 23:00:09.106604 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 23 23:00:09.106713 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 23 23:00:09.107872 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 23 23:00:09.108016 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 23 23:00:09.110279 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 23 23:00:09.111417 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 23 23:00:09.112001 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 23 23:00:09.112043 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 23 23:00:09.112985 systemd[1]: Stopped target network.target - Network.
Nov 23 23:00:09.114231 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 23 23:00:09.114299 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 23 23:00:09.115480 systemd[1]: Stopped target paths.target - Path Units.
Nov 23 23:00:09.116265 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 23 23:00:09.120467 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 23 23:00:09.121650 systemd[1]: Stopped target slices.target - Slice Units.
Nov 23 23:00:09.122847 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 23 23:00:09.123781 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 23 23:00:09.123824 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 23 23:00:09.125325 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 23 23:00:09.125381 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 23 23:00:09.126235 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 23 23:00:09.126287 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 23 23:00:09.127177 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 23 23:00:09.127211 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 23 23:00:09.128137 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 23 23:00:09.128184 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 23 23:00:09.129312 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 23 23:00:09.130687 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 23 23:00:09.137312 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 23 23:00:09.138990 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 23 23:00:09.144655 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Nov 23 23:00:09.144995 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 23 23:00:09.145147 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 23 23:00:09.149364 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Nov 23 23:00:09.151156 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 23 23:00:09.152361 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 23 23:00:09.152426 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 23 23:00:09.154719 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 23 23:00:09.155859 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 23 23:00:09.155961 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 23 23:00:09.156771 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 23 23:00:09.156816 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 23 23:00:09.159666 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 23 23:00:09.159725 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 23 23:00:09.161471 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 23 23:00:09.161523 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 23 23:00:09.165157 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 23 23:00:09.168352 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 23 23:00:09.168448 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 23 23:00:09.181832 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 23 23:00:09.182055 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 23 23:00:09.185619 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 23 23:00:09.186443 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 23 23:00:09.188089 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 23 23:00:09.188127 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 23 23:00:09.188904 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 23 23:00:09.188994 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 23 23:00:09.193478 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 23 23:00:09.193540 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 23 23:00:09.195508 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 23 23:00:09.195567 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 23 23:00:09.198639 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 23 23:00:09.201538 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 23 23:00:09.201614 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 23 23:00:09.206716 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 23 23:00:09.206788 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 23 23:00:09.210214 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 23 23:00:09.210292 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 23 23:00:09.213525 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 23 23:00:09.213606 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 23 23:00:09.215512 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 23:00:09.215564 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:00:09.219087 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Nov 23 23:00:09.219143 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Nov 23 23:00:09.219172 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 23 23:00:09.219203 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 23 23:00:09.219590 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 23 23:00:09.219794 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 23 23:00:09.220980 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 23 23:00:09.221067 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 23 23:00:09.223000 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 23 23:00:09.224766 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 23 23:00:09.260848 systemd[1]: Switching root.
Nov 23 23:00:09.305736 systemd-journald[246]: Journal stopped
Nov 23 23:00:10.319271 systemd-journald[246]: Received SIGTERM from PID 1 (systemd).
Nov 23 23:00:10.322294 kernel: SELinux: policy capability network_peer_controls=1
Nov 23 23:00:10.322360 kernel: SELinux: policy capability open_perms=1
Nov 23 23:00:10.322404 kernel: SELinux: policy capability extended_socket_class=1
Nov 23 23:00:10.322417 kernel: SELinux: policy capability always_check_network=0
Nov 23 23:00:10.322426 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 23 23:00:10.322437 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 23 23:00:10.322447 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 23 23:00:10.322461 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 23 23:00:10.322471 kernel: SELinux: policy capability userspace_initial_context=0
Nov 23 23:00:10.322483 kernel: audit: type=1403 audit(1763938809.483:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 23 23:00:10.322499 systemd[1]: Successfully loaded SELinux policy in 53.104ms.
Nov 23 23:00:10.322520 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.196ms.
Nov 23 23:00:10.322532 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 23 23:00:10.322543 systemd[1]: Detected virtualization kvm.
Nov 23 23:00:10.322553 systemd[1]: Detected architecture arm64.
Nov 23 23:00:10.322564 systemd[1]: Detected first boot.
Nov 23 23:00:10.322578 systemd[1]: Hostname set to .
Nov 23 23:00:10.322591 systemd[1]: Initializing machine ID from VM UUID.
Nov 23 23:00:10.322606 kernel: NET: Registered PF_VSOCK protocol family
Nov 23 23:00:10.322616 zram_generator::config[1094]: No configuration found.
Nov 23 23:00:10.322631 systemd[1]: Populated /etc with preset unit settings.
Nov 23 23:00:10.322645 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Nov 23 23:00:10.322658 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 23 23:00:10.322671 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 23 23:00:10.322682 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 23 23:00:10.322693 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 23 23:00:10.322704 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 23 23:00:10.322714 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 23 23:00:10.322725 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 23 23:00:10.322735 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 23 23:00:10.322745 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 23 23:00:10.322757 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 23 23:00:10.322767 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 23 23:00:10.322782 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 23 23:00:10.322793 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 23 23:00:10.322803 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 23 23:00:10.322817 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 23 23:00:10.322828 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 23 23:00:10.322838 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 23 23:00:10.322850 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Nov 23 23:00:10.322861 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 23 23:00:10.322872 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 23 23:00:10.322882 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 23 23:00:10.322892 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 23 23:00:10.322903 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 23 23:00:10.322932 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 23 23:00:10.322946 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 23 23:00:10.322958 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 23 23:00:10.322968 systemd[1]: Reached target slices.target - Slice Units.
Nov 23 23:00:10.322980 systemd[1]: Reached target swap.target - Swaps.
Nov 23 23:00:10.322991 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 23 23:00:10.323002 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 23 23:00:10.323012 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 23 23:00:10.323023 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 23 23:00:10.323033 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 23 23:00:10.323044 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 23 23:00:10.323055 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 23 23:00:10.323066 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 23 23:00:10.323076 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 23 23:00:10.323087 systemd[1]: Mounting media.mount - External Media Directory...
Nov 23 23:00:10.323097 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 23 23:00:10.323107 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 23 23:00:10.323117 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 23 23:00:10.323128 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 23 23:00:10.323140 systemd[1]: Reached target machines.target - Containers.
Nov 23 23:00:10.323151 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 23 23:00:10.323161 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 23 23:00:10.323172 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 23 23:00:10.323183 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 23 23:00:10.323193 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 23 23:00:10.323204 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 23 23:00:10.323216 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 23 23:00:10.323227 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 23 23:00:10.323237 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 23 23:00:10.323249 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 23 23:00:10.323259 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 23 23:00:10.323269 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 23 23:00:10.323279 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 23 23:00:10.323289 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 23 23:00:10.323301 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 23 23:00:10.323313 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 23 23:00:10.323324 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 23 23:00:10.323347 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 23 23:00:10.323358 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 23 23:00:10.323374 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 23 23:00:10.323385 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 23 23:00:10.323395 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 23 23:00:10.323405 systemd[1]: Stopped verity-setup.service.
Nov 23 23:00:10.323416 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 23 23:00:10.323426 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 23 23:00:10.323436 systemd[1]: Mounted media.mount - External Media Directory.
Nov 23 23:00:10.323448 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 23 23:00:10.323458 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 23 23:00:10.323469 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 23 23:00:10.323479 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 23 23:00:10.323490 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 23 23:00:10.323500 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 23 23:00:10.323510 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 23 23:00:10.323521 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 23 23:00:10.323532 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 23 23:00:10.323543 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 23 23:00:10.323553 kernel: loop: module loaded
Nov 23 23:00:10.323563 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 23 23:00:10.323573 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 23 23:00:10.323584 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 23 23:00:10.323595 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 23 23:00:10.323606 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 23 23:00:10.323617 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 23 23:00:10.323627 kernel: fuse: init (API version 7.41)
Nov 23 23:00:10.323638 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 23 23:00:10.323649 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 23 23:00:10.323660 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 23 23:00:10.323670 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 23 23:00:10.323681 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 23 23:00:10.323691 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 23 23:00:10.323702 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 23 23:00:10.323713 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 23:00:10.323725 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 23 23:00:10.323737 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 23 23:00:10.323752 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 23 23:00:10.323763 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 23 23:00:10.323773 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 23 23:00:10.323832 systemd-journald[1158]: Collecting audit messages is disabled.
Nov 23 23:00:10.323856 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 23 23:00:10.323867 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 23 23:00:10.323879 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 23 23:00:10.323891 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 23 23:00:10.323903 systemd-journald[1158]: Journal started
Nov 23 23:00:10.323975 systemd-journald[1158]: Runtime Journal (/run/log/journal/5a9165bfac794460845ebb34253aa225) is 8M, max 76.5M, 68.5M free.
Nov 23 23:00:10.338668 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 23 23:00:10.338730 kernel: ACPI: bus type drm_connector registered
Nov 23 23:00:09.993048 systemd[1]: Queued start job for default target multi-user.target.
Nov 23 23:00:10.020824 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 23 23:00:10.021694 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 23 23:00:10.331443 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 23 23:00:10.336055 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 23 23:00:10.337397 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 23 23:00:10.372865 kernel: loop0: detected capacity change from 0 to 100632
Nov 23 23:00:10.375572 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 23 23:00:10.377754 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 23 23:00:10.388508 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 23 23:00:10.401203 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 23 23:00:10.403425 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 23 23:00:10.417467 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Nov 23 23:00:10.418094 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
Nov 23 23:00:10.418762 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 23 23:00:10.424254 systemd-journald[1158]: Time spent on flushing to /var/log/journal/5a9165bfac794460845ebb34253aa225 is 36.956ms for 1182 entries.
Nov 23 23:00:10.424254 systemd-journald[1158]: System Journal (/var/log/journal/5a9165bfac794460845ebb34253aa225) is 8M, max 584.8M, 576.8M free.
Nov 23 23:00:10.472123 systemd-journald[1158]: Received client request to flush runtime journal.
Nov 23 23:00:10.472163 kernel: loop1: detected capacity change from 0 to 119840
Nov 23 23:00:10.472181 kernel: loop2: detected capacity change from 0 to 8
Nov 23 23:00:10.432136 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 23 23:00:10.435870 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 23 23:00:10.438883 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 23 23:00:10.477603 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 23 23:00:10.490572 kernel: loop3: detected capacity change from 0 to 211168
Nov 23 23:00:10.494715 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 23 23:00:10.514382 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 23 23:00:10.517656 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 23 23:00:10.527547 kernel: loop4: detected capacity change from 0 to 100632
Nov 23 23:00:10.546358 kernel: loop5: detected capacity change from 0 to 119840
Nov 23 23:00:10.551489 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Nov 23 23:00:10.551509 systemd-tmpfiles[1236]: ACLs are not supported, ignoring.
Nov 23 23:00:10.556043 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 23 23:00:10.570428 kernel: loop6: detected capacity change from 0 to 8
Nov 23 23:00:10.572370 kernel: loop7: detected capacity change from 0 to 211168
Nov 23 23:00:10.586466 (sd-merge)[1237]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Nov 23 23:00:10.587098 (sd-merge)[1237]: Merged extensions into '/usr'.
Nov 23 23:00:10.591102 systemd[1]: Reload requested from client PID 1192 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 23 23:00:10.591124 systemd[1]: Reloading...
Nov 23 23:00:10.688553 zram_generator::config[1265]: No configuration found.
Nov 23 23:00:10.896202 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 23 23:00:10.896482 systemd[1]: Reloading finished in 304 ms.
Nov 23 23:00:10.915547 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 23 23:00:10.926261 systemd[1]: Starting ensure-sysext.service...
Nov 23 23:00:10.930132 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 23 23:00:10.952029 ldconfig[1180]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 23 23:00:10.954456 systemd[1]: Reload requested from client PID 1301 ('systemctl') (unit ensure-sysext.service)...
Nov 23 23:00:10.954474 systemd[1]: Reloading...
Nov 23 23:00:10.978360 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 23 23:00:10.979006 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 23 23:00:10.979381 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 23 23:00:10.979687 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 23 23:00:10.982120 systemd-tmpfiles[1302]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 23 23:00:10.982542 systemd-tmpfiles[1302]: ACLs are not supported, ignoring.
Nov 23 23:00:10.982697 systemd-tmpfiles[1302]: ACLs are not supported, ignoring.
Nov 23 23:00:10.989376 systemd-tmpfiles[1302]: Detected autofs mount point /boot during canonicalization of boot.
Nov 23 23:00:10.989386 systemd-tmpfiles[1302]: Skipping /boot
Nov 23 23:00:10.999241 systemd-tmpfiles[1302]: Detected autofs mount point /boot during canonicalization of boot.
Nov 23 23:00:11.000509 systemd-tmpfiles[1302]: Skipping /boot
Nov 23 23:00:11.032361 zram_generator::config[1327]: No configuration found.
Nov 23 23:00:11.209176 systemd[1]: Reloading finished in 254 ms.
Nov 23 23:00:11.225373 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 23 23:00:11.226435 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 23 23:00:11.232295 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 23 23:00:11.239500 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 23 23:00:11.242613 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 23 23:00:11.245590 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 23 23:00:11.250390 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 23 23:00:11.259177 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 23 23:00:11.263107 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 23 23:00:11.268665 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 23 23:00:11.273216 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 23 23:00:11.280239 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 23 23:00:11.282754 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 23 23:00:11.289695 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 23 23:00:11.295952 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 23 23:00:11.297552 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 23:00:11.297682 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 23 23:00:11.306668 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 23 23:00:11.311895 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 23 23:00:11.312137 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 23:00:11.312221 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 23 23:00:11.321177 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 23 23:00:11.325193 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 23 23:00:11.326685 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 23:00:11.326816 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 23 23:00:11.329460 systemd[1]: Finished ensure-sysext.service.
Nov 23 23:00:11.330787 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 23 23:00:11.331580 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 23 23:00:11.336756 systemd-udevd[1376]: Using default interface naming scheme 'v255'.
Nov 23 23:00:11.344918 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 23 23:00:11.353793 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 23 23:00:11.365204 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 23 23:00:11.369168 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 23 23:00:11.371947 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 23 23:00:11.377386 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 23 23:00:11.378437 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 23 23:00:11.378601 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 23 23:00:11.382214 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 23 23:00:11.384685 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 23 23:00:11.386556 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 23 23:00:11.387622 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 23 23:00:11.388759 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 23 23:00:11.388789 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 23 23:00:11.397756 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 23 23:00:11.402749 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 23 23:00:11.404768 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 23 23:00:11.426727 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 23 23:00:11.431344 augenrules[1431]: No rules
Nov 23 23:00:11.432586 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 23 23:00:11.435419 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 23 23:00:11.535057 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Nov 23 23:00:11.629414 systemd-networkd[1413]: lo: Link UP
Nov 23 23:00:11.629427 systemd-networkd[1413]: lo: Gained carrier
Nov 23 23:00:11.631661 systemd-networkd[1413]: Enumeration completed
Nov 23 23:00:11.631782 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 23 23:00:11.634420 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 23 23:00:11.635537 systemd-networkd[1413]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:00:11.635550 systemd-networkd[1413]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 23 23:00:11.637644 systemd-networkd[1413]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:00:11.637658 systemd-networkd[1413]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 23 23:00:11.638059 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 23 23:00:11.638960 systemd-networkd[1413]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:00:11.638992 systemd-networkd[1413]: eth0: Link UP
Nov 23 23:00:11.639106 systemd-networkd[1413]: eth0: Gained carrier
Nov 23 23:00:11.639124 systemd-networkd[1413]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:00:11.645613 systemd-networkd[1413]: eth1: Link UP
Nov 23 23:00:11.646558 systemd-networkd[1413]: eth1: Gained carrier
Nov 23 23:00:11.646586 systemd-networkd[1413]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:00:11.681064 kernel: mousedev: PS/2 mouse device common for all mice
Nov 23 23:00:11.685839 systemd-networkd[1413]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Nov 23 23:00:11.690991 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 23 23:00:11.698428 systemd-networkd[1413]: eth0: DHCPv4 address 159.69.184.20/32, gateway 172.31.1.1 acquired from 172.31.1.1
Nov 23 23:00:11.722379 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 23 23:00:11.723506 systemd[1]: Reached target time-set.target - System Time Set.
Nov 23 23:00:11.766936 systemd-resolved[1375]: Positive Trust Anchors:
Nov 23 23:00:11.766955 systemd-resolved[1375]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 23 23:00:11.766986 systemd-resolved[1375]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 23 23:00:11.772672 systemd-resolved[1375]: Using system hostname 'ci-4459-2-1-9-52b78fad11'.
Nov 23 23:00:11.774281 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 23 23:00:11.775212 systemd[1]: Reached target network.target - Network.
Nov 23 23:00:11.776420 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 23 23:00:11.777146 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 23 23:00:11.777879 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 23 23:00:11.779050 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 23 23:00:11.780209 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 23 23:00:11.781299 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 23 23:00:11.782274 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 23 23:00:11.783124 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 23 23:00:11.783159 systemd[1]: Reached target paths.target - Path Units.
Nov 23 23:00:11.783985 systemd[1]: Reached target timers.target - Timer Units.
Nov 23 23:00:11.786561 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 23 23:00:11.789371 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 23 23:00:11.792938 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 23 23:00:11.795648 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 23 23:00:11.797019 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 23 23:00:11.800946 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 23 23:00:11.802775 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 23 23:00:11.805122 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 23 23:00:11.810191 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 23 23:00:11.811740 systemd[1]: Reached target sockets.target - Socket Units.
Nov 23 23:00:11.812697 systemd[1]: Reached target basic.target - Basic System.
Nov 23 23:00:11.813420 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 23 23:00:11.813455 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 23 23:00:11.815234 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 23 23:00:11.819593 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 23 23:00:11.822961 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 23 23:00:11.826772 systemd-timesyncd[1392]: Contacted time server 168.119.211.223:123 (0.flatcar.pool.ntp.org).
Nov 23 23:00:11.826824 systemd-timesyncd[1392]: Initial clock synchronization to Sun 2025-11-23 23:00:12.151653 UTC.
Nov 23 23:00:11.827742 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 23 23:00:11.837118 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 23 23:00:11.841385 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 23 23:00:11.843369 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 23 23:00:11.847293 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 23 23:00:11.852637 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 23 23:00:11.858631 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 23 23:00:11.863113 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 23 23:00:11.867566 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 23 23:00:11.879682 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 23 23:00:11.882305 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 23 23:00:11.882850 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 23 23:00:11.886857 systemd[1]: Starting update-engine.service - Update Engine... Nov 23 23:00:11.891554 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 23 23:00:11.893863 coreos-metadata[1484]: Nov 23 23:00:11.893 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Nov 23 23:00:11.899381 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 23 23:00:11.901799 coreos-metadata[1484]: Nov 23 23:00:11.899 INFO Fetch successful Nov 23 23:00:11.901799 coreos-metadata[1484]: Nov 23 23:00:11.899 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Nov 23 23:00:11.903282 coreos-metadata[1484]: Nov 23 23:00:11.901 INFO Fetch successful Nov 23 23:00:11.905209 jq[1487]: false Nov 23 23:00:11.910892 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 23 23:00:11.911391 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 23 23:00:11.921203 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 23 23:00:11.933350 extend-filesystems[1490]: Found /dev/sda6 Nov 23 23:00:11.945392 extend-filesystems[1490]: Found /dev/sda9 Nov 23 23:00:11.944435 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Nov 23 23:00:11.944893 systemd[1]: motdgen.service: Deactivated successfully. Nov 23 23:00:11.945169 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 23 23:00:11.946374 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 23 23:00:11.958558 extend-filesystems[1490]: Checking size of /dev/sda9 Nov 23 23:00:11.960981 jq[1498]: true Nov 23 23:00:11.966158 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Nov 23 23:00:11.981460 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Nov 23 23:00:11.991787 tar[1502]: linux-arm64/LICENSE Nov 23 23:00:11.992150 tar[1502]: linux-arm64/helm Nov 23 23:00:11.999762 (ntainerd)[1525]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 23 23:00:12.015733 jq[1535]: true Nov 23 23:00:12.019924 extend-filesystems[1490]: Resized partition /dev/sda9 Nov 23 23:00:12.044438 extend-filesystems[1550]: resize2fs 1.47.3 (8-Jul-2025) Nov 23 23:00:12.050242 update_engine[1497]: I20251123 23:00:12.046377 1497 main.cc:92] Flatcar Update Engine starting Nov 23 23:00:12.050147 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 23 23:00:12.049868 dbus-daemon[1485]: [system] SELinux support is enabled Nov 23 23:00:12.054060 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 23 23:00:12.054098 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Nov 23 23:00:12.054918 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 23 23:00:12.054935 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 23 23:00:12.073906 systemd[1]: Started update-engine.service - Update Engine. Nov 23 23:00:12.076831 update_engine[1497]: I20251123 23:00:12.076627 1497 update_check_scheduler.cc:74] Next update check in 6m12s Nov 23 23:00:12.091169 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Nov 23 23:00:12.094478 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 23 23:00:12.137965 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 23 23:00:12.139302 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 23 23:00:12.175665 bash[1577]: Updated "/home/core/.ssh/authorized_keys" Nov 23 23:00:12.180143 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 23 23:00:12.185618 systemd[1]: Starting sshkeys.service... Nov 23 23:00:12.225348 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Nov 23 23:00:12.246017 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Nov 23 23:00:12.246057 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 23 23:00:12.246070 kernel: [drm] features: -context_init Nov 23 23:00:12.234930 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Nov 23 23:00:12.246165 containerd[1525]: time="2025-11-23T23:00:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 23 23:00:12.239689 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 23 23:00:12.250413 extend-filesystems[1550]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 23 23:00:12.250413 extend-filesystems[1550]: old_desc_blocks = 1, new_desc_blocks = 5 Nov 23 23:00:12.250413 extend-filesystems[1550]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Nov 23 23:00:12.259924 extend-filesystems[1490]: Resized filesystem in /dev/sda9 Nov 23 23:00:12.263656 containerd[1525]: time="2025-11-23T23:00:12.252607731Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 23 23:00:12.250638 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 23 23:00:12.251398 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Nov 23 23:00:12.284095 kernel: [drm] number of scanouts: 1 Nov 23 23:00:12.284167 kernel: [drm] number of cap sets: 0 Nov 23 23:00:12.284179 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Nov 23 23:00:12.286566 kernel: Console: switching to colour frame buffer device 160x50 Nov 23 23:00:12.292387 containerd[1525]: time="2025-11-23T23:00:12.290472174Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.948µs" Nov 23 23:00:12.294349 containerd[1525]: time="2025-11-23T23:00:12.293871844Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 23 23:00:12.294349 containerd[1525]: time="2025-11-23T23:00:12.293951057Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 23 23:00:12.294349 containerd[1525]: time="2025-11-23T23:00:12.294146943Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 23 23:00:12.294349 containerd[1525]: time="2025-11-23T23:00:12.294178162Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 23 23:00:12.294349 containerd[1525]: time="2025-11-23T23:00:12.294207882Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 23:00:12.294349 containerd[1525]: time="2025-11-23T23:00:12.294281641Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 23:00:12.294349 containerd[1525]: time="2025-11-23T23:00:12.294297584Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 23:00:12.295402 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 23 23:00:12.295738 containerd[1525]: 
time="2025-11-23T23:00:12.294991470Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 23:00:12.296528 containerd[1525]: time="2025-11-23T23:00:12.295792125Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 23:00:12.296627 containerd[1525]: time="2025-11-23T23:00:12.296608597Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 23:00:12.296690 containerd[1525]: time="2025-11-23T23:00:12.296677902Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 23 23:00:12.300415 containerd[1525]: time="2025-11-23T23:00:12.298605717Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 23 23:00:12.300415 containerd[1525]: time="2025-11-23T23:00:12.300194871Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 23:00:12.303321 containerd[1525]: time="2025-11-23T23:00:12.302768412Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 23:00:12.303321 containerd[1525]: time="2025-11-23T23:00:12.302804751Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 23 23:00:12.303321 containerd[1525]: time="2025-11-23T23:00:12.302893079Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 23 23:00:12.303321 containerd[1525]: 
time="2025-11-23T23:00:12.303195567Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 23 23:00:12.303727 containerd[1525]: time="2025-11-23T23:00:12.303705846Z" level=info msg="metadata content store policy set" policy=shared Nov 23 23:00:12.314217 containerd[1525]: time="2025-11-23T23:00:12.313626886Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 23 23:00:12.314217 containerd[1525]: time="2025-11-23T23:00:12.313708096Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 23 23:00:12.314217 containerd[1525]: time="2025-11-23T23:00:12.313730948Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 23 23:00:12.314217 containerd[1525]: time="2025-11-23T23:00:12.313746849Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 23 23:00:12.314217 containerd[1525]: time="2025-11-23T23:00:12.313759378Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 23 23:00:12.314217 containerd[1525]: time="2025-11-23T23:00:12.313770741Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 23 23:00:12.314217 containerd[1525]: time="2025-11-23T23:00:12.313783437Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 23 23:00:12.314217 containerd[1525]: time="2025-11-23T23:00:12.313796590Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 23 23:00:12.314217 containerd[1525]: time="2025-11-23T23:00:12.313808662Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 23 23:00:12.314217 containerd[1525]: time="2025-11-23T23:00:12.313819567Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 23 23:00:12.314217 containerd[1525]: time="2025-11-23T23:00:12.313848746Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 23 23:00:12.314217 containerd[1525]: time="2025-11-23T23:00:12.313862732Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 23 23:00:12.314217 containerd[1525]: time="2025-11-23T23:00:12.314004132Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 23 23:00:12.314217 containerd[1525]: time="2025-11-23T23:00:12.314025943Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 23 23:00:12.314546 containerd[1525]: time="2025-11-23T23:00:12.314040387Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 23 23:00:12.314546 containerd[1525]: time="2025-11-23T23:00:12.314050918Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 23 23:00:12.314546 containerd[1525]: time="2025-11-23T23:00:12.314062324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 23 23:00:12.314546 containerd[1525]: time="2025-11-23T23:00:12.314072730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 23 23:00:12.314546 containerd[1525]: time="2025-11-23T23:00:12.314084426Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 23 23:00:12.314546 containerd[1525]: time="2025-11-23T23:00:12.314095041Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 23 23:00:12.314546 containerd[1525]: time="2025-11-23T23:00:12.314107070Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces 
type=io.containerd.grpc.v1 Nov 23 23:00:12.314546 containerd[1525]: time="2025-11-23T23:00:12.314121015Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 23 23:00:12.314546 containerd[1525]: time="2025-11-23T23:00:12.314132378Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 23 23:00:12.317531 containerd[1525]: time="2025-11-23T23:00:12.315198433Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 23 23:00:12.317531 containerd[1525]: time="2025-11-23T23:00:12.315233065Z" level=info msg="Start snapshots syncer" Nov 23 23:00:12.317531 containerd[1525]: time="2025-11-23T23:00:12.315265865Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 23 23:00:12.321919 containerd[1525]: time="2025-11-23T23:00:12.317924654Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMSco
reAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 23 23:00:12.321919 containerd[1525]: time="2025-11-23T23:00:12.318104682Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 23 23:00:12.322101 containerd[1525]: time="2025-11-23T23:00:12.318202417Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 23 23:00:12.329422 containerd[1525]: time="2025-11-23T23:00:12.328936932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 23 23:00:12.329422 containerd[1525]: time="2025-11-23T23:00:12.329016435Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 23 23:00:12.329422 containerd[1525]: time="2025-11-23T23:00:12.329037414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 23 23:00:12.329422 containerd[1525]: time="2025-11-23T23:00:12.329053773Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 23 23:00:12.329422 containerd[1525]: time="2025-11-23T23:00:12.329068341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 23 
23:00:12.329422 containerd[1525]: time="2025-11-23T23:00:12.329083992Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 23 23:00:12.329422 containerd[1525]: time="2025-11-23T23:00:12.329100642Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 23 23:00:12.329422 containerd[1525]: time="2025-11-23T23:00:12.329143974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 23 23:00:12.329422 containerd[1525]: time="2025-11-23T23:00:12.329161123Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 23 23:00:12.329422 containerd[1525]: time="2025-11-23T23:00:12.329179688Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 23 23:00:12.329422 containerd[1525]: time="2025-11-23T23:00:12.329222270Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 23:00:12.329422 containerd[1525]: time="2025-11-23T23:00:12.329239170Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 23:00:12.329422 containerd[1525]: time="2025-11-23T23:00:12.329252739Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 23:00:12.329744 containerd[1525]: time="2025-11-23T23:00:12.329267558Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 23:00:12.329744 containerd[1525]: time="2025-11-23T23:00:12.329281003Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 23 23:00:12.329744 containerd[1525]: time="2025-11-23T23:00:12.329292200Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 23 23:00:12.329744 containerd[1525]: time="2025-11-23T23:00:12.329307601Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 23 23:00:12.331183 containerd[1525]: time="2025-11-23T23:00:12.329944087Z" level=info msg="runtime interface created" Nov 23 23:00:12.335510 containerd[1525]: time="2025-11-23T23:00:12.333933998Z" level=info msg="created NRI interface" Nov 23 23:00:12.335510 containerd[1525]: time="2025-11-23T23:00:12.333984406Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 23 23:00:12.335510 containerd[1525]: time="2025-11-23T23:00:12.334010921Z" level=info msg="Connect containerd service" Nov 23 23:00:12.335510 containerd[1525]: time="2025-11-23T23:00:12.334052837Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 23 23:00:12.335510 containerd[1525]: time="2025-11-23T23:00:12.334890496Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 23:00:12.386303 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 23 23:00:12.393466 coreos-metadata[1582]: Nov 23 23:00:12.392 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Nov 23 23:00:12.399602 locksmithd[1556]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 23 23:00:12.408554 coreos-metadata[1582]: Nov 23 23:00:12.406 INFO Fetch successful Nov 23 23:00:12.408751 unknown[1582]: wrote ssh authorized keys file for user: core Nov 23 23:00:12.458631 update-ssh-keys[1599]: Updated "/home/core/.ssh/authorized_keys" Nov 23 23:00:12.461441 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 23 23:00:12.468225 systemd[1]: Finished sshkeys.service. Nov 23 23:00:12.663863 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:00:12.686973 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 23:00:12.687114 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:00:12.704715 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:00:12.707776 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:00:12.710679 systemd-logind[1496]: New seat seat0. Nov 23 23:00:12.711156 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 23 23:00:12.715029 systemd-logind[1496]: Watching system buttons on /dev/input/event0 (Power Button) Nov 23 23:00:12.715444 systemd-logind[1496]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Nov 23 23:00:12.718220 systemd[1]: Started systemd-logind.service - User Login Management. 
Nov 23 23:00:12.719825 containerd[1525]: time="2025-11-23T23:00:12.719759388Z" level=info msg="Start subscribing containerd event" Nov 23 23:00:12.722064 containerd[1525]: time="2025-11-23T23:00:12.721415850Z" level=info msg="Start recovering state" Nov 23 23:00:12.722064 containerd[1525]: time="2025-11-23T23:00:12.721580352Z" level=info msg="Start event monitor" Nov 23 23:00:12.722064 containerd[1525]: time="2025-11-23T23:00:12.721597251Z" level=info msg="Start cni network conf syncer for default" Nov 23 23:00:12.722064 containerd[1525]: time="2025-11-23T23:00:12.721609656Z" level=info msg="Start streaming server" Nov 23 23:00:12.722064 containerd[1525]: time="2025-11-23T23:00:12.721621144Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 23 23:00:12.722064 containerd[1525]: time="2025-11-23T23:00:12.721630052Z" level=info msg="runtime interface starting up..." Nov 23 23:00:12.722064 containerd[1525]: time="2025-11-23T23:00:12.721638002Z" level=info msg="starting plugins..." Nov 23 23:00:12.722064 containerd[1525]: time="2025-11-23T23:00:12.722003677Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 23 23:00:12.727933 containerd[1525]: time="2025-11-23T23:00:12.724758994Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 23 23:00:12.727933 containerd[1525]: time="2025-11-23T23:00:12.724823596Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 23 23:00:12.727933 containerd[1525]: time="2025-11-23T23:00:12.724985974Z" level=info msg="containerd successfully booted in 0.497072s" Nov 23 23:00:12.726784 systemd[1]: Started containerd.service - containerd container runtime. Nov 23 23:00:12.813182 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:00:12.973744 tar[1502]: linux-arm64/README.md Nov 23 23:00:12.991337 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 23 23:00:13.121632 systemd-networkd[1413]: eth1: Gained IPv6LL Nov 23 23:00:13.125851 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 23 23:00:13.127197 systemd[1]: Reached target network-online.target - Network is Online. Nov 23 23:00:13.132600 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:00:13.136044 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 23 23:00:13.189501 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 23 23:00:13.299434 sshd_keygen[1514]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 23 23:00:13.336538 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 23 23:00:13.340637 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 23 23:00:13.368337 systemd[1]: issuegen.service: Deactivated successfully. Nov 23 23:00:13.368685 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 23 23:00:13.371622 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 23 23:00:13.398858 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 23 23:00:13.402518 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 23 23:00:13.407265 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 23 23:00:13.408755 systemd[1]: Reached target getty.target - Login Prompts. Nov 23 23:00:13.569698 systemd-networkd[1413]: eth0: Gained IPv6LL Nov 23 23:00:13.972002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:00:13.974429 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 23 23:00:13.978718 systemd[1]: Startup finished in 2.376s (kernel) + 5.871s (initrd) + 4.546s (userspace) = 12.793s. 
Nov 23 23:00:13.993103 (kubelet)[1664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:00:14.548314 kubelet[1664]: E1123 23:00:14.548250 1664 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:00:14.551246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:00:14.551400 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:00:14.551750 systemd[1]: kubelet.service: Consumed 881ms CPU time, 260.2M memory peak. Nov 23 23:00:24.730207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 23 23:00:24.732725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:00:24.891505 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:00:24.901802 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:00:24.952390 kubelet[1683]: E1123 23:00:24.952303 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:00:24.955832 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:00:24.955967 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:00:24.956848 systemd[1]: kubelet.service: Consumed 174ms CPU time, 106.9M memory peak. 
Nov 23 23:00:34.980424 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 23 23:00:34.983295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:00:35.155157 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:00:35.162709 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:00:35.208068 kubelet[1698]: E1123 23:00:35.207991 1698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:00:35.211595 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:00:35.211760 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:00:35.212795 systemd[1]: kubelet.service: Consumed 162ms CPU time, 107.4M memory peak. Nov 23 23:00:45.230363 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 23 23:00:45.234732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:00:45.404067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 23 23:00:45.415792 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:00:45.459205 kubelet[1713]: E1123 23:00:45.459127 1713 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:00:45.461690 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:00:45.461890 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:00:45.462566 systemd[1]: kubelet.service: Consumed 163ms CPU time, 104.9M memory peak. Nov 23 23:00:46.560681 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 23 23:00:46.562598 systemd[1]: Started sshd@0-159.69.184.20:22-139.178.68.195:55754.service - OpenSSH per-connection server daemon (139.178.68.195:55754). Nov 23 23:00:47.556093 sshd[1721]: Accepted publickey for core from 139.178.68.195 port 55754 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:00:47.560009 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:47.567892 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 23 23:00:47.569273 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 23 23:00:47.577571 systemd-logind[1496]: New session 1 of user core. Nov 23 23:00:47.600384 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 23 23:00:47.602818 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Nov 23 23:00:47.615505 (systemd)[1726]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 23 23:00:47.619488 systemd-logind[1496]: New session c1 of user core.
Nov 23 23:00:47.763049 systemd[1726]: Queued start job for default target default.target.
Nov 23 23:00:47.770152 systemd[1726]: Created slice app.slice - User Application Slice.
Nov 23 23:00:47.770215 systemd[1726]: Reached target paths.target - Paths.
Nov 23 23:00:47.770288 systemd[1726]: Reached target timers.target - Timers.
Nov 23 23:00:47.772163 systemd[1726]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 23 23:00:47.805223 systemd[1726]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 23 23:00:47.805573 systemd[1726]: Reached target sockets.target - Sockets.
Nov 23 23:00:47.805624 systemd[1726]: Reached target basic.target - Basic System.
Nov 23 23:00:47.805657 systemd[1726]: Reached target default.target - Main User Target.
Nov 23 23:00:47.805688 systemd[1726]: Startup finished in 177ms.
Nov 23 23:00:47.805862 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 23 23:00:47.817693 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 23 23:00:48.498468 systemd[1]: Started sshd@1-159.69.184.20:22-139.178.68.195:55758.service - OpenSSH per-connection server daemon (139.178.68.195:55758).
Nov 23 23:00:49.474207 sshd[1737]: Accepted publickey for core from 139.178.68.195 port 55758 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc
Nov 23 23:00:49.476071 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:00:49.482419 systemd-logind[1496]: New session 2 of user core.
Nov 23 23:00:49.491628 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 23 23:00:50.137709 sshd[1740]: Connection closed by 139.178.68.195 port 55758
Nov 23 23:00:50.138430 sshd-session[1737]: pam_unix(sshd:session): session closed for user core
Nov 23 23:00:50.143470 systemd[1]: sshd@1-159.69.184.20:22-139.178.68.195:55758.service: Deactivated successfully.
Nov 23 23:00:50.146425 systemd[1]: session-2.scope: Deactivated successfully.
Nov 23 23:00:50.148580 systemd-logind[1496]: Session 2 logged out. Waiting for processes to exit.
Nov 23 23:00:50.150113 systemd-logind[1496]: Removed session 2.
Nov 23 23:00:50.304570 systemd[1]: Started sshd@2-159.69.184.20:22-139.178.68.195:55760.service - OpenSSH per-connection server daemon (139.178.68.195:55760).
Nov 23 23:00:51.278218 sshd[1746]: Accepted publickey for core from 139.178.68.195 port 55760 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc
Nov 23 23:00:51.280359 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:00:51.287870 systemd-logind[1496]: New session 3 of user core.
Nov 23 23:00:51.298696 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 23 23:00:51.936311 sshd[1749]: Connection closed by 139.178.68.195 port 55760
Nov 23 23:00:51.937197 sshd-session[1746]: pam_unix(sshd:session): session closed for user core
Nov 23 23:00:51.943378 systemd-logind[1496]: Session 3 logged out. Waiting for processes to exit.
Nov 23 23:00:51.943471 systemd[1]: sshd@2-159.69.184.20:22-139.178.68.195:55760.service: Deactivated successfully.
Nov 23 23:00:51.947667 systemd[1]: session-3.scope: Deactivated successfully.
Nov 23 23:00:51.949869 systemd-logind[1496]: Removed session 3.
Nov 23 23:00:52.102947 systemd[1]: Started sshd@3-159.69.184.20:22-139.178.68.195:56366.service - OpenSSH per-connection server daemon (139.178.68.195:56366).
Nov 23 23:00:53.079998 sshd[1755]: Accepted publickey for core from 139.178.68.195 port 56366 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc
Nov 23 23:00:53.082762 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:00:53.090203 systemd-logind[1496]: New session 4 of user core.
Nov 23 23:00:53.102743 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 23 23:00:53.744128 sshd[1758]: Connection closed by 139.178.68.195 port 56366
Nov 23 23:00:53.745171 sshd-session[1755]: pam_unix(sshd:session): session closed for user core
Nov 23 23:00:53.750597 systemd-logind[1496]: Session 4 logged out. Waiting for processes to exit.
Nov 23 23:00:53.751418 systemd[1]: sshd@3-159.69.184.20:22-139.178.68.195:56366.service: Deactivated successfully.
Nov 23 23:00:53.753670 systemd[1]: session-4.scope: Deactivated successfully.
Nov 23 23:00:53.755726 systemd-logind[1496]: Removed session 4.
Nov 23 23:00:53.925608 systemd[1]: Started sshd@4-159.69.184.20:22-139.178.68.195:56368.service - OpenSSH per-connection server daemon (139.178.68.195:56368).
Nov 23 23:00:54.925016 sshd[1764]: Accepted publickey for core from 139.178.68.195 port 56368 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc
Nov 23 23:00:54.926779 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:00:54.931729 systemd-logind[1496]: New session 5 of user core.
Nov 23 23:00:54.942661 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 23 23:00:55.449490 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 23 23:00:55.449825 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 23 23:00:55.472297 sudo[1768]: pam_unix(sudo:session): session closed for user root
Nov 23 23:00:55.480091 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Nov 23 23:00:55.483071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 23:00:55.630168 sshd[1767]: Connection closed by 139.178.68.195 port 56368
Nov 23 23:00:55.631265 sshd-session[1764]: pam_unix(sshd:session): session closed for user core
Nov 23 23:00:55.636916 systemd[1]: sshd@4-159.69.184.20:22-139.178.68.195:56368.service: Deactivated successfully.
Nov 23 23:00:55.639376 systemd[1]: session-5.scope: Deactivated successfully.
Nov 23 23:00:55.642411 systemd-logind[1496]: Session 5 logged out. Waiting for processes to exit.
Nov 23 23:00:55.644177 systemd-logind[1496]: Removed session 5.
Nov 23 23:00:55.649814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:00:55.669027 (kubelet)[1781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 23 23:00:55.715241 kubelet[1781]: E1123 23:00:55.715073 1781 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 23 23:00:55.719326 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 23:00:55.719512 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 23 23:00:55.721463 systemd[1]: kubelet.service: Consumed 168ms CPU time, 107M memory peak.
Nov 23 23:00:55.795721 systemd[1]: Started sshd@5-159.69.184.20:22-139.178.68.195:56382.service - OpenSSH per-connection server daemon (139.178.68.195:56382).
Nov 23 23:00:56.771398 sshd[1789]: Accepted publickey for core from 139.178.68.195 port 56382 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc
Nov 23 23:00:56.773831 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:00:56.779294 systemd-logind[1496]: New session 6 of user core.
Nov 23 23:00:56.789661 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 23 23:00:57.186473 update_engine[1497]: I20251123 23:00:57.185760 1497 update_attempter.cc:509] Updating boot flags...
Nov 23 23:00:57.280543 sudo[1810]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 23 23:00:57.280842 sudo[1810]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 23 23:00:57.292539 sudo[1810]: pam_unix(sudo:session): session closed for user root
Nov 23 23:00:57.300740 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 23 23:00:57.301026 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 23 23:00:57.318149 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 23 23:00:57.396862 augenrules[1836]: No rules
Nov 23 23:00:57.398685 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 23 23:00:57.399071 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 23 23:00:57.401137 sudo[1809]: pam_unix(sudo:session): session closed for user root
Nov 23 23:00:57.555737 sshd[1792]: Connection closed by 139.178.68.195 port 56382
Nov 23 23:00:57.556581 sshd-session[1789]: pam_unix(sshd:session): session closed for user core
Nov 23 23:00:57.560875 systemd[1]: sshd@5-159.69.184.20:22-139.178.68.195:56382.service: Deactivated successfully.
Nov 23 23:00:57.561176 systemd-logind[1496]: Session 6 logged out. Waiting for processes to exit.
Nov 23 23:00:57.563662 systemd[1]: session-6.scope: Deactivated successfully.
Nov 23 23:00:57.566515 systemd-logind[1496]: Removed session 6.
Nov 23 23:00:57.728596 systemd[1]: Started sshd@6-159.69.184.20:22-139.178.68.195:56386.service - OpenSSH per-connection server daemon (139.178.68.195:56386).
Nov 23 23:00:58.729619 sshd[1845]: Accepted publickey for core from 139.178.68.195 port 56386 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc
Nov 23 23:00:58.731716 sshd-session[1845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:00:58.737360 systemd-logind[1496]: New session 7 of user core.
Nov 23 23:00:58.743593 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 23 23:00:59.247586 sudo[1849]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 23 23:00:59.248071 sudo[1849]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 23 23:00:59.583405 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 23 23:00:59.595104 (dockerd)[1868]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 23 23:00:59.819693 dockerd[1868]: time="2025-11-23T23:00:59.819289517Z" level=info msg="Starting up"
Nov 23 23:00:59.823944 dockerd[1868]: time="2025-11-23T23:00:59.823895803Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 23 23:00:59.838033 dockerd[1868]: time="2025-11-23T23:00:59.837773120Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 23 23:00:59.857882 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport724343582-merged.mount: Deactivated successfully.
Nov 23 23:00:59.880790 dockerd[1868]: time="2025-11-23T23:00:59.880521150Z" level=info msg="Loading containers: start."
Nov 23 23:00:59.890357 kernel: Initializing XFRM netlink socket
Nov 23 23:01:00.156765 systemd-networkd[1413]: docker0: Link UP
Nov 23 23:01:00.164076 dockerd[1868]: time="2025-11-23T23:01:00.163923134Z" level=info msg="Loading containers: done."
Nov 23 23:01:00.183851 dockerd[1868]: time="2025-11-23T23:01:00.183776893Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 23 23:01:00.184091 dockerd[1868]: time="2025-11-23T23:01:00.183886010Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 23 23:01:00.184091 dockerd[1868]: time="2025-11-23T23:01:00.184006771Z" level=info msg="Initializing buildkit"
Nov 23 23:01:00.214735 dockerd[1868]: time="2025-11-23T23:01:00.214657605Z" level=info msg="Completed buildkit initialization"
Nov 23 23:01:00.223142 dockerd[1868]: time="2025-11-23T23:01:00.223064907Z" level=info msg="Daemon has completed initialization"
Nov 23 23:01:00.223850 dockerd[1868]: time="2025-11-23T23:01:00.223651587Z" level=info msg="API listen on /run/docker.sock"
Nov 23 23:01:00.224194 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 23 23:01:01.423139 containerd[1525]: time="2025-11-23T23:01:01.423083291Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\""
Nov 23 23:01:02.070546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3306086070.mount: Deactivated successfully.
Nov 23 23:01:03.241540 containerd[1525]: time="2025-11-23T23:01:03.241324835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:03.243197 containerd[1525]: time="2025-11-23T23:01:03.243135531Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.6: active requests=0, bytes read=27385802"
Nov 23 23:01:03.244135 containerd[1525]: time="2025-11-23T23:01:03.244031836Z" level=info msg="ImageCreate event name:\"sha256:1c07507521b1e5dd5a677080f11565aeed667ca44a4119fe6fc7e9452e84707f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:03.247903 containerd[1525]: time="2025-11-23T23:01:03.247825279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:03.249947 containerd[1525]: time="2025-11-23T23:01:03.249674067Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.6\" with image id \"sha256:1c07507521b1e5dd5a677080f11565aeed667ca44a4119fe6fc7e9452e84707f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\", size \"27382303\" in 1.826529476s"
Nov 23 23:01:03.249947 containerd[1525]: time="2025-11-23T23:01:03.249723521Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\" returns image reference \"sha256:1c07507521b1e5dd5a677080f11565aeed667ca44a4119fe6fc7e9452e84707f\""
Nov 23 23:01:03.251688 containerd[1525]: time="2025-11-23T23:01:03.251655934Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\""
Nov 23 23:01:04.710354 containerd[1525]: time="2025-11-23T23:01:04.709957635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:04.711799 containerd[1525]: time="2025-11-23T23:01:04.711765827Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.6: active requests=0, bytes read=23551844"
Nov 23 23:01:04.713356 containerd[1525]: time="2025-11-23T23:01:04.712839771Z" level=info msg="ImageCreate event name:\"sha256:0e8db523b16722887ebe961048a14cebe9778389b0045fc9e461ca509bed1758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:04.716236 containerd[1525]: time="2025-11-23T23:01:04.716198082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:04.717708 containerd[1525]: time="2025-11-23T23:01:04.717647052Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.6\" with image id \"sha256:0e8db523b16722887ebe961048a14cebe9778389b0045fc9e461ca509bed1758\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\", size \"25136308\" in 1.465951227s"
Nov 23 23:01:04.717776 containerd[1525]: time="2025-11-23T23:01:04.717706749Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\" returns image reference \"sha256:0e8db523b16722887ebe961048a14cebe9778389b0045fc9e461ca509bed1758\""
Nov 23 23:01:04.718229 containerd[1525]: time="2025-11-23T23:01:04.718167800Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\""
Nov 23 23:01:05.730593 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Nov 23 23:01:05.733721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 23:01:05.906510 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:01:05.919423 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 23 23:01:05.968757 kubelet[2150]: E1123 23:01:05.968694 2150 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 23 23:01:05.972211 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 23:01:05.972504 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 23 23:01:05.972838 systemd[1]: kubelet.service: Consumed 168ms CPU time, 107.4M memory peak.
Nov 23 23:01:06.164209 containerd[1525]: time="2025-11-23T23:01:06.163633745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:06.165637 containerd[1525]: time="2025-11-23T23:01:06.165569447Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.6: active requests=0, bytes read=18296716"
Nov 23 23:01:06.166722 containerd[1525]: time="2025-11-23T23:01:06.166673254Z" level=info msg="ImageCreate event name:\"sha256:4845d8bf054bc037c94329f9ce2fa5bb3a972aefc81d9412e9bd8c5ecc311e80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:06.170111 containerd[1525]: time="2025-11-23T23:01:06.170049050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:06.171962 containerd[1525]: time="2025-11-23T23:01:06.171609415Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.6\" with image id \"sha256:4845d8bf054bc037c94329f9ce2fa5bb3a972aefc81d9412e9bd8c5ecc311e80\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\", size \"19881198\" in 1.453397083s"
Nov 23 23:01:06.171962 containerd[1525]: time="2025-11-23T23:01:06.171660028Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\" returns image reference \"sha256:4845d8bf054bc037c94329f9ce2fa5bb3a972aefc81d9412e9bd8c5ecc311e80\""
Nov 23 23:01:06.172213 containerd[1525]: time="2025-11-23T23:01:06.172151116Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\""
Nov 23 23:01:07.147165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2986587515.mount: Deactivated successfully.
Nov 23 23:01:07.447817 containerd[1525]: time="2025-11-23T23:01:07.447653912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:07.449175 containerd[1525]: time="2025-11-23T23:01:07.449119036Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.6: active requests=0, bytes read=28257795"
Nov 23 23:01:07.450158 containerd[1525]: time="2025-11-23T23:01:07.450104322Z" level=info msg="ImageCreate event name:\"sha256:3edf3fc935ecf2058786113d0a0f95daa919e82f6505e8e3df7b5226ebfedb6b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:07.452894 containerd[1525]: time="2025-11-23T23:01:07.452836762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:07.453922 containerd[1525]: time="2025-11-23T23:01:07.453214015Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.6\" with image id \"sha256:3edf3fc935ecf2058786113d0a0f95daa919e82f6505e8e3df7b5226ebfedb6b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\", size \"28256788\" in 1.28102469s"
Nov 23 23:01:07.453922 containerd[1525]: time="2025-11-23T23:01:07.453243823Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\" returns image reference \"sha256:3edf3fc935ecf2058786113d0a0f95daa919e82f6505e8e3df7b5226ebfedb6b\""
Nov 23 23:01:07.453922 containerd[1525]: time="2025-11-23T23:01:07.453892144Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Nov 23 23:01:08.049452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2053730118.mount: Deactivated successfully.
Nov 23 23:01:08.988639 containerd[1525]: time="2025-11-23T23:01:08.987599487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:08.988639 containerd[1525]: time="2025-11-23T23:01:08.988601446Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209"
Nov 23 23:01:08.989553 containerd[1525]: time="2025-11-23T23:01:08.989517385Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:08.995213 containerd[1525]: time="2025-11-23T23:01:08.995136607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:08.997748 containerd[1525]: time="2025-11-23T23:01:08.997670612Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.54374478s"
Nov 23 23:01:08.997978 containerd[1525]: time="2025-11-23T23:01:08.997947198Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Nov 23 23:01:08.999879 containerd[1525]: time="2025-11-23T23:01:08.998769235Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 23 23:01:09.458620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount35260430.mount: Deactivated successfully.
Nov 23 23:01:09.462798 containerd[1525]: time="2025-11-23T23:01:09.462746784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 23 23:01:09.463648 containerd[1525]: time="2025-11-23T23:01:09.463591178Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Nov 23 23:01:09.464702 containerd[1525]: time="2025-11-23T23:01:09.464632657Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 23 23:01:09.467173 containerd[1525]: time="2025-11-23T23:01:09.466948388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 23 23:01:09.467678 containerd[1525]: time="2025-11-23T23:01:09.467642947Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 468.73888ms"
Nov 23 23:01:09.467678 containerd[1525]: time="2025-11-23T23:01:09.467676355Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Nov 23 23:01:09.468624 containerd[1525]: time="2025-11-23T23:01:09.468601327Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Nov 23 23:01:10.059603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount520467550.mount: Deactivated successfully.
Nov 23 23:01:11.774401 containerd[1525]: time="2025-11-23T23:01:11.773520668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:11.777012 containerd[1525]: time="2025-11-23T23:01:11.776943755Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013713"
Nov 23 23:01:11.778533 containerd[1525]: time="2025-11-23T23:01:11.778483282Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:11.782235 containerd[1525]: time="2025-11-23T23:01:11.782160463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:11.784592 containerd[1525]: time="2025-11-23T23:01:11.784530646Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.315795528s"
Nov 23 23:01:11.784592 containerd[1525]: time="2025-11-23T23:01:11.784580136Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Nov 23 23:01:15.979974 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Nov 23 23:01:15.982021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 23:01:16.143624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:01:16.152447 (kubelet)[2304]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 23 23:01:16.202002 kubelet[2304]: E1123 23:01:16.201949 2304 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 23 23:01:16.208649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 23:01:16.208800 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 23 23:01:16.211442 systemd[1]: kubelet.service: Consumed 165ms CPU time, 104.9M memory peak.
Nov 23 23:01:16.327696 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:01:16.328599 systemd[1]: kubelet.service: Consumed 165ms CPU time, 104.9M memory peak.
Nov 23 23:01:16.331697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 23:01:16.360017 systemd[1]: Reload requested from client PID 2319 ('systemctl') (unit session-7.scope)...
Nov 23 23:01:16.360035 systemd[1]: Reloading...
Nov 23 23:01:16.503363 zram_generator::config[2362]: No configuration found.
Nov 23 23:01:16.685449 systemd[1]: Reloading finished in 324 ms.
Nov 23 23:01:16.757019 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 23 23:01:16.757124 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 23 23:01:16.757547 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:01:16.757605 systemd[1]: kubelet.service: Consumed 113ms CPU time, 94.9M memory peak.
Nov 23 23:01:16.761000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 23:01:16.916495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:01:16.926909 (kubelet)[2410]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 23 23:01:16.970422 kubelet[2410]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 23 23:01:16.970422 kubelet[2410]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 23 23:01:16.970422 kubelet[2410]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 23 23:01:16.970422 kubelet[2410]: I1123 23:01:16.969616 2410 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 23 23:01:17.446826 kubelet[2410]: I1123 23:01:17.446789 2410 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 23 23:01:17.446986 kubelet[2410]: I1123 23:01:17.446975 2410 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 23 23:01:17.447265 kubelet[2410]: I1123 23:01:17.447250 2410 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 23 23:01:17.470859 kubelet[2410]: E1123 23:01:17.470802 2410 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://159.69.184.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 159.69.184.20:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 23 23:01:17.472610 kubelet[2410]: I1123 23:01:17.472381 2410 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 23 23:01:17.489121 kubelet[2410]: I1123 23:01:17.489077 2410 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 23 23:01:17.492054 kubelet[2410]: I1123 23:01:17.492017 2410 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 23 23:01:17.494411 kubelet[2410]: I1123 23:01:17.493686 2410 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 23 23:01:17.494411 kubelet[2410]: I1123 23:01:17.493735 2410 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-1-9-52b78fad11","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 23 23:01:17.494411 kubelet[2410]: I1123 23:01:17.493961 2410 topology_manager.go:138] "Creating topology manager with none policy"
Nov 23 23:01:17.494411 kubelet[2410]: I1123 23:01:17.493971 2410 container_manager_linux.go:303] "Creating device plugin manager"
Nov 23 23:01:17.494411 kubelet[2410]: I1123 23:01:17.494178 2410 state_mem.go:36] "Initialized new in-memory state store"
Nov 23 23:01:17.497774 kubelet[2410]: I1123 23:01:17.497745 2410 kubelet.go:480] "Attempting to sync node with API server"
Nov 23 23:01:17.498085 kubelet[2410]: I1123 23:01:17.498070 2410 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 23 23:01:17.498179 kubelet[2410]: I1123 23:01:17.498170 2410 kubelet.go:386] "Adding apiserver pod source"
Nov 23 23:01:17.499902 kubelet[2410]: I1123 23:01:17.499878 2410 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 23 23:01:17.504286 kubelet[2410]: E1123 23:01:17.504253 2410 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://159.69.184.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-1-9-52b78fad11&limit=500&resourceVersion=0\": dial tcp 159.69.184.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 23 23:01:17.505026 kubelet[2410]: E1123 23:01:17.505004 2410 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://159.69.184.20:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 159.69.184.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 23 23:01:17.505346 kubelet[2410]: I1123 23:01:17.505224 2410 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Nov 23 23:01:17.506111 kubelet[2410]: I1123 23:01:17.506096 2410 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 23 23:01:17.506321 kubelet[2410]: W1123 23:01:17.506311 2410 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 23 23:01:17.510667 kubelet[2410]: I1123 23:01:17.510648 2410 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 23 23:01:17.510783 kubelet[2410]: I1123 23:01:17.510774 2410 server.go:1289] "Started kubelet"
Nov 23 23:01:17.512708 kubelet[2410]: I1123 23:01:17.512658 2410 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 23 23:01:17.513681 kubelet[2410]: I1123 23:01:17.513653 2410 server.go:317] "Adding debug handlers to kubelet server"
Nov 23 23:01:17.514200 kubelet[2410]: I1123 23:01:17.514146 2410 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 23 23:01:17.514700 kubelet[2410]: I1123 23:01:17.514678 2410 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 23 23:01:17.519022 kubelet[2410]: E1123 23:01:17.514925 2410 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://159.69.184.20:6443/api/v1/namespaces/default/events\": dial tcp 159.69.184.20:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-1-9-52b78fad11.187ac51037877e32 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-1-9-52b78fad11,UID:ci-4459-2-1-9-52b78fad11,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-1-9-52b78fad11,},FirstTimestamp:2025-11-23 23:01:17.51074565 +0000 UTC m=+0.578694309,LastTimestamp:2025-11-23 23:01:17.51074565 +0000 UTC m=+0.578694309,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-1-9-52b78fad11,}"
Nov 23 23:01:17.519022 kubelet[2410]:
I1123 23:01:17.518524 2410 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 23:01:17.521578 kubelet[2410]: I1123 23:01:17.520314 2410 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 23:01:17.525115 kubelet[2410]: I1123 23:01:17.525062 2410 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 23 23:01:17.525306 kubelet[2410]: E1123 23:01:17.525289 2410 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-1-9-52b78fad11\" not found" Nov 23 23:01:17.525598 kubelet[2410]: I1123 23:01:17.525582 2410 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 23 23:01:17.525907 kubelet[2410]: I1123 23:01:17.525887 2410 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 23 23:01:17.526146 kubelet[2410]: I1123 23:01:17.526133 2410 reconciler.go:26] "Reconciler: start to sync state" Nov 23 23:01:17.526981 kubelet[2410]: E1123 23:01:17.526957 2410 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://159.69.184.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.69.184.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 23 23:01:17.527282 kubelet[2410]: I1123 23:01:17.527261 2410 factory.go:223] Registration of the systemd container factory successfully Nov 23 23:01:17.527514 kubelet[2410]: I1123 23:01:17.527492 2410 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 23:01:17.528106 kubelet[2410]: E1123 23:01:17.528089 2410 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 23:01:17.529447 kubelet[2410]: E1123 23:01:17.529361 2410 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.184.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-1-9-52b78fad11?timeout=10s\": dial tcp 159.69.184.20:6443: connect: connection refused" interval="200ms" Nov 23 23:01:17.529693 kubelet[2410]: I1123 23:01:17.529676 2410 factory.go:223] Registration of the containerd container factory successfully Nov 23 23:01:17.560956 kubelet[2410]: I1123 23:01:17.560927 2410 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 23 23:01:17.561102 kubelet[2410]: I1123 23:01:17.561092 2410 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 23 23:01:17.561170 kubelet[2410]: I1123 23:01:17.561160 2410 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 23 23:01:17.561217 kubelet[2410]: I1123 23:01:17.561210 2410 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 23 23:01:17.561360 kubelet[2410]: E1123 23:01:17.561324 2410 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 23 23:01:17.564176 kubelet[2410]: I1123 23:01:17.564146 2410 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 23 23:01:17.564176 kubelet[2410]: I1123 23:01:17.564164 2410 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 23 23:01:17.564306 kubelet[2410]: I1123 23:01:17.564183 2410 state_mem.go:36] "Initialized new in-memory state store"
Nov 23 23:01:17.564479 kubelet[2410]: E1123 23:01:17.564141 2410 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://159.69.184.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 159.69.184.20:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 23 23:01:17.567019 kubelet[2410]: I1123 23:01:17.566988 2410 policy_none.go:49] "None policy: Start"
Nov 23 23:01:17.567019 kubelet[2410]: I1123 23:01:17.567017 2410 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 23 23:01:17.567019 kubelet[2410]: I1123 23:01:17.567030 2410 state_mem.go:35] "Initializing new in-memory state store"
Nov 23 23:01:17.573135 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 23 23:01:17.586858 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 23 23:01:17.591626 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 23 23:01:17.604421 kubelet[2410]: E1123 23:01:17.604366 2410 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 23 23:01:17.604734 kubelet[2410]: I1123 23:01:17.604707 2410 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 23 23:01:17.604791 kubelet[2410]: I1123 23:01:17.604730 2410 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 23 23:01:17.605074 kubelet[2410]: I1123 23:01:17.605040 2410 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 23 23:01:17.606977 kubelet[2410]: E1123 23:01:17.606930 2410 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 23 23:01:17.607679 kubelet[2410]: E1123 23:01:17.606993 2410 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-1-9-52b78fad11\" not found"
Nov 23 23:01:17.678891 systemd[1]: Created slice kubepods-burstable-pod8f99f1437857663de2448efdcbe4fe48.slice - libcontainer container kubepods-burstable-pod8f99f1437857663de2448efdcbe4fe48.slice.
Nov 23 23:01:17.694285 kubelet[2410]: E1123 23:01:17.694116 2410 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-1-9-52b78fad11\" not found" node="ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:17.697392 systemd[1]: Created slice kubepods-burstable-pod48fb9e5ad90b2526df3a8a5b20431288.slice - libcontainer container kubepods-burstable-pod48fb9e5ad90b2526df3a8a5b20431288.slice.
Nov 23 23:01:17.709807 kubelet[2410]: E1123 23:01:17.708631 2410 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-1-9-52b78fad11\" not found" node="ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:17.710956 kubelet[2410]: I1123 23:01:17.710395 2410 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:17.711122 kubelet[2410]: E1123 23:01:17.710979 2410 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.69.184.20:6443/api/v1/nodes\": dial tcp 159.69.184.20:6443: connect: connection refused" node="ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:17.717133 systemd[1]: Created slice kubepods-burstable-pod81a5cab10aedfeb917050ccb184f5ae7.slice - libcontainer container kubepods-burstable-pod81a5cab10aedfeb917050ccb184f5ae7.slice.
Nov 23 23:01:17.720030 kubelet[2410]: E1123 23:01:17.719983 2410 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-1-9-52b78fad11\" not found" node="ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:17.731166 kubelet[2410]: E1123 23:01:17.731106 2410 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.184.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-1-9-52b78fad11?timeout=10s\": dial tcp 159.69.184.20:6443: connect: connection refused" interval="400ms"
Nov 23 23:01:17.827851 kubelet[2410]: I1123 23:01:17.827772 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f99f1437857663de2448efdcbe4fe48-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-1-9-52b78fad11\" (UID: \"8f99f1437857663de2448efdcbe4fe48\") " pod="kube-system/kube-apiserver-ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:17.827851 kubelet[2410]: I1123 23:01:17.827833 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/48fb9e5ad90b2526df3a8a5b20431288-ca-certs\") pod \"kube-controller-manager-ci-4459-2-1-9-52b78fad11\" (UID: \"48fb9e5ad90b2526df3a8a5b20431288\") " pod="kube-system/kube-controller-manager-ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:17.828106 kubelet[2410]: I1123 23:01:17.827891 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/48fb9e5ad90b2526df3a8a5b20431288-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-1-9-52b78fad11\" (UID: \"48fb9e5ad90b2526df3a8a5b20431288\") " pod="kube-system/kube-controller-manager-ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:17.828106 kubelet[2410]: I1123 23:01:17.827911 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/48fb9e5ad90b2526df3a8a5b20431288-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-1-9-52b78fad11\" (UID: \"48fb9e5ad90b2526df3a8a5b20431288\") " pod="kube-system/kube-controller-manager-ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:17.828106 kubelet[2410]: I1123 23:01:17.827951 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/48fb9e5ad90b2526df3a8a5b20431288-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-1-9-52b78fad11\" (UID: \"48fb9e5ad90b2526df3a8a5b20431288\") " pod="kube-system/kube-controller-manager-ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:17.828106 kubelet[2410]: I1123 23:01:17.827968 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f99f1437857663de2448efdcbe4fe48-ca-certs\") pod \"kube-apiserver-ci-4459-2-1-9-52b78fad11\" (UID: \"8f99f1437857663de2448efdcbe4fe48\") " pod="kube-system/kube-apiserver-ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:17.828106 kubelet[2410]: I1123 23:01:17.827985 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f99f1437857663de2448efdcbe4fe48-k8s-certs\") pod \"kube-apiserver-ci-4459-2-1-9-52b78fad11\" (UID: \"8f99f1437857663de2448efdcbe4fe48\") " pod="kube-system/kube-apiserver-ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:17.828492 kubelet[2410]: I1123 23:01:17.828015 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/48fb9e5ad90b2526df3a8a5b20431288-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-1-9-52b78fad11\" (UID: \"48fb9e5ad90b2526df3a8a5b20431288\") " pod="kube-system/kube-controller-manager-ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:17.828492 kubelet[2410]: I1123 23:01:17.828034 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/81a5cab10aedfeb917050ccb184f5ae7-kubeconfig\") pod \"kube-scheduler-ci-4459-2-1-9-52b78fad11\" (UID: \"81a5cab10aedfeb917050ccb184f5ae7\") " pod="kube-system/kube-scheduler-ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:17.913927 kubelet[2410]: I1123 23:01:17.913860 2410 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:17.914390 kubelet[2410]: E1123 23:01:17.914352 2410 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.69.184.20:6443/api/v1/nodes\": dial tcp 159.69.184.20:6443: connect: connection refused" node="ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:17.997937 containerd[1525]: time="2025-11-23T23:01:17.997879647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-1-9-52b78fad11,Uid:8f99f1437857663de2448efdcbe4fe48,Namespace:kube-system,Attempt:0,}"
Nov 23 23:01:18.011067 containerd[1525]: time="2025-11-23T23:01:18.011022144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-1-9-52b78fad11,Uid:48fb9e5ad90b2526df3a8a5b20431288,Namespace:kube-system,Attempt:0,}"
Nov 23 23:01:18.028361 containerd[1525]: time="2025-11-23T23:01:18.027750623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-1-9-52b78fad11,Uid:81a5cab10aedfeb917050ccb184f5ae7,Namespace:kube-system,Attempt:0,}"
Nov 23 23:01:18.034080 containerd[1525]: time="2025-11-23T23:01:18.033981546Z" level=info msg="connecting to shim f1c5e7d74e45fb3ec02cdd06eb746ae1146bb4cbbe089da3dcc8f43a5452605d" address="unix:///run/containerd/s/b57a2c5c5e9198f0114b6a514566bb232383264492d3e0d18643952d1bf9a0d6" namespace=k8s.io protocol=ttrpc version=3
Nov 23 23:01:18.043584 containerd[1525]: time="2025-11-23T23:01:18.043533784Z" level=info msg="connecting to shim 474eb53c4f8916412d299b039d35ac68b1bea8116c04dd1234e33890c36f8637" address="unix:///run/containerd/s/d9447609d5c08af04bc09b7aa50fac0b97bc4b468e48dfedca7f89053b5b7733" namespace=k8s.io protocol=ttrpc version=3
Nov 23 23:01:18.079086 containerd[1525]: time="2025-11-23T23:01:18.078508917Z" level=info msg="connecting to shim 0a7d698b9b529e8104a05aa36ebf6ed2635eb85bde3a9fcca7716422bd4739c7" address="unix:///run/containerd/s/348682b224f54553afac2def40072076fced42ad68379ed95e3b0c4e4843f89a" namespace=k8s.io protocol=ttrpc version=3
Nov 23 23:01:18.080608 systemd[1]: Started cri-containerd-474eb53c4f8916412d299b039d35ac68b1bea8116c04dd1234e33890c36f8637.scope - libcontainer container 474eb53c4f8916412d299b039d35ac68b1bea8116c04dd1234e33890c36f8637.
Nov 23 23:01:18.091506 systemd[1]: Started cri-containerd-f1c5e7d74e45fb3ec02cdd06eb746ae1146bb4cbbe089da3dcc8f43a5452605d.scope - libcontainer container f1c5e7d74e45fb3ec02cdd06eb746ae1146bb4cbbe089da3dcc8f43a5452605d.
Nov 23 23:01:18.119593 systemd[1]: Started cri-containerd-0a7d698b9b529e8104a05aa36ebf6ed2635eb85bde3a9fcca7716422bd4739c7.scope - libcontainer container 0a7d698b9b529e8104a05aa36ebf6ed2635eb85bde3a9fcca7716422bd4739c7.
Nov 23 23:01:18.132559 kubelet[2410]: E1123 23:01:18.132471 2410 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.184.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-1-9-52b78fad11?timeout=10s\": dial tcp 159.69.184.20:6443: connect: connection refused" interval="800ms"
Nov 23 23:01:18.174632 containerd[1525]: time="2025-11-23T23:01:18.174199089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-1-9-52b78fad11,Uid:8f99f1437857663de2448efdcbe4fe48,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1c5e7d74e45fb3ec02cdd06eb746ae1146bb4cbbe089da3dcc8f43a5452605d\""
Nov 23 23:01:18.181975 containerd[1525]: time="2025-11-23T23:01:18.181931383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-1-9-52b78fad11,Uid:48fb9e5ad90b2526df3a8a5b20431288,Namespace:kube-system,Attempt:0,} returns sandbox id \"474eb53c4f8916412d299b039d35ac68b1bea8116c04dd1234e33890c36f8637\""
Nov 23 23:01:18.186888 containerd[1525]: time="2025-11-23T23:01:18.186785275Z" level=info msg="CreateContainer within sandbox \"f1c5e7d74e45fb3ec02cdd06eb746ae1146bb4cbbe089da3dcc8f43a5452605d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 23 23:01:18.188361 containerd[1525]: time="2025-11-23T23:01:18.188248520Z" level=info msg="CreateContainer within sandbox \"474eb53c4f8916412d299b039d35ac68b1bea8116c04dd1234e33890c36f8637\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 23 23:01:18.201408 containerd[1525]: time="2025-11-23T23:01:18.201361435Z" level=info msg="Container dae18f5d8d6fe6160dbd3da7036b0f294cad3611872fea540783297f0a68ff3d: CDI devices from CRI Config.CDIDevices: []"
Nov 23 23:01:18.207468 containerd[1525]: time="2025-11-23T23:01:18.207420689Z" level=info msg="Container f6d238a317a71d1d1c778b3077c09cdc3a981b9a440eb13d3b9017b75e779095: CDI devices from CRI Config.CDIDevices: []"
Nov 23 23:01:18.217077 containerd[1525]: time="2025-11-23T23:01:18.216916197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-1-9-52b78fad11,Uid:81a5cab10aedfeb917050ccb184f5ae7,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a7d698b9b529e8104a05aa36ebf6ed2635eb85bde3a9fcca7716422bd4739c7\""
Nov 23 23:01:18.220454 containerd[1525]: time="2025-11-23T23:01:18.220297643Z" level=info msg="CreateContainer within sandbox \"474eb53c4f8916412d299b039d35ac68b1bea8116c04dd1234e33890c36f8637\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dae18f5d8d6fe6160dbd3da7036b0f294cad3611872fea540783297f0a68ff3d\""
Nov 23 23:01:18.221424 containerd[1525]: time="2025-11-23T23:01:18.221285609Z" level=info msg="StartContainer for \"dae18f5d8d6fe6160dbd3da7036b0f294cad3611872fea540783297f0a68ff3d\""
Nov 23 23:01:18.222881 containerd[1525]: time="2025-11-23T23:01:18.222847030Z" level=info msg="connecting to shim dae18f5d8d6fe6160dbd3da7036b0f294cad3611872fea540783297f0a68ff3d" address="unix:///run/containerd/s/d9447609d5c08af04bc09b7aa50fac0b97bc4b468e48dfedca7f89053b5b7733" protocol=ttrpc version=3
Nov 23 23:01:18.223953 containerd[1525]: time="2025-11-23T23:01:18.223915689Z" level=info msg="CreateContainer within sandbox \"0a7d698b9b529e8104a05aa36ebf6ed2635eb85bde3a9fcca7716422bd4739c7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 23 23:01:18.226653 containerd[1525]: time="2025-11-23T23:01:18.226470316Z" level=info msg="CreateContainer within sandbox \"f1c5e7d74e45fb3ec02cdd06eb746ae1146bb4cbbe089da3dcc8f43a5452605d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f6d238a317a71d1d1c778b3077c09cdc3a981b9a440eb13d3b9017b75e779095\""
Nov 23 23:01:18.227710 containerd[1525]: time="2025-11-23T23:01:18.227681679Z" level=info msg="StartContainer for \"f6d238a317a71d1d1c778b3077c09cdc3a981b9a440eb13d3b9017b75e779095\""
Nov 23 23:01:18.228799 containerd[1525]: time="2025-11-23T23:01:18.228770061Z" level=info msg="connecting to shim f6d238a317a71d1d1c778b3077c09cdc3a981b9a440eb13d3b9017b75e779095" address="unix:///run/containerd/s/b57a2c5c5e9198f0114b6a514566bb232383264492d3e0d18643952d1bf9a0d6" protocol=ttrpc version=3
Nov 23 23:01:18.239629 containerd[1525]: time="2025-11-23T23:01:18.239554666Z" level=info msg="Container cfc512753b06ec8d1044a390a7c05285b55eef7d3e1a9af3eae1d9cf91a4f955: CDI devices from CRI Config.CDIDevices: []"
Nov 23 23:01:18.249679 systemd[1]: Started cri-containerd-dae18f5d8d6fe6160dbd3da7036b0f294cad3611872fea540783297f0a68ff3d.scope - libcontainer container dae18f5d8d6fe6160dbd3da7036b0f294cad3611872fea540783297f0a68ff3d.
Nov 23 23:01:18.259399 containerd[1525]: time="2025-11-23T23:01:18.259230438Z" level=info msg="CreateContainer within sandbox \"0a7d698b9b529e8104a05aa36ebf6ed2635eb85bde3a9fcca7716422bd4739c7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cfc512753b06ec8d1044a390a7c05285b55eef7d3e1a9af3eae1d9cf91a4f955\""
Nov 23 23:01:18.261289 containerd[1525]: time="2025-11-23T23:01:18.261254297Z" level=info msg="StartContainer for \"cfc512753b06ec8d1044a390a7c05285b55eef7d3e1a9af3eae1d9cf91a4f955\""
Nov 23 23:01:18.262396 containerd[1525]: time="2025-11-23T23:01:18.262364043Z" level=info msg="connecting to shim cfc512753b06ec8d1044a390a7c05285b55eef7d3e1a9af3eae1d9cf91a4f955" address="unix:///run/containerd/s/348682b224f54553afac2def40072076fced42ad68379ed95e3b0c4e4843f89a" protocol=ttrpc version=3
Nov 23 23:01:18.265583 systemd[1]: Started cri-containerd-f6d238a317a71d1d1c778b3077c09cdc3a981b9a440eb13d3b9017b75e779095.scope - libcontainer container f6d238a317a71d1d1c778b3077c09cdc3a981b9a440eb13d3b9017b75e779095.
Nov 23 23:01:18.291001 systemd[1]: Started cri-containerd-cfc512753b06ec8d1044a390a7c05285b55eef7d3e1a9af3eae1d9cf91a4f955.scope - libcontainer container cfc512753b06ec8d1044a390a7c05285b55eef7d3e1a9af3eae1d9cf91a4f955.
Nov 23 23:01:18.319256 kubelet[2410]: I1123 23:01:18.319227 2410 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:18.321820 kubelet[2410]: E1123 23:01:18.321780 2410 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.69.184.20:6443/api/v1/nodes\": dial tcp 159.69.184.20:6443: connect: connection refused" node="ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:18.340771 containerd[1525]: time="2025-11-23T23:01:18.340634100Z" level=info msg="StartContainer for \"dae18f5d8d6fe6160dbd3da7036b0f294cad3611872fea540783297f0a68ff3d\" returns successfully"
Nov 23 23:01:18.358029 containerd[1525]: time="2025-11-23T23:01:18.357736922Z" level=info msg="StartContainer for \"f6d238a317a71d1d1c778b3077c09cdc3a981b9a440eb13d3b9017b75e779095\" returns successfully"
Nov 23 23:01:18.388699 containerd[1525]: time="2025-11-23T23:01:18.388625891Z" level=info msg="StartContainer for \"cfc512753b06ec8d1044a390a7c05285b55eef7d3e1a9af3eae1d9cf91a4f955\" returns successfully"
Nov 23 23:01:18.572639 kubelet[2410]: E1123 23:01:18.571962 2410 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-1-9-52b78fad11\" not found" node="ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:18.578687 kubelet[2410]: E1123 23:01:18.578123 2410 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-1-9-52b78fad11\" not found" node="ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:18.584240 kubelet[2410]: E1123 23:01:18.583958 2410 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-1-9-52b78fad11\" not found" node="ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:19.125058 kubelet[2410]: I1123 23:01:19.123847 2410 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:19.583557 kubelet[2410]: E1123 23:01:19.583057 2410 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-1-9-52b78fad11\" not found" node="ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:19.583557 kubelet[2410]: E1123 23:01:19.583399 2410 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-1-9-52b78fad11\" not found" node="ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:20.293635 kubelet[2410]: E1123 23:01:20.293587 2410 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-1-9-52b78fad11\" not found" node="ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:20.396604 kubelet[2410]: I1123 23:01:20.396558 2410 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:20.430414 kubelet[2410]: I1123 23:01:20.430283 2410 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:20.445359 kubelet[2410]: E1123 23:01:20.444455 2410 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-1-9-52b78fad11\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:20.445639 kubelet[2410]: I1123 23:01:20.445495 2410 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:20.448669 kubelet[2410]: E1123 23:01:20.448640 2410 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-1-9-52b78fad11\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:20.448905 kubelet[2410]: I1123 23:01:20.448710 2410 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:20.453612 kubelet[2410]: E1123 23:01:20.453571 2410 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-1-9-52b78fad11\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:20.506350 kubelet[2410]: I1123 23:01:20.506146 2410 apiserver.go:52] "Watching apiserver"
Nov 23 23:01:20.526609 kubelet[2410]: I1123 23:01:20.526575 2410 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 23 23:01:20.583944 kubelet[2410]: I1123 23:01:20.583611 2410 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:20.588569 kubelet[2410]: E1123 23:01:20.588532 2410 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-1-9-52b78fad11\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:21.812900 kubelet[2410]: I1123 23:01:21.812850 2410 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-1-9-52b78fad11"
Nov 23 23:01:22.888729 systemd[1]: Reload requested from client PID 2693 ('systemctl') (unit session-7.scope)...
Nov 23 23:01:22.888748 systemd[1]: Reloading...
Nov 23 23:01:22.995366 zram_generator::config[2740]: No configuration found.
Nov 23 23:01:23.187467 systemd[1]: Reloading finished in 298 ms.
Nov 23 23:01:23.220064 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 23:01:23.233853 systemd[1]: kubelet.service: Deactivated successfully.
Nov 23 23:01:23.235447 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:01:23.235535 systemd[1]: kubelet.service: Consumed 1.017s CPU time, 125.5M memory peak.
Nov 23 23:01:23.239311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 23:01:23.401069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:01:23.410969 (kubelet)[2782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 23 23:01:23.462710 kubelet[2782]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 23 23:01:23.463003 kubelet[2782]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 23 23:01:23.463050 kubelet[2782]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 23 23:01:23.463202 kubelet[2782]: I1123 23:01:23.463170 2782 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 23 23:01:23.477070 kubelet[2782]: I1123 23:01:23.477029 2782 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 23 23:01:23.477594 kubelet[2782]: I1123 23:01:23.477561 2782 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 23 23:01:23.479346 kubelet[2782]: I1123 23:01:23.478069 2782 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 23 23:01:23.479533 kubelet[2782]: I1123 23:01:23.479518 2782 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Nov 23 23:01:23.482067 kubelet[2782]: I1123 23:01:23.482025 2782 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 23 23:01:23.487485 kubelet[2782]: I1123 23:01:23.487457 2782 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 23 23:01:23.491227 kubelet[2782]: I1123 23:01:23.491199 2782 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 23 23:01:23.491539 kubelet[2782]: I1123 23:01:23.491512 2782 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 23 23:01:23.491758 kubelet[2782]: I1123 23:01:23.491541 2782 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-1-9-52b78fad11","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":t
rue,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 23:01:23.491844 kubelet[2782]: I1123 23:01:23.491766 2782 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 23:01:23.491844 kubelet[2782]: I1123 23:01:23.491776 2782 container_manager_linux.go:303] "Creating device plugin manager" Nov 23 23:01:23.491844 kubelet[2782]: I1123 23:01:23.491827 2782 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:01:23.492047 kubelet[2782]: I1123 23:01:23.492032 2782 kubelet.go:480] "Attempting to sync node with API server" Nov 23 23:01:23.492085 kubelet[2782]: I1123 23:01:23.492053 2782 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 23:01:23.492378 kubelet[2782]: I1123 23:01:23.492361 2782 kubelet.go:386] "Adding apiserver pod source" Nov 23 23:01:23.492417 kubelet[2782]: I1123 23:01:23.492389 2782 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 23:01:23.494487 kubelet[2782]: I1123 23:01:23.494455 2782 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 23:01:23.495034 kubelet[2782]: I1123 23:01:23.495013 2782 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 23 23:01:23.500952 kubelet[2782]: I1123 23:01:23.500048 2782 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 23 23:01:23.500952 kubelet[2782]: I1123 23:01:23.500091 2782 server.go:1289] "Started kubelet" Nov 23 23:01:23.502662 kubelet[2782]: I1123 23:01:23.502561 2782 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 23:01:23.503993 kubelet[2782]: I1123 23:01:23.503928 2782 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 
23:01:23.515624 kubelet[2782]: I1123 23:01:23.513851 2782 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 23:01:23.515624 kubelet[2782]: I1123 23:01:23.514915 2782 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 23:01:23.517171 kubelet[2782]: I1123 23:01:23.516617 2782 server.go:317] "Adding debug handlers to kubelet server" Nov 23 23:01:23.526494 kubelet[2782]: I1123 23:01:23.526451 2782 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 23:01:23.528130 kubelet[2782]: I1123 23:01:23.528107 2782 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 23 23:01:23.530467 kubelet[2782]: E1123 23:01:23.530425 2782 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-1-9-52b78fad11\" not found" Nov 23 23:01:23.531058 kubelet[2782]: I1123 23:01:23.530979 2782 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 23 23:01:23.533136 kubelet[2782]: I1123 23:01:23.533077 2782 reconciler.go:26] "Reconciler: start to sync state" Nov 23 23:01:23.534616 kubelet[2782]: I1123 23:01:23.534544 2782 factory.go:223] Registration of the systemd container factory successfully Nov 23 23:01:23.534814 kubelet[2782]: I1123 23:01:23.534794 2782 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 23:01:23.541897 kubelet[2782]: I1123 23:01:23.541863 2782 factory.go:223] Registration of the containerd container factory successfully Nov 23 23:01:23.558405 kubelet[2782]: I1123 23:01:23.558338 2782 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 23 23:01:23.559699 kubelet[2782]: I1123 23:01:23.559677 2782 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 23 23:01:23.559699 kubelet[2782]: I1123 23:01:23.559696 2782 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 23 23:01:23.559809 kubelet[2782]: I1123 23:01:23.559715 2782 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 23 23:01:23.559809 kubelet[2782]: I1123 23:01:23.559722 2782 kubelet.go:2436] "Starting kubelet main sync loop" Nov 23 23:01:23.559809 kubelet[2782]: E1123 23:01:23.559761 2782 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 23:01:23.599163 kubelet[2782]: I1123 23:01:23.599133 2782 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 23:01:23.599163 kubelet[2782]: I1123 23:01:23.599155 2782 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 23:01:23.599364 kubelet[2782]: I1123 23:01:23.599178 2782 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:01:23.599462 kubelet[2782]: I1123 23:01:23.599443 2782 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 23 23:01:23.599491 kubelet[2782]: I1123 23:01:23.599462 2782 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 23 23:01:23.599491 kubelet[2782]: I1123 23:01:23.599480 2782 policy_none.go:49] "None policy: Start" Nov 23 23:01:23.599491 kubelet[2782]: I1123 23:01:23.599489 2782 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 23 23:01:23.599552 kubelet[2782]: I1123 23:01:23.599501 2782 state_mem.go:35] "Initializing new in-memory state store" Nov 23 23:01:23.599607 kubelet[2782]: I1123 23:01:23.599595 2782 state_mem.go:75] "Updated machine memory state" Nov 23 23:01:23.603815 kubelet[2782]: E1123 23:01:23.603783 2782 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 23 23:01:23.603972 kubelet[2782]: I1123 
23:01:23.603948 2782 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 23:01:23.604009 kubelet[2782]: I1123 23:01:23.603967 2782 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 23:01:23.604495 kubelet[2782]: I1123 23:01:23.604474 2782 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 23:01:23.606612 kubelet[2782]: E1123 23:01:23.606592 2782 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 23 23:01:23.663382 kubelet[2782]: I1123 23:01:23.661617 2782 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-1-9-52b78fad11" Nov 23 23:01:23.663382 kubelet[2782]: I1123 23:01:23.661994 2782 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-1-9-52b78fad11" Nov 23 23:01:23.663382 kubelet[2782]: I1123 23:01:23.662200 2782 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-1-9-52b78fad11" Nov 23 23:01:23.673287 kubelet[2782]: E1123 23:01:23.673235 2782 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-1-9-52b78fad11\" already exists" pod="kube-system/kube-controller-manager-ci-4459-2-1-9-52b78fad11" Nov 23 23:01:23.708160 kubelet[2782]: I1123 23:01:23.708128 2782 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-1-9-52b78fad11" Nov 23 23:01:23.719684 kubelet[2782]: I1123 23:01:23.719577 2782 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-1-9-52b78fad11" Nov 23 23:01:23.719684 kubelet[2782]: I1123 23:01:23.719674 2782 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-1-9-52b78fad11" Nov 23 23:01:23.734175 kubelet[2782]: I1123 23:01:23.734077 2782 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f99f1437857663de2448efdcbe4fe48-ca-certs\") pod \"kube-apiserver-ci-4459-2-1-9-52b78fad11\" (UID: \"8f99f1437857663de2448efdcbe4fe48\") " pod="kube-system/kube-apiserver-ci-4459-2-1-9-52b78fad11" Nov 23 23:01:23.734175 kubelet[2782]: I1123 23:01:23.734141 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/48fb9e5ad90b2526df3a8a5b20431288-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-1-9-52b78fad11\" (UID: \"48fb9e5ad90b2526df3a8a5b20431288\") " pod="kube-system/kube-controller-manager-ci-4459-2-1-9-52b78fad11" Nov 23 23:01:23.734175 kubelet[2782]: I1123 23:01:23.734180 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/48fb9e5ad90b2526df3a8a5b20431288-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-1-9-52b78fad11\" (UID: \"48fb9e5ad90b2526df3a8a5b20431288\") " pod="kube-system/kube-controller-manager-ci-4459-2-1-9-52b78fad11" Nov 23 23:01:23.734555 kubelet[2782]: I1123 23:01:23.734208 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/81a5cab10aedfeb917050ccb184f5ae7-kubeconfig\") pod \"kube-scheduler-ci-4459-2-1-9-52b78fad11\" (UID: \"81a5cab10aedfeb917050ccb184f5ae7\") " pod="kube-system/kube-scheduler-ci-4459-2-1-9-52b78fad11" Nov 23 23:01:23.734555 kubelet[2782]: I1123 23:01:23.734238 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f99f1437857663de2448efdcbe4fe48-k8s-certs\") pod \"kube-apiserver-ci-4459-2-1-9-52b78fad11\" (UID: \"8f99f1437857663de2448efdcbe4fe48\") " 
pod="kube-system/kube-apiserver-ci-4459-2-1-9-52b78fad11" Nov 23 23:01:23.734555 kubelet[2782]: I1123 23:01:23.734265 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f99f1437857663de2448efdcbe4fe48-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-1-9-52b78fad11\" (UID: \"8f99f1437857663de2448efdcbe4fe48\") " pod="kube-system/kube-apiserver-ci-4459-2-1-9-52b78fad11" Nov 23 23:01:23.734555 kubelet[2782]: I1123 23:01:23.734293 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/48fb9e5ad90b2526df3a8a5b20431288-ca-certs\") pod \"kube-controller-manager-ci-4459-2-1-9-52b78fad11\" (UID: \"48fb9e5ad90b2526df3a8a5b20431288\") " pod="kube-system/kube-controller-manager-ci-4459-2-1-9-52b78fad11" Nov 23 23:01:23.735365 kubelet[2782]: I1123 23:01:23.734320 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/48fb9e5ad90b2526df3a8a5b20431288-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-1-9-52b78fad11\" (UID: \"48fb9e5ad90b2526df3a8a5b20431288\") " pod="kube-system/kube-controller-manager-ci-4459-2-1-9-52b78fad11" Nov 23 23:01:23.735842 kubelet[2782]: I1123 23:01:23.735624 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/48fb9e5ad90b2526df3a8a5b20431288-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-1-9-52b78fad11\" (UID: \"48fb9e5ad90b2526df3a8a5b20431288\") " pod="kube-system/kube-controller-manager-ci-4459-2-1-9-52b78fad11" Nov 23 23:01:24.506050 kubelet[2782]: I1123 23:01:24.505962 2782 apiserver.go:52] "Watching apiserver" Nov 23 23:01:24.531695 kubelet[2782]: I1123 23:01:24.531630 2782 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 23 23:01:24.578982 kubelet[2782]: I1123 23:01:24.578633 2782 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-1-9-52b78fad11" Nov 23 23:01:24.578982 kubelet[2782]: I1123 23:01:24.578902 2782 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-1-9-52b78fad11" Nov 23 23:01:24.595637 kubelet[2782]: E1123 23:01:24.595591 2782 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-1-9-52b78fad11\" already exists" pod="kube-system/kube-controller-manager-ci-4459-2-1-9-52b78fad11" Nov 23 23:01:24.596786 kubelet[2782]: E1123 23:01:24.596751 2782 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-1-9-52b78fad11\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-1-9-52b78fad11" Nov 23 23:01:24.613974 kubelet[2782]: I1123 23:01:24.611498 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-1-9-52b78fad11" podStartSLOduration=3.611392802 podStartE2EDuration="3.611392802s" podCreationTimestamp="2025-11-23 23:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:01:24.611344876 +0000 UTC m=+1.193750660" watchObservedRunningTime="2025-11-23 23:01:24.611392802 +0000 UTC m=+1.193798586" Nov 23 23:01:24.624736 kubelet[2782]: I1123 23:01:24.624669 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-1-9-52b78fad11" podStartSLOduration=1.624637163 podStartE2EDuration="1.624637163s" podCreationTimestamp="2025-11-23 23:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 
23:01:24.6236253 +0000 UTC m=+1.206031084" watchObservedRunningTime="2025-11-23 23:01:24.624637163 +0000 UTC m=+1.207042947" Nov 23 23:01:24.645431 kubelet[2782]: I1123 23:01:24.645214 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-1-9-52b78fad11" podStartSLOduration=1.645197203 podStartE2EDuration="1.645197203s" podCreationTimestamp="2025-11-23 23:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:01:24.644280753 +0000 UTC m=+1.226686537" watchObservedRunningTime="2025-11-23 23:01:24.645197203 +0000 UTC m=+1.227602987" Nov 23 23:01:28.860248 kubelet[2782]: I1123 23:01:28.860047 2782 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 23 23:01:28.861744 kubelet[2782]: I1123 23:01:28.860907 2782 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 23 23:01:28.861791 containerd[1525]: time="2025-11-23T23:01:28.860630380Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 23 23:01:29.733178 systemd[1]: Created slice kubepods-besteffort-poda28b6450_0bfc_4eb3_9d7f_e06baa27450f.slice - libcontainer container kubepods-besteffort-poda28b6450_0bfc_4eb3_9d7f_e06baa27450f.slice. 
Nov 23 23:01:29.778755 kubelet[2782]: I1123 23:01:29.778499 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a28b6450-0bfc-4eb3-9d7f-e06baa27450f-kube-proxy\") pod \"kube-proxy-qsl58\" (UID: \"a28b6450-0bfc-4eb3-9d7f-e06baa27450f\") " pod="kube-system/kube-proxy-qsl58" Nov 23 23:01:29.778755 kubelet[2782]: I1123 23:01:29.778600 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a28b6450-0bfc-4eb3-9d7f-e06baa27450f-xtables-lock\") pod \"kube-proxy-qsl58\" (UID: \"a28b6450-0bfc-4eb3-9d7f-e06baa27450f\") " pod="kube-system/kube-proxy-qsl58" Nov 23 23:01:29.778755 kubelet[2782]: I1123 23:01:29.778623 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a28b6450-0bfc-4eb3-9d7f-e06baa27450f-lib-modules\") pod \"kube-proxy-qsl58\" (UID: \"a28b6450-0bfc-4eb3-9d7f-e06baa27450f\") " pod="kube-system/kube-proxy-qsl58" Nov 23 23:01:29.778755 kubelet[2782]: I1123 23:01:29.778645 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxv27\" (UniqueName: \"kubernetes.io/projected/a28b6450-0bfc-4eb3-9d7f-e06baa27450f-kube-api-access-mxv27\") pod \"kube-proxy-qsl58\" (UID: \"a28b6450-0bfc-4eb3-9d7f-e06baa27450f\") " pod="kube-system/kube-proxy-qsl58" Nov 23 23:01:30.043885 containerd[1525]: time="2025-11-23T23:01:30.043766040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qsl58,Uid:a28b6450-0bfc-4eb3-9d7f-e06baa27450f,Namespace:kube-system,Attempt:0,}" Nov 23 23:01:30.074360 containerd[1525]: time="2025-11-23T23:01:30.074010376Z" level=info msg="connecting to shim 5935bbfa9435cb8861bd1ed99df8a3f0c665e587b2c49e27e864f314923500d3" 
address="unix:///run/containerd/s/55dcb53bde24922ba8dfd0f9c102365d4479716425a01755e7f6dfafa4fe4043" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:01:30.120597 systemd[1]: Started cri-containerd-5935bbfa9435cb8861bd1ed99df8a3f0c665e587b2c49e27e864f314923500d3.scope - libcontainer container 5935bbfa9435cb8861bd1ed99df8a3f0c665e587b2c49e27e864f314923500d3. Nov 23 23:01:30.137689 systemd[1]: Created slice kubepods-besteffort-pod2f3cda66_afbc_4e0a_9212_788d8a83c058.slice - libcontainer container kubepods-besteffort-pod2f3cda66_afbc_4e0a_9212_788d8a83c058.slice. Nov 23 23:01:30.173057 containerd[1525]: time="2025-11-23T23:01:30.173019375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qsl58,Uid:a28b6450-0bfc-4eb3-9d7f-e06baa27450f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5935bbfa9435cb8861bd1ed99df8a3f0c665e587b2c49e27e864f314923500d3\"" Nov 23 23:01:30.179202 containerd[1525]: time="2025-11-23T23:01:30.179102614Z" level=info msg="CreateContainer within sandbox \"5935bbfa9435cb8861bd1ed99df8a3f0c665e587b2c49e27e864f314923500d3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 23 23:01:30.181312 kubelet[2782]: I1123 23:01:30.181270 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2f3cda66-afbc-4e0a-9212-788d8a83c058-var-lib-calico\") pod \"tigera-operator-7dcd859c48-cl4h9\" (UID: \"2f3cda66-afbc-4e0a-9212-788d8a83c058\") " pod="tigera-operator/tigera-operator-7dcd859c48-cl4h9" Nov 23 23:01:30.181312 kubelet[2782]: I1123 23:01:30.181317 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qznhr\" (UniqueName: \"kubernetes.io/projected/2f3cda66-afbc-4e0a-9212-788d8a83c058-kube-api-access-qznhr\") pod \"tigera-operator-7dcd859c48-cl4h9\" (UID: \"2f3cda66-afbc-4e0a-9212-788d8a83c058\") " pod="tigera-operator/tigera-operator-7dcd859c48-cl4h9" 
Nov 23 23:01:30.194666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1354937895.mount: Deactivated successfully. Nov 23 23:01:30.194815 containerd[1525]: time="2025-11-23T23:01:30.194734045Z" level=info msg="Container c9d63256541e7b21628b6b5e8e21ef6b0df3dbdab81fb0b3cbbd07362f0a9fd5: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:01:30.206025 containerd[1525]: time="2025-11-23T23:01:30.205838712Z" level=info msg="CreateContainer within sandbox \"5935bbfa9435cb8861bd1ed99df8a3f0c665e587b2c49e27e864f314923500d3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c9d63256541e7b21628b6b5e8e21ef6b0df3dbdab81fb0b3cbbd07362f0a9fd5\"" Nov 23 23:01:30.206869 containerd[1525]: time="2025-11-23T23:01:30.206706340Z" level=info msg="StartContainer for \"c9d63256541e7b21628b6b5e8e21ef6b0df3dbdab81fb0b3cbbd07362f0a9fd5\"" Nov 23 23:01:30.208912 containerd[1525]: time="2025-11-23T23:01:30.208873610Z" level=info msg="connecting to shim c9d63256541e7b21628b6b5e8e21ef6b0df3dbdab81fb0b3cbbd07362f0a9fd5" address="unix:///run/containerd/s/55dcb53bde24922ba8dfd0f9c102365d4479716425a01755e7f6dfafa4fe4043" protocol=ttrpc version=3 Nov 23 23:01:30.229562 systemd[1]: Started cri-containerd-c9d63256541e7b21628b6b5e8e21ef6b0df3dbdab81fb0b3cbbd07362f0a9fd5.scope - libcontainer container c9d63256541e7b21628b6b5e8e21ef6b0df3dbdab81fb0b3cbbd07362f0a9fd5. 
Nov 23 23:01:30.316875 containerd[1525]: time="2025-11-23T23:01:30.316398673Z" level=info msg="StartContainer for \"c9d63256541e7b21628b6b5e8e21ef6b0df3dbdab81fb0b3cbbd07362f0a9fd5\" returns successfully" Nov 23 23:01:30.442954 containerd[1525]: time="2025-11-23T23:01:30.442907424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cl4h9,Uid:2f3cda66-afbc-4e0a-9212-788d8a83c058,Namespace:tigera-operator,Attempt:0,}" Nov 23 23:01:30.464710 containerd[1525]: time="2025-11-23T23:01:30.464354142Z" level=info msg="connecting to shim 24b2a11a2740fb9ac401dcfafc8fd6127c184543193bb40f030fd83a91015812" address="unix:///run/containerd/s/22f79a000d9cf82fa8cf619a8879880dcddf20e3c9841bf05c8b439bf64423a5" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:01:30.495575 systemd[1]: Started cri-containerd-24b2a11a2740fb9ac401dcfafc8fd6127c184543193bb40f030fd83a91015812.scope - libcontainer container 24b2a11a2740fb9ac401dcfafc8fd6127c184543193bb40f030fd83a91015812. Nov 23 23:01:30.556134 containerd[1525]: time="2025-11-23T23:01:30.556058589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cl4h9,Uid:2f3cda66-afbc-4e0a-9212-788d8a83c058,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"24b2a11a2740fb9ac401dcfafc8fd6127c184543193bb40f030fd83a91015812\"" Nov 23 23:01:30.560117 containerd[1525]: time="2025-11-23T23:01:30.560056448Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 23 23:01:32.178036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2496950184.mount: Deactivated successfully. 
Nov 23 23:01:32.574381 containerd[1525]: time="2025-11-23T23:01:32.574296687Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:32.575634 containerd[1525]: time="2025-11-23T23:01:32.575351334Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 23 23:01:32.576654 containerd[1525]: time="2025-11-23T23:01:32.576606245Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:32.579550 containerd[1525]: time="2025-11-23T23:01:32.579511635Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:32.580745 containerd[1525]: time="2025-11-23T23:01:32.580315372Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.020189156s" Nov 23 23:01:32.580745 containerd[1525]: time="2025-11-23T23:01:32.580372339Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 23 23:01:32.585682 containerd[1525]: time="2025-11-23T23:01:32.585646694Z" level=info msg="CreateContainer within sandbox \"24b2a11a2740fb9ac401dcfafc8fd6127c184543193bb40f030fd83a91015812\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 23 23:01:32.594832 containerd[1525]: time="2025-11-23T23:01:32.594794396Z" level=info msg="Container 
46ad6a4660aa5d0b7b7f28e59100e80954756f03500f16025864e3ae6cef15be: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:01:32.606789 containerd[1525]: time="2025-11-23T23:01:32.606735794Z" level=info msg="CreateContainer within sandbox \"24b2a11a2740fb9ac401dcfafc8fd6127c184543193bb40f030fd83a91015812\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"46ad6a4660aa5d0b7b7f28e59100e80954756f03500f16025864e3ae6cef15be\"" Nov 23 23:01:32.607782 containerd[1525]: time="2025-11-23T23:01:32.607598018Z" level=info msg="StartContainer for \"46ad6a4660aa5d0b7b7f28e59100e80954756f03500f16025864e3ae6cef15be\"" Nov 23 23:01:32.610080 containerd[1525]: time="2025-11-23T23:01:32.610009468Z" level=info msg="connecting to shim 46ad6a4660aa5d0b7b7f28e59100e80954756f03500f16025864e3ae6cef15be" address="unix:///run/containerd/s/22f79a000d9cf82fa8cf619a8879880dcddf20e3c9841bf05c8b439bf64423a5" protocol=ttrpc version=3 Nov 23 23:01:32.644621 systemd[1]: Started cri-containerd-46ad6a4660aa5d0b7b7f28e59100e80954756f03500f16025864e3ae6cef15be.scope - libcontainer container 46ad6a4660aa5d0b7b7f28e59100e80954756f03500f16025864e3ae6cef15be. 
Nov 23 23:01:32.685005 containerd[1525]: time="2025-11-23T23:01:32.684507479Z" level=info msg="StartContainer for \"46ad6a4660aa5d0b7b7f28e59100e80954756f03500f16025864e3ae6cef15be\" returns successfully" Nov 23 23:01:33.621212 kubelet[2782]: I1123 23:01:33.621095 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qsl58" podStartSLOduration=4.620316498 podStartE2EDuration="4.620316498s" podCreationTimestamp="2025-11-23 23:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:01:30.609695604 +0000 UTC m=+7.192101428" watchObservedRunningTime="2025-11-23 23:01:33.620316498 +0000 UTC m=+10.202722282" Nov 23 23:01:34.880495 kubelet[2782]: I1123 23:01:34.880258 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-cl4h9" podStartSLOduration=2.8571846340000002 podStartE2EDuration="4.880240543s" podCreationTimestamp="2025-11-23 23:01:30 +0000 UTC" firstStartedPulling="2025-11-23 23:01:30.558258703 +0000 UTC m=+7.140664487" lastFinishedPulling="2025-11-23 23:01:32.581314652 +0000 UTC m=+9.163720396" observedRunningTime="2025-11-23 23:01:33.621348821 +0000 UTC m=+10.203754605" watchObservedRunningTime="2025-11-23 23:01:34.880240543 +0000 UTC m=+11.462646327" Nov 23 23:01:38.962765 sudo[1849]: pam_unix(sudo:session): session closed for user root Nov 23 23:01:39.120695 sshd[1848]: Connection closed by 139.178.68.195 port 56386 Nov 23 23:01:39.121184 sshd-session[1845]: pam_unix(sshd:session): session closed for user core Nov 23 23:01:39.127528 systemd[1]: session-7.scope: Deactivated successfully. Nov 23 23:01:39.127546 systemd-logind[1496]: Session 7 logged out. Waiting for processes to exit. Nov 23 23:01:39.128253 systemd[1]: session-7.scope: Consumed 6.453s CPU time, 221.2M memory peak. 
Nov 23 23:01:39.129477 systemd[1]: sshd@6-159.69.184.20:22-139.178.68.195:56386.service: Deactivated successfully. Nov 23 23:01:39.138479 systemd-logind[1496]: Removed session 7. Nov 23 23:01:50.498925 systemd[1]: Created slice kubepods-besteffort-pod12b107fc_8a0f_4cf8_bff0_f41a1bd0c18e.slice - libcontainer container kubepods-besteffort-pod12b107fc_8a0f_4cf8_bff0_f41a1bd0c18e.slice. Nov 23 23:01:50.509984 kubelet[2782]: I1123 23:01:50.509921 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c9sj\" (UniqueName: \"kubernetes.io/projected/12b107fc-8a0f-4cf8-bff0-f41a1bd0c18e-kube-api-access-8c9sj\") pod \"calico-typha-7458994975-mhjm8\" (UID: \"12b107fc-8a0f-4cf8-bff0-f41a1bd0c18e\") " pod="calico-system/calico-typha-7458994975-mhjm8" Nov 23 23:01:50.509984 kubelet[2782]: I1123 23:01:50.509981 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12b107fc-8a0f-4cf8-bff0-f41a1bd0c18e-tigera-ca-bundle\") pod \"calico-typha-7458994975-mhjm8\" (UID: \"12b107fc-8a0f-4cf8-bff0-f41a1bd0c18e\") " pod="calico-system/calico-typha-7458994975-mhjm8" Nov 23 23:01:50.510396 kubelet[2782]: I1123 23:01:50.510013 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/12b107fc-8a0f-4cf8-bff0-f41a1bd0c18e-typha-certs\") pod \"calico-typha-7458994975-mhjm8\" (UID: \"12b107fc-8a0f-4cf8-bff0-f41a1bd0c18e\") " pod="calico-system/calico-typha-7458994975-mhjm8" Nov 23 23:01:50.728935 systemd[1]: Created slice kubepods-besteffort-pod19ad9689_208e_455f_9ae2_1a0a9063031f.slice - libcontainer container kubepods-besteffort-pod19ad9689_208e_455f_9ae2_1a0a9063031f.slice. 
Nov 23 23:01:50.804440 containerd[1525]: time="2025-11-23T23:01:50.804043412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7458994975-mhjm8,Uid:12b107fc-8a0f-4cf8-bff0-f41a1bd0c18e,Namespace:calico-system,Attempt:0,}" Nov 23 23:01:50.812853 kubelet[2782]: I1123 23:01:50.812779 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/19ad9689-208e-455f-9ae2-1a0a9063031f-cni-net-dir\") pod \"calico-node-58wcp\" (UID: \"19ad9689-208e-455f-9ae2-1a0a9063031f\") " pod="calico-system/calico-node-58wcp" Nov 23 23:01:50.813382 kubelet[2782]: I1123 23:01:50.813202 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/19ad9689-208e-455f-9ae2-1a0a9063031f-policysync\") pod \"calico-node-58wcp\" (UID: \"19ad9689-208e-455f-9ae2-1a0a9063031f\") " pod="calico-system/calico-node-58wcp" Nov 23 23:01:50.813382 kubelet[2782]: I1123 23:01:50.813282 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/19ad9689-208e-455f-9ae2-1a0a9063031f-cni-bin-dir\") pod \"calico-node-58wcp\" (UID: \"19ad9689-208e-455f-9ae2-1a0a9063031f\") " pod="calico-system/calico-node-58wcp" Nov 23 23:01:50.813519 kubelet[2782]: I1123 23:01:50.813412 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/19ad9689-208e-455f-9ae2-1a0a9063031f-cni-log-dir\") pod \"calico-node-58wcp\" (UID: \"19ad9689-208e-455f-9ae2-1a0a9063031f\") " pod="calico-system/calico-node-58wcp" Nov 23 23:01:50.813519 kubelet[2782]: I1123 23:01:50.813488 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/19ad9689-208e-455f-9ae2-1a0a9063031f-tigera-ca-bundle\") pod \"calico-node-58wcp\" (UID: \"19ad9689-208e-455f-9ae2-1a0a9063031f\") " pod="calico-system/calico-node-58wcp" Nov 23 23:01:50.813625 kubelet[2782]: I1123 23:01:50.813526 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/19ad9689-208e-455f-9ae2-1a0a9063031f-var-run-calico\") pod \"calico-node-58wcp\" (UID: \"19ad9689-208e-455f-9ae2-1a0a9063031f\") " pod="calico-system/calico-node-58wcp" Nov 23 23:01:50.813625 kubelet[2782]: I1123 23:01:50.813570 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfvvn\" (UniqueName: \"kubernetes.io/projected/19ad9689-208e-455f-9ae2-1a0a9063031f-kube-api-access-wfvvn\") pod \"calico-node-58wcp\" (UID: \"19ad9689-208e-455f-9ae2-1a0a9063031f\") " pod="calico-system/calico-node-58wcp" Nov 23 23:01:50.813625 kubelet[2782]: I1123 23:01:50.813609 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/19ad9689-208e-455f-9ae2-1a0a9063031f-var-lib-calico\") pod \"calico-node-58wcp\" (UID: \"19ad9689-208e-455f-9ae2-1a0a9063031f\") " pod="calico-system/calico-node-58wcp" Nov 23 23:01:50.813752 kubelet[2782]: I1123 23:01:50.813648 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19ad9689-208e-455f-9ae2-1a0a9063031f-lib-modules\") pod \"calico-node-58wcp\" (UID: \"19ad9689-208e-455f-9ae2-1a0a9063031f\") " pod="calico-system/calico-node-58wcp" Nov 23 23:01:50.813752 kubelet[2782]: I1123 23:01:50.813684 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/19ad9689-208e-455f-9ae2-1a0a9063031f-node-certs\") pod \"calico-node-58wcp\" (UID: \"19ad9689-208e-455f-9ae2-1a0a9063031f\") " pod="calico-system/calico-node-58wcp" Nov 23 23:01:50.815266 kubelet[2782]: I1123 23:01:50.815206 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19ad9689-208e-455f-9ae2-1a0a9063031f-xtables-lock\") pod \"calico-node-58wcp\" (UID: \"19ad9689-208e-455f-9ae2-1a0a9063031f\") " pod="calico-system/calico-node-58wcp" Nov 23 23:01:50.815863 kubelet[2782]: I1123 23:01:50.815827 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/19ad9689-208e-455f-9ae2-1a0a9063031f-flexvol-driver-host\") pod \"calico-node-58wcp\" (UID: \"19ad9689-208e-455f-9ae2-1a0a9063031f\") " pod="calico-system/calico-node-58wcp" Nov 23 23:01:50.833683 containerd[1525]: time="2025-11-23T23:01:50.833560312Z" level=info msg="connecting to shim 4ed7bfbb25d5256a033ee65af5b2f62d2790170b51fff06612dda2adb7a9edd4" address="unix:///run/containerd/s/41fb57d006d7909cc9fddd732cf8b0beb02d2705a836790c26f3127479732009" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:01:50.863654 systemd[1]: Started cri-containerd-4ed7bfbb25d5256a033ee65af5b2f62d2790170b51fff06612dda2adb7a9edd4.scope - libcontainer container 4ed7bfbb25d5256a033ee65af5b2f62d2790170b51fff06612dda2adb7a9edd4. 
Nov 23 23:01:50.916081 kubelet[2782]: E1123 23:01:50.916025 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jnfrc" podUID="16769e22-23bd-4950-9cc0-72958bdfa903" Nov 23 23:01:50.927039 kubelet[2782]: E1123 23:01:50.927003 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:50.927039 kubelet[2782]: W1123 23:01:50.927030 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:50.927189 kubelet[2782]: E1123 23:01:50.927062 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:50.967836 kubelet[2782]: E1123 23:01:50.967802 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:50.967836 kubelet[2782]: W1123 23:01:50.967827 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:50.967836 kubelet[2782]: E1123 23:01:50.967848 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:51.002847 containerd[1525]: time="2025-11-23T23:01:51.002705087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7458994975-mhjm8,Uid:12b107fc-8a0f-4cf8-bff0-f41a1bd0c18e,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ed7bfbb25d5256a033ee65af5b2f62d2790170b51fff06612dda2adb7a9edd4\"" Nov 23 23:01:51.004999 kubelet[2782]: E1123 23:01:51.004887 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.004999 kubelet[2782]: W1123 23:01:51.004908 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.004999 kubelet[2782]: E1123 23:01:51.004932 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:51.007007 kubelet[2782]: E1123 23:01:51.006939 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.008096 kubelet[2782]: W1123 23:01:51.007388 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.008370 kubelet[2782]: E1123 23:01:51.008225 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:51.008535 kubelet[2782]: E1123 23:01:51.008521 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.009363 kubelet[2782]: W1123 23:01:51.008721 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.009363 kubelet[2782]: E1123 23:01:51.008740 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:51.009458 containerd[1525]: time="2025-11-23T23:01:51.008827415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 23 23:01:51.009905 kubelet[2782]: E1123 23:01:51.009776 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.009905 kubelet[2782]: W1123 23:01:51.009789 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.009905 kubelet[2782]: E1123 23:01:51.009802 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:51.010782 kubelet[2782]: E1123 23:01:51.010745 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.010782 kubelet[2782]: W1123 23:01:51.010758 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.010922 kubelet[2782]: E1123 23:01:51.010860 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:51.012353 kubelet[2782]: E1123 23:01:51.012239 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.012353 kubelet[2782]: W1123 23:01:51.012255 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.012353 kubelet[2782]: E1123 23:01:51.012267 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:51.012927 kubelet[2782]: E1123 23:01:51.012807 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.012927 kubelet[2782]: W1123 23:01:51.012822 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.012927 kubelet[2782]: E1123 23:01:51.012841 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:51.013581 kubelet[2782]: E1123 23:01:51.013499 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.013581 kubelet[2782]: W1123 23:01:51.013512 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.013581 kubelet[2782]: E1123 23:01:51.013523 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:51.014392 kubelet[2782]: E1123 23:01:51.014195 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.014392 kubelet[2782]: W1123 23:01:51.014211 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.014392 kubelet[2782]: E1123 23:01:51.014223 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:51.014670 kubelet[2782]: E1123 23:01:51.014634 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.014670 kubelet[2782]: W1123 23:01:51.014660 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.014670 kubelet[2782]: E1123 23:01:51.014672 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:51.015621 kubelet[2782]: E1123 23:01:51.015593 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.015621 kubelet[2782]: W1123 23:01:51.015613 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.015708 kubelet[2782]: E1123 23:01:51.015627 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:51.015865 kubelet[2782]: E1123 23:01:51.015836 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.015865 kubelet[2782]: W1123 23:01:51.015853 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.015865 kubelet[2782]: E1123 23:01:51.015863 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:51.016140 kubelet[2782]: E1123 23:01:51.016113 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.016140 kubelet[2782]: W1123 23:01:51.016129 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.016140 kubelet[2782]: E1123 23:01:51.016140 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:51.016529 kubelet[2782]: E1123 23:01:51.016508 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.016529 kubelet[2782]: W1123 23:01:51.016526 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.016611 kubelet[2782]: E1123 23:01:51.016540 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:51.017652 kubelet[2782]: E1123 23:01:51.017630 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.017652 kubelet[2782]: W1123 23:01:51.017646 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.017652 kubelet[2782]: E1123 23:01:51.017656 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:51.017828 kubelet[2782]: E1123 23:01:51.017812 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.017828 kubelet[2782]: W1123 23:01:51.017823 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.017908 kubelet[2782]: E1123 23:01:51.017832 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:51.018080 kubelet[2782]: E1123 23:01:51.018065 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.018080 kubelet[2782]: W1123 23:01:51.018078 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.018142 kubelet[2782]: E1123 23:01:51.018089 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:51.018230 kubelet[2782]: E1123 23:01:51.018220 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.018230 kubelet[2782]: W1123 23:01:51.018229 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.018297 kubelet[2782]: E1123 23:01:51.018237 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:51.018631 kubelet[2782]: E1123 23:01:51.018609 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.018631 kubelet[2782]: W1123 23:01:51.018623 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.018811 kubelet[2782]: E1123 23:01:51.018636 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:51.019609 kubelet[2782]: E1123 23:01:51.019506 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.019609 kubelet[2782]: W1123 23:01:51.019523 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.019609 kubelet[2782]: E1123 23:01:51.019536 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:51.020279 kubelet[2782]: E1123 23:01:51.019769 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.020279 kubelet[2782]: W1123 23:01:51.019777 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.020279 kubelet[2782]: E1123 23:01:51.019787 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:51.020279 kubelet[2782]: I1123 23:01:51.019813 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/16769e22-23bd-4950-9cc0-72958bdfa903-kubelet-dir\") pod \"csi-node-driver-jnfrc\" (UID: \"16769e22-23bd-4950-9cc0-72958bdfa903\") " pod="calico-system/csi-node-driver-jnfrc" Nov 23 23:01:51.020279 kubelet[2782]: E1123 23:01:51.019945 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.020279 kubelet[2782]: W1123 23:01:51.019953 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.020279 kubelet[2782]: E1123 23:01:51.019975 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:51.020279 kubelet[2782]: I1123 23:01:51.019997 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf4f6\" (UniqueName: \"kubernetes.io/projected/16769e22-23bd-4950-9cc0-72958bdfa903-kube-api-access-gf4f6\") pod \"csi-node-driver-jnfrc\" (UID: \"16769e22-23bd-4950-9cc0-72958bdfa903\") " pod="calico-system/csi-node-driver-jnfrc" Nov 23 23:01:51.021555 kubelet[2782]: E1123 23:01:51.021209 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.021555 kubelet[2782]: W1123 23:01:51.021237 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.021555 kubelet[2782]: E1123 23:01:51.021262 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:51.022896 kubelet[2782]: E1123 23:01:51.022628 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.022896 kubelet[2782]: W1123 23:01:51.022659 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.022896 kubelet[2782]: E1123 23:01:51.022684 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:51.024478 kubelet[2782]: E1123 23:01:51.024458 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.024739 kubelet[2782]: W1123 23:01:51.024719 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.025215 kubelet[2782]: E1123 23:01:51.025081 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:51.025215 kubelet[2782]: I1123 23:01:51.025155 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/16769e22-23bd-4950-9cc0-72958bdfa903-socket-dir\") pod \"csi-node-driver-jnfrc\" (UID: \"16769e22-23bd-4950-9cc0-72958bdfa903\") " pod="calico-system/csi-node-driver-jnfrc" Nov 23 23:01:51.025602 kubelet[2782]: E1123 23:01:51.025534 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.025832 kubelet[2782]: W1123 23:01:51.025750 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.025832 kubelet[2782]: E1123 23:01:51.025775 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:51.026315 kubelet[2782]: E1123 23:01:51.026204 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.026315 kubelet[2782]: W1123 23:01:51.026285 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.026574 kubelet[2782]: E1123 23:01:51.026427 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:51.026853 kubelet[2782]: E1123 23:01:51.026839 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.026946 kubelet[2782]: W1123 23:01:51.026934 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.027040 kubelet[2782]: E1123 23:01:51.027028 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:51.027181 kubelet[2782]: I1123 23:01:51.027122 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/16769e22-23bd-4950-9cc0-72958bdfa903-varrun\") pod \"csi-node-driver-jnfrc\" (UID: \"16769e22-23bd-4950-9cc0-72958bdfa903\") " pod="calico-system/csi-node-driver-jnfrc" Nov 23 23:01:51.027387 kubelet[2782]: E1123 23:01:51.027323 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.027470 kubelet[2782]: W1123 23:01:51.027386 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.027470 kubelet[2782]: E1123 23:01:51.027402 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:51.027775 kubelet[2782]: E1123 23:01:51.027760 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.027775 kubelet[2782]: W1123 23:01:51.027774 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.027864 kubelet[2782]: E1123 23:01:51.027787 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:51.028444 kubelet[2782]: E1123 23:01:51.028424 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.028444 kubelet[2782]: W1123 23:01:51.028444 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.028525 kubelet[2782]: E1123 23:01:51.028457 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:51.028525 kubelet[2782]: I1123 23:01:51.028485 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/16769e22-23bd-4950-9cc0-72958bdfa903-registration-dir\") pod \"csi-node-driver-jnfrc\" (UID: \"16769e22-23bd-4950-9cc0-72958bdfa903\") " pod="calico-system/csi-node-driver-jnfrc" Nov 23 23:01:51.029310 kubelet[2782]: E1123 23:01:51.029274 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.029310 kubelet[2782]: W1123 23:01:51.029297 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.029310 kubelet[2782]: E1123 23:01:51.029312 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:51.031700 kubelet[2782]: E1123 23:01:51.031671 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.031700 kubelet[2782]: W1123 23:01:51.031696 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.031877 kubelet[2782]: E1123 23:01:51.031717 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:51.032014 kubelet[2782]: E1123 23:01:51.031997 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.032014 kubelet[2782]: W1123 23:01:51.032012 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.032115 kubelet[2782]: E1123 23:01:51.032025 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:51.033527 kubelet[2782]: E1123 23:01:51.033505 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:51.033527 kubelet[2782]: W1123 23:01:51.033525 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:51.033644 kubelet[2782]: E1123 23:01:51.033542 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:51.035900 containerd[1525]: time="2025-11-23T23:01:51.035866340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-58wcp,Uid:19ad9689-208e-455f-9ae2-1a0a9063031f,Namespace:calico-system,Attempt:0,}" Nov 23 23:01:51.063188 containerd[1525]: time="2025-11-23T23:01:51.062967781Z" level=info msg="connecting to shim 847bd93d27de0a45b6834d4e095481d41b22f008940a4bbd26c305b9a4b6f750" address="unix:///run/containerd/s/924a8d030d3f7e654e95206155fd726198a8e70d75aa0757f1c90b0e9d0a9905" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:01:51.099580 systemd[1]: Started cri-containerd-847bd93d27de0a45b6834d4e095481d41b22f008940a4bbd26c305b9a4b6f750.scope - libcontainer container 847bd93d27de0a45b6834d4e095481d41b22f008940a4bbd26c305b9a4b6f750. 
Nov 23 23:01:51.130411 kubelet[2782]: E1123 23:01:51.130228 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.130411 kubelet[2782]: W1123 23:01:51.130260 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.130411 kubelet[2782]: E1123 23:01:51.130284 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.130745 kubelet[2782]: E1123 23:01:51.130614 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.130745 kubelet[2782]: W1123 23:01:51.130637 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.130745 kubelet[2782]: E1123 23:01:51.130652 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.131373 kubelet[2782]: E1123 23:01:51.131319 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.131373 kubelet[2782]: W1123 23:01:51.131368 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.131672 kubelet[2782]: E1123 23:01:51.131384 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.131752 kubelet[2782]: E1123 23:01:51.131730 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.131752 kubelet[2782]: W1123 23:01:51.131747 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.131894 kubelet[2782]: E1123 23:01:51.131759 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.132394 kubelet[2782]: E1123 23:01:51.132370 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.132394 kubelet[2782]: W1123 23:01:51.132387 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.132711 kubelet[2782]: E1123 23:01:51.132403 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.132711 kubelet[2782]: E1123 23:01:51.132623 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.132711 kubelet[2782]: W1123 23:01:51.132631 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.132711 kubelet[2782]: E1123 23:01:51.132640 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.133127 kubelet[2782]: E1123 23:01:51.132803 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.133127 kubelet[2782]: W1123 23:01:51.132811 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.133127 kubelet[2782]: E1123 23:01:51.132820 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.133865 kubelet[2782]: E1123 23:01:51.133743 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.133865 kubelet[2782]: W1123 23:01:51.133766 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.134168 kubelet[2782]: E1123 23:01:51.134056 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.135097 kubelet[2782]: E1123 23:01:51.135074 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.135308 kubelet[2782]: W1123 23:01:51.135230 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.135308 kubelet[2782]: E1123 23:01:51.135264 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.136189 kubelet[2782]: E1123 23:01:51.136154 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.136404 kubelet[2782]: W1123 23:01:51.136288 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.136404 kubelet[2782]: E1123 23:01:51.136311 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.137021 kubelet[2782]: E1123 23:01:51.136995 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.137255 kubelet[2782]: W1123 23:01:51.137108 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.137255 kubelet[2782]: E1123 23:01:51.137133 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.138598 kubelet[2782]: E1123 23:01:51.138573 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.138907 kubelet[2782]: W1123 23:01:51.138882 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.139231 kubelet[2782]: E1123 23:01:51.139104 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.140346 containerd[1525]: time="2025-11-23T23:01:51.139802442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-58wcp,Uid:19ad9689-208e-455f-9ae2-1a0a9063031f,Namespace:calico-system,Attempt:0,} returns sandbox id \"847bd93d27de0a45b6834d4e095481d41b22f008940a4bbd26c305b9a4b6f750\""
Nov 23 23:01:51.140570 kubelet[2782]: E1123 23:01:51.140471 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.140817 kubelet[2782]: W1123 23:01:51.140735 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.140817 kubelet[2782]: E1123 23:01:51.140763 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.141700 kubelet[2782]: E1123 23:01:51.141680 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.141700 kubelet[2782]: W1123 23:01:51.141698 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.142258 kubelet[2782]: E1123 23:01:51.141712 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.142258 kubelet[2782]: E1123 23:01:51.142054 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.142258 kubelet[2782]: W1123 23:01:51.142084 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.142258 kubelet[2782]: E1123 23:01:51.142099 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.142258 kubelet[2782]: E1123 23:01:51.142246 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.142258 kubelet[2782]: W1123 23:01:51.142254 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.142258 kubelet[2782]: E1123 23:01:51.142265 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.144113 kubelet[2782]: E1123 23:01:51.144074 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.144113 kubelet[2782]: W1123 23:01:51.144095 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.144113 kubelet[2782]: E1123 23:01:51.144110 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.144640 kubelet[2782]: E1123 23:01:51.144620 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.144640 kubelet[2782]: W1123 23:01:51.144639 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.145013 kubelet[2782]: E1123 23:01:51.144654 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.145640 kubelet[2782]: E1123 23:01:51.145603 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.145640 kubelet[2782]: W1123 23:01:51.145627 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.145640 kubelet[2782]: E1123 23:01:51.145642 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.146592 kubelet[2782]: E1123 23:01:51.146563 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.146592 kubelet[2782]: W1123 23:01:51.146584 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.146690 kubelet[2782]: E1123 23:01:51.146599 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.147204 kubelet[2782]: E1123 23:01:51.147183 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.147204 kubelet[2782]: W1123 23:01:51.147201 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.147282 kubelet[2782]: E1123 23:01:51.147216 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.147545 kubelet[2782]: E1123 23:01:51.147432 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.147545 kubelet[2782]: W1123 23:01:51.147445 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.147545 kubelet[2782]: E1123 23:01:51.147454 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.147660 kubelet[2782]: E1123 23:01:51.147630 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.147660 kubelet[2782]: W1123 23:01:51.147639 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.147660 kubelet[2782]: E1123 23:01:51.147648 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.149437 kubelet[2782]: E1123 23:01:51.149236 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.149437 kubelet[2782]: W1123 23:01:51.149256 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.149437 kubelet[2782]: E1123 23:01:51.149269 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.150147 kubelet[2782]: E1123 23:01:51.150021 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.150147 kubelet[2782]: W1123 23:01:51.150041 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.150147 kubelet[2782]: E1123 23:01:51.150056 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:51.161238 kubelet[2782]: E1123 23:01:51.161160 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:51.161238 kubelet[2782]: W1123 23:01:51.161182 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:51.161238 kubelet[2782]: E1123 23:01:51.161201 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:52.489059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2669534352.mount: Deactivated successfully.
Nov 23 23:01:52.560925 kubelet[2782]: E1123 23:01:52.560858 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jnfrc" podUID="16769e22-23bd-4950-9cc0-72958bdfa903"
Nov 23 23:01:52.976459 containerd[1525]: time="2025-11-23T23:01:52.975733659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:52.978221 containerd[1525]: time="2025-11-23T23:01:52.978032816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Nov 23 23:01:52.981456 containerd[1525]: time="2025-11-23T23:01:52.980879344Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:52.987269 containerd[1525]: time="2025-11-23T23:01:52.987203766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:01:52.988422 containerd[1525]: time="2025-11-23T23:01:52.987555867Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.978688854s"
Nov 23 23:01:52.988422 containerd[1525]: time="2025-11-23T23:01:52.987592665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Nov 23 23:01:52.990493 containerd[1525]: time="2025-11-23T23:01:52.990456352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 23 23:01:53.007296 containerd[1525]: time="2025-11-23T23:01:53.007240317Z" level=info msg="CreateContainer within sandbox \"4ed7bfbb25d5256a033ee65af5b2f62d2790170b51fff06612dda2adb7a9edd4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 23 23:01:53.026459 containerd[1525]: time="2025-11-23T23:01:53.026409127Z" level=info msg="Container 76c25ed4b7a60bf32f33d242dc879be4f4b752bc3e2c6852ce897af81a69be39: CDI devices from CRI Config.CDIDevices: []"
Nov 23 23:01:53.028553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2936274212.mount: Deactivated successfully.
Nov 23 23:01:53.036584 containerd[1525]: time="2025-11-23T23:01:53.036522145Z" level=info msg="CreateContainer within sandbox \"4ed7bfbb25d5256a033ee65af5b2f62d2790170b51fff06612dda2adb7a9edd4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"76c25ed4b7a60bf32f33d242dc879be4f4b752bc3e2c6852ce897af81a69be39\""
Nov 23 23:01:53.038460 containerd[1525]: time="2025-11-23T23:01:53.037450419Z" level=info msg="StartContainer for \"76c25ed4b7a60bf32f33d242dc879be4f4b752bc3e2c6852ce897af81a69be39\""
Nov 23 23:01:53.039256 containerd[1525]: time="2025-11-23T23:01:53.039183493Z" level=info msg="connecting to shim 76c25ed4b7a60bf32f33d242dc879be4f4b752bc3e2c6852ce897af81a69be39" address="unix:///run/containerd/s/41fb57d006d7909cc9fddd732cf8b0beb02d2705a836790c26f3127479732009" protocol=ttrpc version=3
Nov 23 23:01:53.080564 systemd[1]: Started cri-containerd-76c25ed4b7a60bf32f33d242dc879be4f4b752bc3e2c6852ce897af81a69be39.scope - libcontainer container 76c25ed4b7a60bf32f33d242dc879be4f4b752bc3e2c6852ce897af81a69be39.
Nov 23 23:01:53.143354 containerd[1525]: time="2025-11-23T23:01:53.142598605Z" level=info msg="StartContainer for \"76c25ed4b7a60bf32f33d242dc879be4f4b752bc3e2c6852ce897af81a69be39\" returns successfully"
Nov 23 23:01:53.744721 kubelet[2782]: E1123 23:01:53.744435 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:53.744721 kubelet[2782]: W1123 23:01:53.744476 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:53.744721 kubelet[2782]: E1123 23:01:53.744512 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:53.745914 kubelet[2782]: E1123 23:01:53.745163 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:53.746625 kubelet[2782]: W1123 23:01:53.745188 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:53.746625 kubelet[2782]: E1123 23:01:53.746078 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:53.746625 kubelet[2782]: E1123 23:01:53.746446 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:53.746625 kubelet[2782]: W1123 23:01:53.746464 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:53.746625 kubelet[2782]: E1123 23:01:53.746483 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:53.747367 kubelet[2782]: E1123 23:01:53.747116 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:53.747367 kubelet[2782]: W1123 23:01:53.747141 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:53.747367 kubelet[2782]: E1123 23:01:53.747209 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:53.748598 kubelet[2782]: E1123 23:01:53.748402 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:53.748598 kubelet[2782]: W1123 23:01:53.748424 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:53.748598 kubelet[2782]: E1123 23:01:53.748445 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:53.748919 kubelet[2782]: E1123 23:01:53.748897 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:53.749032 kubelet[2782]: W1123 23:01:53.749011 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:53.749152 kubelet[2782]: E1123 23:01:53.749131 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:53.749739 kubelet[2782]: E1123 23:01:53.749549 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:53.749739 kubelet[2782]: W1123 23:01:53.749570 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:53.749739 kubelet[2782]: E1123 23:01:53.749589 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:53.750070 kubelet[2782]: E1123 23:01:53.750048 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:53.750396 kubelet[2782]: W1123 23:01:53.750152 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:53.750396 kubelet[2782]: E1123 23:01:53.750178 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:53.750742 kubelet[2782]: E1123 23:01:53.750720 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:53.750935 kubelet[2782]: W1123 23:01:53.750911 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:53.751160 kubelet[2782]: E1123 23:01:53.751079 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:53.751697 kubelet[2782]: E1123 23:01:53.751661 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:53.752023 kubelet[2782]: W1123 23:01:53.751837 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:53.752023 kubelet[2782]: E1123 23:01:53.751868 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:53.752268 kubelet[2782]: E1123 23:01:53.752246 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:53.752582 kubelet[2782]: W1123 23:01:53.752368 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:53.752582 kubelet[2782]: E1123 23:01:53.752418 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:53.753028 kubelet[2782]: E1123 23:01:53.753011 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:53.753087 kubelet[2782]: W1123 23:01:53.753076 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:53.753300 kubelet[2782]: E1123 23:01:53.753186 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:53.753418 kubelet[2782]: E1123 23:01:53.753406 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:53.753482 kubelet[2782]: W1123 23:01:53.753471 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:53.753535 kubelet[2782]: E1123 23:01:53.753527 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:53.753722 kubelet[2782]: E1123 23:01:53.753710 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:53.753787 kubelet[2782]: W1123 23:01:53.753777 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:53.753867 kubelet[2782]: E1123 23:01:53.753851 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:53.754196 kubelet[2782]: E1123 23:01:53.754116 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:53.754196 kubelet[2782]: W1123 23:01:53.754128 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:53.754196 kubelet[2782]: E1123 23:01:53.754138 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:53.758503 kubelet[2782]: E1123 23:01:53.758479 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:53.758503 kubelet[2782]: W1123 23:01:53.758498 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:53.758744 kubelet[2782]: E1123 23:01:53.758513 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:53.758920 kubelet[2782]: E1123 23:01:53.758900 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:53.758920 kubelet[2782]: W1123 23:01:53.758918 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:53.759095 kubelet[2782]: E1123 23:01:53.758930 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:53.759137 kubelet[2782]: E1123 23:01:53.759118 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:53.759137 kubelet[2782]: W1123 23:01:53.759126 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:53.759137 kubelet[2782]: E1123 23:01:53.759134 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:01:53.759529 kubelet[2782]: E1123 23:01:53.759442 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:01:53.759529 kubelet[2782]: W1123 23:01:53.759458 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:01:53.759529 kubelet[2782]: E1123 23:01:53.759471 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 23 23:01:53.759891 kubelet[2782]: E1123 23:01:53.759783 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.759891 kubelet[2782]: W1123 23:01:53.759795 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.759891 kubelet[2782]: E1123 23:01:53.759841 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.760298 kubelet[2782]: E1123 23:01:53.760188 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.760298 kubelet[2782]: W1123 23:01:53.760202 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.760298 kubelet[2782]: E1123 23:01:53.760212 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:53.760891 kubelet[2782]: E1123 23:01:53.760700 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.760891 kubelet[2782]: W1123 23:01:53.760718 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.760891 kubelet[2782]: E1123 23:01:53.760736 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.761387 kubelet[2782]: E1123 23:01:53.761366 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.761530 kubelet[2782]: W1123 23:01:53.761456 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.761530 kubelet[2782]: E1123 23:01:53.761473 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:53.761946 kubelet[2782]: E1123 23:01:53.761869 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.761946 kubelet[2782]: W1123 23:01:53.761885 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.761946 kubelet[2782]: E1123 23:01:53.761898 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.762280 kubelet[2782]: E1123 23:01:53.762186 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.762280 kubelet[2782]: W1123 23:01:53.762199 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.762280 kubelet[2782]: E1123 23:01:53.762210 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:53.762719 kubelet[2782]: E1123 23:01:53.762638 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.762719 kubelet[2782]: W1123 23:01:53.762652 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.762719 kubelet[2782]: E1123 23:01:53.762666 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.763025 kubelet[2782]: E1123 23:01:53.763012 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.763194 kubelet[2782]: W1123 23:01:53.763085 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.763194 kubelet[2782]: E1123 23:01:53.763101 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:53.763573 kubelet[2782]: E1123 23:01:53.763463 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.763573 kubelet[2782]: W1123 23:01:53.763476 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.763573 kubelet[2782]: E1123 23:01:53.763488 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.763980 kubelet[2782]: E1123 23:01:53.763933 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.763980 kubelet[2782]: W1123 23:01:53.763952 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.763980 kubelet[2782]: E1123 23:01:53.763965 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:53.764372 kubelet[2782]: E1123 23:01:53.764269 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.764372 kubelet[2782]: W1123 23:01:53.764281 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.764372 kubelet[2782]: E1123 23:01:53.764292 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.764862 kubelet[2782]: E1123 23:01:53.764670 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.764862 kubelet[2782]: W1123 23:01:53.764684 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.764862 kubelet[2782]: E1123 23:01:53.764703 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:53.764992 kubelet[2782]: E1123 23:01:53.764969 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.764992 kubelet[2782]: W1123 23:01:53.764986 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.765055 kubelet[2782]: E1123 23:01:53.765000 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.765360 kubelet[2782]: E1123 23:01:53.765292 2782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.765360 kubelet[2782]: W1123 23:01:53.765311 2782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.765454 kubelet[2782]: E1123 23:01:53.765321 2782 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:54.372263 containerd[1525]: time="2025-11-23T23:01:54.372206743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:54.373567 containerd[1525]: time="2025-11-23T23:01:54.373389649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 23 23:01:54.374663 containerd[1525]: time="2025-11-23T23:01:54.374614673Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:54.379453 containerd[1525]: time="2025-11-23T23:01:54.379313737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:54.380614 containerd[1525]: time="2025-11-23T23:01:54.380567080Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.389691511s" Nov 23 23:01:54.380798 containerd[1525]: time="2025-11-23T23:01:54.380620558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 23 23:01:54.386308 containerd[1525]: time="2025-11-23T23:01:54.386255579Z" level=info msg="CreateContainer within sandbox \"847bd93d27de0a45b6834d4e095481d41b22f008940a4bbd26c305b9a4b6f750\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 23 23:01:54.401379 containerd[1525]: time="2025-11-23T23:01:54.401225174Z" level=info msg="Container 74c59b848242e0cfe14bcd050b9c2651a506a538fd2702e5083d06a1d1ea4d4a: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:01:54.410932 containerd[1525]: time="2025-11-23T23:01:54.410876972Z" level=info msg="CreateContainer within sandbox \"847bd93d27de0a45b6834d4e095481d41b22f008940a4bbd26c305b9a4b6f750\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"74c59b848242e0cfe14bcd050b9c2651a506a538fd2702e5083d06a1d1ea4d4a\"" Nov 23 23:01:54.411512 containerd[1525]: time="2025-11-23T23:01:54.411491304Z" level=info msg="StartContainer for \"74c59b848242e0cfe14bcd050b9c2651a506a538fd2702e5083d06a1d1ea4d4a\"" Nov 23 23:01:54.415029 containerd[1525]: time="2025-11-23T23:01:54.414954505Z" level=info msg="connecting to shim 74c59b848242e0cfe14bcd050b9c2651a506a538fd2702e5083d06a1d1ea4d4a" address="unix:///run/containerd/s/924a8d030d3f7e654e95206155fd726198a8e70d75aa0757f1c90b0e9d0a9905" protocol=ttrpc version=3 Nov 23 23:01:54.436499 systemd[1]: Started cri-containerd-74c59b848242e0cfe14bcd050b9c2651a506a538fd2702e5083d06a1d1ea4d4a.scope - libcontainer container 74c59b848242e0cfe14bcd050b9c2651a506a538fd2702e5083d06a1d1ea4d4a. Nov 23 23:01:54.503720 containerd[1525]: time="2025-11-23T23:01:54.503677041Z" level=info msg="StartContainer for \"74c59b848242e0cfe14bcd050b9c2651a506a538fd2702e5083d06a1d1ea4d4a\" returns successfully" Nov 23 23:01:54.521005 systemd[1]: cri-containerd-74c59b848242e0cfe14bcd050b9c2651a506a538fd2702e5083d06a1d1ea4d4a.scope: Deactivated successfully. 
Nov 23 23:01:54.527527 containerd[1525]: time="2025-11-23T23:01:54.527488750Z" level=info msg="received container exit event container_id:\"74c59b848242e0cfe14bcd050b9c2651a506a538fd2702e5083d06a1d1ea4d4a\" id:\"74c59b848242e0cfe14bcd050b9c2651a506a538fd2702e5083d06a1d1ea4d4a\" pid:3459 exited_at:{seconds:1763938914 nanos:527121647}" Nov 23 23:01:54.552565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74c59b848242e0cfe14bcd050b9c2651a506a538fd2702e5083d06a1d1ea4d4a-rootfs.mount: Deactivated successfully. Nov 23 23:01:54.562014 kubelet[2782]: E1123 23:01:54.560801 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jnfrc" podUID="16769e22-23bd-4950-9cc0-72958bdfa903" Nov 23 23:01:54.683897 kubelet[2782]: I1123 23:01:54.683712 2782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 23:01:54.685702 containerd[1525]: time="2025-11-23T23:01:54.685651306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 23 23:01:54.711251 kubelet[2782]: I1123 23:01:54.711082 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7458994975-mhjm8" podStartSLOduration=2.729768913 podStartE2EDuration="4.711066341s" podCreationTimestamp="2025-11-23 23:01:50 +0000 UTC" firstStartedPulling="2025-11-23 23:01:51.007247866 +0000 UTC m=+27.589653650" lastFinishedPulling="2025-11-23 23:01:52.988545294 +0000 UTC m=+29.570951078" observedRunningTime="2025-11-23 23:01:53.696729289 +0000 UTC m=+30.279135073" watchObservedRunningTime="2025-11-23 23:01:54.711066341 +0000 UTC m=+31.293472125" Nov 23 23:01:56.561361 kubelet[2782]: E1123 23:01:56.560893 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jnfrc" podUID="16769e22-23bd-4950-9cc0-72958bdfa903" Nov 23 23:01:57.405524 containerd[1525]: time="2025-11-23T23:01:57.405441606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:57.407272 containerd[1525]: time="2025-11-23T23:01:57.407197265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 23 23:01:57.408288 containerd[1525]: time="2025-11-23T23:01:57.408240628Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:57.411529 containerd[1525]: time="2025-11-23T23:01:57.411477954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:57.412839 containerd[1525]: time="2025-11-23T23:01:57.412791268Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.727104404s" Nov 23 23:01:57.413038 containerd[1525]: time="2025-11-23T23:01:57.413010860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 23 23:01:57.417952 containerd[1525]: time="2025-11-23T23:01:57.417866090Z" level=info msg="CreateContainer within sandbox 
\"847bd93d27de0a45b6834d4e095481d41b22f008940a4bbd26c305b9a4b6f750\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 23 23:01:57.432837 containerd[1525]: time="2025-11-23T23:01:57.431492890Z" level=info msg="Container 1e0f1c9a0b84277a1acf4eaa7a122444c8f9a4cc1940fd9ea8752d49a006ddb2: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:01:57.449361 containerd[1525]: time="2025-11-23T23:01:57.449205548Z" level=info msg="CreateContainer within sandbox \"847bd93d27de0a45b6834d4e095481d41b22f008940a4bbd26c305b9a4b6f750\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1e0f1c9a0b84277a1acf4eaa7a122444c8f9a4cc1940fd9ea8752d49a006ddb2\"" Nov 23 23:01:57.451153 containerd[1525]: time="2025-11-23T23:01:57.451089441Z" level=info msg="StartContainer for \"1e0f1c9a0b84277a1acf4eaa7a122444c8f9a4cc1940fd9ea8752d49a006ddb2\"" Nov 23 23:01:57.455765 containerd[1525]: time="2025-11-23T23:01:57.455615202Z" level=info msg="connecting to shim 1e0f1c9a0b84277a1acf4eaa7a122444c8f9a4cc1940fd9ea8752d49a006ddb2" address="unix:///run/containerd/s/924a8d030d3f7e654e95206155fd726198a8e70d75aa0757f1c90b0e9d0a9905" protocol=ttrpc version=3 Nov 23 23:01:57.482683 systemd[1]: Started cri-containerd-1e0f1c9a0b84277a1acf4eaa7a122444c8f9a4cc1940fd9ea8752d49a006ddb2.scope - libcontainer container 1e0f1c9a0b84277a1acf4eaa7a122444c8f9a4cc1940fd9ea8752d49a006ddb2. 
Nov 23 23:01:57.562609 containerd[1525]: time="2025-11-23T23:01:57.562561402Z" level=info msg="StartContainer for \"1e0f1c9a0b84277a1acf4eaa7a122444c8f9a4cc1940fd9ea8752d49a006ddb2\" returns successfully" Nov 23 23:01:58.076661 containerd[1525]: time="2025-11-23T23:01:58.076593420Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 23:01:58.079180 systemd[1]: cri-containerd-1e0f1c9a0b84277a1acf4eaa7a122444c8f9a4cc1940fd9ea8752d49a006ddb2.scope: Deactivated successfully. Nov 23 23:01:58.079471 systemd[1]: cri-containerd-1e0f1c9a0b84277a1acf4eaa7a122444c8f9a4cc1940fd9ea8752d49a006ddb2.scope: Consumed 496ms CPU time, 186.9M memory peak, 165.9M written to disk. Nov 23 23:01:58.083059 containerd[1525]: time="2025-11-23T23:01:58.083018216Z" level=info msg="received container exit event container_id:\"1e0f1c9a0b84277a1acf4eaa7a122444c8f9a4cc1940fd9ea8752d49a006ddb2\" id:\"1e0f1c9a0b84277a1acf4eaa7a122444c8f9a4cc1940fd9ea8752d49a006ddb2\" pid:3520 exited_at:{seconds:1763938918 nanos:82747944}" Nov 23 23:01:58.104813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e0f1c9a0b84277a1acf4eaa7a122444c8f9a4cc1940fd9ea8752d49a006ddb2-rootfs.mount: Deactivated successfully. 
Nov 23 23:01:58.114233 kubelet[2782]: I1123 23:01:58.114196 2782 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 23 23:01:58.193074 kubelet[2782]: I1123 23:01:58.193041 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64382cfa-ecd0-42e3-ae79-135db5ecbed0-config-volume\") pod \"coredns-674b8bbfcf-th5hd\" (UID: \"64382cfa-ecd0-42e3-ae79-135db5ecbed0\") " pod="kube-system/coredns-674b8bbfcf-th5hd" Nov 23 23:01:58.195603 kubelet[2782]: I1123 23:01:58.193077 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clkvg\" (UniqueName: \"kubernetes.io/projected/64382cfa-ecd0-42e3-ae79-135db5ecbed0-kube-api-access-clkvg\") pod \"coredns-674b8bbfcf-th5hd\" (UID: \"64382cfa-ecd0-42e3-ae79-135db5ecbed0\") " pod="kube-system/coredns-674b8bbfcf-th5hd" Nov 23 23:01:58.197029 systemd[1]: Created slice kubepods-burstable-pod64382cfa_ecd0_42e3_ae79_135db5ecbed0.slice - libcontainer container kubepods-burstable-pod64382cfa_ecd0_42e3_ae79_135db5ecbed0.slice. Nov 23 23:01:58.220053 systemd[1]: Created slice kubepods-burstable-pod4f684462_e515_4517_a943_3140e39f12d8.slice - libcontainer container kubepods-burstable-pod4f684462_e515_4517_a943_3140e39f12d8.slice. Nov 23 23:01:58.243122 systemd[1]: Created slice kubepods-besteffort-pod5a8e0f96_f5fe_436e_9782_031ed12b446f.slice - libcontainer container kubepods-besteffort-pod5a8e0f96_f5fe_436e_9782_031ed12b446f.slice. Nov 23 23:01:58.255578 systemd[1]: Created slice kubepods-besteffort-pod319eb40d_b16d_4daa_b6ab_a4e6de765a83.slice - libcontainer container kubepods-besteffort-pod319eb40d_b16d_4daa_b6ab_a4e6de765a83.slice. Nov 23 23:01:58.266110 systemd[1]: Created slice kubepods-besteffort-pod147a3fcd_da80_4b14_916a_786fd7363b2a.slice - libcontainer container kubepods-besteffort-pod147a3fcd_da80_4b14_916a_786fd7363b2a.slice. 
Nov 23 23:01:58.272925 systemd[1]: Created slice kubepods-besteffort-podc7c9f1f0_a20f_4cd1_87de_e2a910e5566a.slice - libcontainer container kubepods-besteffort-podc7c9f1f0_a20f_4cd1_87de_e2a910e5566a.slice. Nov 23 23:01:58.282219 systemd[1]: Created slice kubepods-besteffort-pod8d85c999_e9d4_4632_99f5_fa0f1c92756a.slice - libcontainer container kubepods-besteffort-pod8d85c999_e9d4_4632_99f5_fa0f1c92756a.slice. Nov 23 23:01:58.289299 systemd[1]: Created slice kubepods-besteffort-pod18619a94_547d_4f61_83ee_6c03707255a7.slice - libcontainer container kubepods-besteffort-pod18619a94_547d_4f61_83ee_6c03707255a7.slice. Nov 23 23:01:58.295447 kubelet[2782]: I1123 23:01:58.294311 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr9fc\" (UniqueName: \"kubernetes.io/projected/319eb40d-b16d-4daa-b6ab-a4e6de765a83-kube-api-access-pr9fc\") pod \"calico-apiserver-6bf6b75475-8n9bk\" (UID: \"319eb40d-b16d-4daa-b6ab-a4e6de765a83\") " pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" Nov 23 23:01:58.295636 kubelet[2782]: I1123 23:01:58.295425 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18619a94-547d-4f61-83ee-6c03707255a7-whisker-ca-bundle\") pod \"whisker-76f4dcdb96-9ltpn\" (UID: \"18619a94-547d-4f61-83ee-6c03707255a7\") " pod="calico-system/whisker-76f4dcdb96-9ltpn" Nov 23 23:01:58.295754 kubelet[2782]: I1123 23:01:58.295738 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrjlv\" (UniqueName: \"kubernetes.io/projected/18619a94-547d-4f61-83ee-6c03707255a7-kube-api-access-rrjlv\") pod \"whisker-76f4dcdb96-9ltpn\" (UID: \"18619a94-547d-4f61-83ee-6c03707255a7\") " pod="calico-system/whisker-76f4dcdb96-9ltpn" Nov 23 23:01:58.295889 kubelet[2782]: I1123 23:01:58.295837 2782 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5kpz\" (UniqueName: \"kubernetes.io/projected/5a8e0f96-f5fe-436e-9782-031ed12b446f-kube-api-access-k5kpz\") pod \"calico-apiserver-747559d9d9-cwq4m\" (UID: \"5a8e0f96-f5fe-436e-9782-031ed12b446f\") " pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" Nov 23 23:01:58.295889 kubelet[2782]: I1123 23:01:58.295866 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64ddx\" (UniqueName: \"kubernetes.io/projected/c7c9f1f0-a20f-4cd1-87de-e2a910e5566a-kube-api-access-64ddx\") pod \"calico-kube-controllers-74998f44b6-zvwmg\" (UID: \"c7c9f1f0-a20f-4cd1-87de-e2a910e5566a\") " pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" Nov 23 23:01:58.295994 kubelet[2782]: I1123 23:01:58.295982 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/147a3fcd-da80-4b14-916a-786fd7363b2a-config\") pod \"goldmane-666569f655-qzvtm\" (UID: \"147a3fcd-da80-4b14-916a-786fd7363b2a\") " pod="calico-system/goldmane-666569f655-qzvtm" Nov 23 23:01:58.296072 kubelet[2782]: I1123 23:01:58.296060 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f684462-e515-4517-a943-3140e39f12d8-config-volume\") pod \"coredns-674b8bbfcf-dp9vd\" (UID: \"4f684462-e515-4517-a943-3140e39f12d8\") " pod="kube-system/coredns-674b8bbfcf-dp9vd" Nov 23 23:01:58.296205 kubelet[2782]: I1123 23:01:58.296145 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/147a3fcd-da80-4b14-916a-786fd7363b2a-goldmane-ca-bundle\") pod \"goldmane-666569f655-qzvtm\" (UID: \"147a3fcd-da80-4b14-916a-786fd7363b2a\") " pod="calico-system/goldmane-666569f655-qzvtm" Nov 23 
23:01:58.296205 kubelet[2782]: I1123 23:01:58.296165 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/147a3fcd-da80-4b14-916a-786fd7363b2a-goldmane-key-pair\") pod \"goldmane-666569f655-qzvtm\" (UID: \"147a3fcd-da80-4b14-916a-786fd7363b2a\") " pod="calico-system/goldmane-666569f655-qzvtm" Nov 23 23:01:58.296205 kubelet[2782]: I1123 23:01:58.296180 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsmnf\" (UniqueName: \"kubernetes.io/projected/147a3fcd-da80-4b14-916a-786fd7363b2a-kube-api-access-jsmnf\") pod \"goldmane-666569f655-qzvtm\" (UID: \"147a3fcd-da80-4b14-916a-786fd7363b2a\") " pod="calico-system/goldmane-666569f655-qzvtm" Nov 23 23:01:58.296318 kubelet[2782]: I1123 23:01:58.296303 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7c9f1f0-a20f-4cd1-87de-e2a910e5566a-tigera-ca-bundle\") pod \"calico-kube-controllers-74998f44b6-zvwmg\" (UID: \"c7c9f1f0-a20f-4cd1-87de-e2a910e5566a\") " pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" Nov 23 23:01:58.296435 kubelet[2782]: I1123 23:01:58.296415 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc5rt\" (UniqueName: \"kubernetes.io/projected/8d85c999-e9d4-4632-99f5-fa0f1c92756a-kube-api-access-lc5rt\") pod \"calico-apiserver-747559d9d9-fscb7\" (UID: \"8d85c999-e9d4-4632-99f5-fa0f1c92756a\") " pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" Nov 23 23:01:58.296580 kubelet[2782]: I1123 23:01:58.296564 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5a8e0f96-f5fe-436e-9782-031ed12b446f-calico-apiserver-certs\") pod 
\"calico-apiserver-747559d9d9-cwq4m\" (UID: \"5a8e0f96-f5fe-436e-9782-031ed12b446f\") " pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" Nov 23 23:01:58.296721 kubelet[2782]: I1123 23:01:58.296704 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/319eb40d-b16d-4daa-b6ab-a4e6de765a83-calico-apiserver-certs\") pod \"calico-apiserver-6bf6b75475-8n9bk\" (UID: \"319eb40d-b16d-4daa-b6ab-a4e6de765a83\") " pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" Nov 23 23:01:58.296831 kubelet[2782]: I1123 23:01:58.296819 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/18619a94-547d-4f61-83ee-6c03707255a7-whisker-backend-key-pair\") pod \"whisker-76f4dcdb96-9ltpn\" (UID: \"18619a94-547d-4f61-83ee-6c03707255a7\") " pod="calico-system/whisker-76f4dcdb96-9ltpn" Nov 23 23:01:58.296957 kubelet[2782]: I1123 23:01:58.296908 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8d85c999-e9d4-4632-99f5-fa0f1c92756a-calico-apiserver-certs\") pod \"calico-apiserver-747559d9d9-fscb7\" (UID: \"8d85c999-e9d4-4632-99f5-fa0f1c92756a\") " pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" Nov 23 23:01:58.296957 kubelet[2782]: I1123 23:01:58.296931 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4k9q\" (UniqueName: \"kubernetes.io/projected/4f684462-e515-4517-a943-3140e39f12d8-kube-api-access-x4k9q\") pod \"coredns-674b8bbfcf-dp9vd\" (UID: \"4f684462-e515-4517-a943-3140e39f12d8\") " pod="kube-system/coredns-674b8bbfcf-dp9vd" Nov 23 23:01:58.515664 containerd[1525]: time="2025-11-23T23:01:58.515618204Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-th5hd,Uid:64382cfa-ecd0-42e3-ae79-135db5ecbed0,Namespace:kube-system,Attempt:0,}" Nov 23 23:01:58.540525 containerd[1525]: time="2025-11-23T23:01:58.540169222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dp9vd,Uid:4f684462-e515-4517-a943-3140e39f12d8,Namespace:kube-system,Attempt:0,}" Nov 23 23:01:58.551568 containerd[1525]: time="2025-11-23T23:01:58.551529380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747559d9d9-cwq4m,Uid:5a8e0f96-f5fe-436e-9782-031ed12b446f,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:01:58.567870 containerd[1525]: time="2025-11-23T23:01:58.567277279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf6b75475-8n9bk,Uid:319eb40d-b16d-4daa-b6ab-a4e6de765a83,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:01:58.570874 containerd[1525]: time="2025-11-23T23:01:58.570823086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qzvtm,Uid:147a3fcd-da80-4b14-916a-786fd7363b2a,Namespace:calico-system,Attempt:0,}" Nov 23 23:01:58.571174 systemd[1]: Created slice kubepods-besteffort-pod16769e22_23bd_4950_9cc0_72958bdfa903.slice - libcontainer container kubepods-besteffort-pod16769e22_23bd_4950_9cc0_72958bdfa903.slice. 
Nov 23 23:01:58.577679 containerd[1525]: time="2025-11-23T23:01:58.577614430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnfrc,Uid:16769e22-23bd-4950-9cc0-72958bdfa903,Namespace:calico-system,Attempt:0,}" Nov 23 23:01:58.580567 containerd[1525]: time="2025-11-23T23:01:58.580530097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74998f44b6-zvwmg,Uid:c7c9f1f0-a20f-4cd1-87de-e2a910e5566a,Namespace:calico-system,Attempt:0,}" Nov 23 23:01:58.590777 containerd[1525]: time="2025-11-23T23:01:58.590500540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747559d9d9-fscb7,Uid:8d85c999-e9d4-4632-99f5-fa0f1c92756a,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:01:58.595807 containerd[1525]: time="2025-11-23T23:01:58.595770452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76f4dcdb96-9ltpn,Uid:18619a94-547d-4f61-83ee-6c03707255a7,Namespace:calico-system,Attempt:0,}" Nov 23 23:01:58.720906 containerd[1525]: time="2025-11-23T23:01:58.720856830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 23 23:01:58.766712 containerd[1525]: time="2025-11-23T23:01:58.766562895Z" level=error msg="Failed to destroy network for sandbox \"64bd31bdadd0324529567e652d615aee6dc04706ecb3d571cf2b427b5fbf06c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.770044 containerd[1525]: time="2025-11-23T23:01:58.769974386Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf6b75475-8n9bk,Uid:319eb40d-b16d-4daa-b6ab-a4e6de765a83,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64bd31bdadd0324529567e652d615aee6dc04706ecb3d571cf2b427b5fbf06c3\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.774451 kubelet[2782]: E1123 23:01:58.774384 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64bd31bdadd0324529567e652d615aee6dc04706ecb3d571cf2b427b5fbf06c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.774600 kubelet[2782]: E1123 23:01:58.774469 2782 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64bd31bdadd0324529567e652d615aee6dc04706ecb3d571cf2b427b5fbf06c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" Nov 23 23:01:58.774600 kubelet[2782]: E1123 23:01:58.774505 2782 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64bd31bdadd0324529567e652d615aee6dc04706ecb3d571cf2b427b5fbf06c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" Nov 23 23:01:58.774666 kubelet[2782]: E1123 23:01:58.774565 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6bf6b75475-8n9bk_calico-apiserver(319eb40d-b16d-4daa-b6ab-a4e6de765a83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6bf6b75475-8n9bk_calico-apiserver(319eb40d-b16d-4daa-b6ab-a4e6de765a83)\\\": 
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64bd31bdadd0324529567e652d615aee6dc04706ecb3d571cf2b427b5fbf06c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" podUID="319eb40d-b16d-4daa-b6ab-a4e6de765a83" Nov 23 23:01:58.819983 containerd[1525]: time="2025-11-23T23:01:58.819866238Z" level=error msg="Failed to destroy network for sandbox \"62c6619b222ab8414b79e34d7ce96296af2700ab7b6e1bc4509368dc52799631\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.823831 containerd[1525]: time="2025-11-23T23:01:58.823688756Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747559d9d9-cwq4m,Uid:5a8e0f96-f5fe-436e-9782-031ed12b446f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62c6619b222ab8414b79e34d7ce96296af2700ab7b6e1bc4509368dc52799631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.823999 kubelet[2782]: E1123 23:01:58.823958 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62c6619b222ab8414b79e34d7ce96296af2700ab7b6e1bc4509368dc52799631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.824040 kubelet[2782]: E1123 23:01:58.824017 2782 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"62c6619b222ab8414b79e34d7ce96296af2700ab7b6e1bc4509368dc52799631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" Nov 23 23:01:58.824065 kubelet[2782]: E1123 23:01:58.824040 2782 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62c6619b222ab8414b79e34d7ce96296af2700ab7b6e1bc4509368dc52799631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" Nov 23 23:01:58.824175 kubelet[2782]: E1123 23:01:58.824096 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-747559d9d9-cwq4m_calico-apiserver(5a8e0f96-f5fe-436e-9782-031ed12b446f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-747559d9d9-cwq4m_calico-apiserver(5a8e0f96-f5fe-436e-9782-031ed12b446f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62c6619b222ab8414b79e34d7ce96296af2700ab7b6e1bc4509368dc52799631\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" podUID="5a8e0f96-f5fe-436e-9782-031ed12b446f" Nov 23 23:01:58.833644 containerd[1525]: time="2025-11-23T23:01:58.833467925Z" level=error msg="Failed to destroy network for sandbox \"229da598fa591b6ed0aa7e776ae8de83388ab528453a966ab7b3becfad809b51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.835542 containerd[1525]: time="2025-11-23T23:01:58.835503340Z" level=error msg="Failed to destroy network for sandbox \"905c4fdb0878d6d2aea62d4a2c6c5edaad06c5e9bc9065bb6e07d4d923a18075\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.838063 containerd[1525]: time="2025-11-23T23:01:58.837857785Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-th5hd,Uid:64382cfa-ecd0-42e3-ae79-135db5ecbed0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"229da598fa591b6ed0aa7e776ae8de83388ab528453a966ab7b3becfad809b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.838792 kubelet[2782]: E1123 23:01:58.838506 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"229da598fa591b6ed0aa7e776ae8de83388ab528453a966ab7b3becfad809b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.838792 kubelet[2782]: E1123 23:01:58.838589 2782 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"229da598fa591b6ed0aa7e776ae8de83388ab528453a966ab7b3becfad809b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-th5hd" Nov 23 
23:01:58.838792 kubelet[2782]: E1123 23:01:58.838612 2782 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"229da598fa591b6ed0aa7e776ae8de83388ab528453a966ab7b3becfad809b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-th5hd" Nov 23 23:01:58.840005 kubelet[2782]: E1123 23:01:58.839817 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-th5hd_kube-system(64382cfa-ecd0-42e3-ae79-135db5ecbed0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-th5hd_kube-system(64382cfa-ecd0-42e3-ae79-135db5ecbed0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"229da598fa591b6ed0aa7e776ae8de83388ab528453a966ab7b3becfad809b51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-th5hd" podUID="64382cfa-ecd0-42e3-ae79-135db5ecbed0" Nov 23 23:01:58.840884 containerd[1525]: time="2025-11-23T23:01:58.840751573Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dp9vd,Uid:4f684462-e515-4517-a943-3140e39f12d8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"905c4fdb0878d6d2aea62d4a2c6c5edaad06c5e9bc9065bb6e07d4d923a18075\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.841412 kubelet[2782]: E1123 23:01:58.841273 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"905c4fdb0878d6d2aea62d4a2c6c5edaad06c5e9bc9065bb6e07d4d923a18075\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.841412 kubelet[2782]: E1123 23:01:58.841319 2782 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"905c4fdb0878d6d2aea62d4a2c6c5edaad06c5e9bc9065bb6e07d4d923a18075\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dp9vd" Nov 23 23:01:58.841412 kubelet[2782]: E1123 23:01:58.841353 2782 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"905c4fdb0878d6d2aea62d4a2c6c5edaad06c5e9bc9065bb6e07d4d923a18075\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dp9vd" Nov 23 23:01:58.841538 kubelet[2782]: E1123 23:01:58.841392 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dp9vd_kube-system(4f684462-e515-4517-a943-3140e39f12d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dp9vd_kube-system(4f684462-e515-4517-a943-3140e39f12d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"905c4fdb0878d6d2aea62d4a2c6c5edaad06c5e9bc9065bb6e07d4d923a18075\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-674b8bbfcf-dp9vd" podUID="4f684462-e515-4517-a943-3140e39f12d8" Nov 23 23:01:58.862784 containerd[1525]: time="2025-11-23T23:01:58.862710394Z" level=error msg="Failed to destroy network for sandbox \"6b7c5c0a4e7643367cb06eef9734c659026ee1717bb215c2dd0c7135f60b9973\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.864528 containerd[1525]: time="2025-11-23T23:01:58.864364421Z" level=error msg="Failed to destroy network for sandbox \"561ad565c0f82342755e729364d75c318e6f7390afbfee134ed2c43ba76648c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.865678 containerd[1525]: time="2025-11-23T23:01:58.865621501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qzvtm,Uid:147a3fcd-da80-4b14-916a-786fd7363b2a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b7c5c0a4e7643367cb06eef9734c659026ee1717bb215c2dd0c7135f60b9973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.865894 kubelet[2782]: E1123 23:01:58.865848 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b7c5c0a4e7643367cb06eef9734c659026ee1717bb215c2dd0c7135f60b9973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.865942 kubelet[2782]: E1123 23:01:58.865898 2782 kuberuntime_sandbox.go:70] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b7c5c0a4e7643367cb06eef9734c659026ee1717bb215c2dd0c7135f60b9973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-qzvtm" Nov 23 23:01:58.865942 kubelet[2782]: E1123 23:01:58.865921 2782 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b7c5c0a4e7643367cb06eef9734c659026ee1717bb215c2dd0c7135f60b9973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-qzvtm" Nov 23 23:01:58.865989 kubelet[2782]: E1123 23:01:58.865969 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-qzvtm_calico-system(147a3fcd-da80-4b14-916a-786fd7363b2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-qzvtm_calico-system(147a3fcd-da80-4b14-916a-786fd7363b2a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b7c5c0a4e7643367cb06eef9734c659026ee1717bb215c2dd0c7135f60b9973\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-qzvtm" podUID="147a3fcd-da80-4b14-916a-786fd7363b2a" Nov 23 23:01:58.869104 containerd[1525]: time="2025-11-23T23:01:58.868890677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76f4dcdb96-9ltpn,Uid:18619a94-547d-4f61-83ee-6c03707255a7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"561ad565c0f82342755e729364d75c318e6f7390afbfee134ed2c43ba76648c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.870349 kubelet[2782]: E1123 23:01:58.869756 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"561ad565c0f82342755e729364d75c318e6f7390afbfee134ed2c43ba76648c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.870349 kubelet[2782]: E1123 23:01:58.869824 2782 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"561ad565c0f82342755e729364d75c318e6f7390afbfee134ed2c43ba76648c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76f4dcdb96-9ltpn" Nov 23 23:01:58.870349 kubelet[2782]: E1123 23:01:58.869844 2782 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"561ad565c0f82342755e729364d75c318e6f7390afbfee134ed2c43ba76648c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-76f4dcdb96-9ltpn" Nov 23 23:01:58.870506 kubelet[2782]: E1123 23:01:58.869892 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-76f4dcdb96-9ltpn_calico-system(18619a94-547d-4f61-83ee-6c03707255a7)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"whisker-76f4dcdb96-9ltpn_calico-system(18619a94-547d-4f61-83ee-6c03707255a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"561ad565c0f82342755e729364d75c318e6f7390afbfee134ed2c43ba76648c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-76f4dcdb96-9ltpn" podUID="18619a94-547d-4f61-83ee-6c03707255a7" Nov 23 23:01:58.883163 containerd[1525]: time="2025-11-23T23:01:58.883105144Z" level=error msg="Failed to destroy network for sandbox \"9bbf4c0dc5886fecb4df330e1e101b83e84f7e3923f43e779a454bab5dbb4efd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.885599 containerd[1525]: time="2025-11-23T23:01:58.885536907Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747559d9d9-fscb7,Uid:8d85c999-e9d4-4632-99f5-fa0f1c92756a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bbf4c0dc5886fecb4df330e1e101b83e84f7e3923f43e779a454bab5dbb4efd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.886229 kubelet[2782]: E1123 23:01:58.886167 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bbf4c0dc5886fecb4df330e1e101b83e84f7e3923f43e779a454bab5dbb4efd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.886311 kubelet[2782]: E1123 23:01:58.886269 2782 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bbf4c0dc5886fecb4df330e1e101b83e84f7e3923f43e779a454bab5dbb4efd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" Nov 23 23:01:58.886311 kubelet[2782]: E1123 23:01:58.886292 2782 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bbf4c0dc5886fecb4df330e1e101b83e84f7e3923f43e779a454bab5dbb4efd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" Nov 23 23:01:58.887415 kubelet[2782]: E1123 23:01:58.887367 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-747559d9d9-fscb7_calico-apiserver(8d85c999-e9d4-4632-99f5-fa0f1c92756a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-747559d9d9-fscb7_calico-apiserver(8d85c999-e9d4-4632-99f5-fa0f1c92756a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9bbf4c0dc5886fecb4df330e1e101b83e84f7e3923f43e779a454bab5dbb4efd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" podUID="8d85c999-e9d4-4632-99f5-fa0f1c92756a" Nov 23 23:01:58.890815 containerd[1525]: time="2025-11-23T23:01:58.890773220Z" level=error msg="Failed to destroy network for sandbox \"2467b880228d3a47437ec508a8d4a7bbb381930ebde603be14ed0c8f6d392247\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.893832 containerd[1525]: time="2025-11-23T23:01:58.893701567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnfrc,Uid:16769e22-23bd-4950-9cc0-72958bdfa903,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2467b880228d3a47437ec508a8d4a7bbb381930ebde603be14ed0c8f6d392247\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.894127 kubelet[2782]: E1123 23:01:58.893992 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2467b880228d3a47437ec508a8d4a7bbb381930ebde603be14ed0c8f6d392247\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.894127 kubelet[2782]: E1123 23:01:58.894056 2782 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2467b880228d3a47437ec508a8d4a7bbb381930ebde603be14ed0c8f6d392247\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jnfrc" Nov 23 23:01:58.894127 kubelet[2782]: E1123 23:01:58.894074 2782 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2467b880228d3a47437ec508a8d4a7bbb381930ebde603be14ed0c8f6d392247\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jnfrc" Nov 23 23:01:58.894230 containerd[1525]: time="2025-11-23T23:01:58.894021477Z" level=error msg="Failed to destroy network for sandbox \"520e91ca5cc5ae418e5e79245ade3f75b5da0b588c6f7de8c7338537647a0128\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.894256 kubelet[2782]: E1123 23:01:58.894127 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jnfrc_calico-system(16769e22-23bd-4950-9cc0-72958bdfa903)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jnfrc_calico-system(16769e22-23bd-4950-9cc0-72958bdfa903)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2467b880228d3a47437ec508a8d4a7bbb381930ebde603be14ed0c8f6d392247\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jnfrc" podUID="16769e22-23bd-4950-9cc0-72958bdfa903" Nov 23 23:01:58.896950 containerd[1525]: time="2025-11-23T23:01:58.896691192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74998f44b6-zvwmg,Uid:c7c9f1f0-a20f-4cd1-87de-e2a910e5566a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"520e91ca5cc5ae418e5e79245ade3f75b5da0b588c6f7de8c7338537647a0128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.897056 
kubelet[2782]: E1123 23:01:58.896946 2782 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"520e91ca5cc5ae418e5e79245ade3f75b5da0b588c6f7de8c7338537647a0128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.897056 kubelet[2782]: E1123 23:01:58.897009 2782 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"520e91ca5cc5ae418e5e79245ade3f75b5da0b588c6f7de8c7338537647a0128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" Nov 23 23:01:58.897056 kubelet[2782]: E1123 23:01:58.897032 2782 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"520e91ca5cc5ae418e5e79245ade3f75b5da0b588c6f7de8c7338537647a0128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" Nov 23 23:01:58.897159 kubelet[2782]: E1123 23:01:58.897106 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74998f44b6-zvwmg_calico-system(c7c9f1f0-a20f-4cd1-87de-e2a910e5566a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74998f44b6-zvwmg_calico-system(c7c9f1f0-a20f-4cd1-87de-e2a910e5566a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"520e91ca5cc5ae418e5e79245ade3f75b5da0b588c6f7de8c7338537647a0128\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" podUID="c7c9f1f0-a20f-4cd1-87de-e2a910e5566a" Nov 23 23:02:05.527385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount348423391.mount: Deactivated successfully. Nov 23 23:02:05.568149 containerd[1525]: time="2025-11-23T23:02:05.568089999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:05.569181 containerd[1525]: time="2025-11-23T23:02:05.569017068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 23 23:02:05.570042 containerd[1525]: time="2025-11-23T23:02:05.570000217Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:05.572352 containerd[1525]: time="2025-11-23T23:02:05.572291831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:05.573359 containerd[1525]: time="2025-11-23T23:02:05.573074823Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.852167835s" Nov 23 23:02:05.573359 containerd[1525]: time="2025-11-23T23:02:05.573106782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference 
\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 23 23:02:05.595968 containerd[1525]: time="2025-11-23T23:02:05.595885205Z" level=info msg="CreateContainer within sandbox \"847bd93d27de0a45b6834d4e095481d41b22f008940a4bbd26c305b9a4b6f750\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 23 23:02:05.607786 containerd[1525]: time="2025-11-23T23:02:05.607739512Z" level=info msg="Container 1d9554cbd1b81d65a5c5559c465cb9bee8c783b8de58b4f92d7602df2bb1be7b: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:02:05.615592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2537415362.mount: Deactivated successfully. Nov 23 23:02:05.628555 containerd[1525]: time="2025-11-23T23:02:05.628487958Z" level=info msg="CreateContainer within sandbox \"847bd93d27de0a45b6834d4e095481d41b22f008940a4bbd26c305b9a4b6f750\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1d9554cbd1b81d65a5c5559c465cb9bee8c783b8de58b4f92d7602df2bb1be7b\"" Nov 23 23:02:05.629347 containerd[1525]: time="2025-11-23T23:02:05.629295908Z" level=info msg="StartContainer for \"1d9554cbd1b81d65a5c5559c465cb9bee8c783b8de58b4f92d7602df2bb1be7b\"" Nov 23 23:02:05.631500 containerd[1525]: time="2025-11-23T23:02:05.631462444Z" level=info msg="connecting to shim 1d9554cbd1b81d65a5c5559c465cb9bee8c783b8de58b4f92d7602df2bb1be7b" address="unix:///run/containerd/s/924a8d030d3f7e654e95206155fd726198a8e70d75aa0757f1c90b0e9d0a9905" protocol=ttrpc version=3 Nov 23 23:02:05.682847 systemd[1]: Started cri-containerd-1d9554cbd1b81d65a5c5559c465cb9bee8c783b8de58b4f92d7602df2bb1be7b.scope - libcontainer container 1d9554cbd1b81d65a5c5559c465cb9bee8c783b8de58b4f92d7602df2bb1be7b. Nov 23 23:02:05.765022 containerd[1525]: time="2025-11-23T23:02:05.764977458Z" level=info msg="StartContainer for \"1d9554cbd1b81d65a5c5559c465cb9bee8c783b8de58b4f92d7602df2bb1be7b\" returns successfully" Nov 23 23:02:05.917552 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Nov 23 23:02:05.917746 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. Nov 23 23:02:06.158029 kubelet[2782]: I1123 23:02:06.157307 2782 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18619a94-547d-4f61-83ee-6c03707255a7-whisker-ca-bundle\") pod \"18619a94-547d-4f61-83ee-6c03707255a7\" (UID: \"18619a94-547d-4f61-83ee-6c03707255a7\") " Nov 23 23:02:06.158029 kubelet[2782]: I1123 23:02:06.157465 2782 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/18619a94-547d-4f61-83ee-6c03707255a7-whisker-backend-key-pair\") pod \"18619a94-547d-4f61-83ee-6c03707255a7\" (UID: \"18619a94-547d-4f61-83ee-6c03707255a7\") " Nov 23 23:02:06.158029 kubelet[2782]: I1123 23:02:06.157557 2782 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrjlv\" (UniqueName: \"kubernetes.io/projected/18619a94-547d-4f61-83ee-6c03707255a7-kube-api-access-rrjlv\") pod \"18619a94-547d-4f61-83ee-6c03707255a7\" (UID: \"18619a94-547d-4f61-83ee-6c03707255a7\") " Nov 23 23:02:06.176348 kubelet[2782]: I1123 23:02:06.176258 2782 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18619a94-547d-4f61-83ee-6c03707255a7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "18619a94-547d-4f61-83ee-6c03707255a7" (UID: "18619a94-547d-4f61-83ee-6c03707255a7"). InnerVolumeSpecName "whisker-ca-bundle".
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 23 23:02:06.177759 kubelet[2782]: I1123 23:02:06.177602 2782 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18619a94-547d-4f61-83ee-6c03707255a7-kube-api-access-rrjlv" (OuterVolumeSpecName: "kube-api-access-rrjlv") pod "18619a94-547d-4f61-83ee-6c03707255a7" (UID: "18619a94-547d-4f61-83ee-6c03707255a7"). InnerVolumeSpecName "kube-api-access-rrjlv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 23 23:02:06.177759 kubelet[2782]: I1123 23:02:06.177701 2782 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18619a94-547d-4f61-83ee-6c03707255a7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "18619a94-547d-4f61-83ee-6c03707255a7" (UID: "18619a94-547d-4f61-83ee-6c03707255a7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 23 23:02:06.258537 kubelet[2782]: I1123 23:02:06.258459 2782 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18619a94-547d-4f61-83ee-6c03707255a7-whisker-ca-bundle\") on node \"ci-4459-2-1-9-52b78fad11\" DevicePath \"\"" Nov 23 23:02:06.258537 kubelet[2782]: I1123 23:02:06.258517 2782 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/18619a94-547d-4f61-83ee-6c03707255a7-whisker-backend-key-pair\") on node \"ci-4459-2-1-9-52b78fad11\" DevicePath \"\"" Nov 23 23:02:06.258537 kubelet[2782]: I1123 23:02:06.258541 2782 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rrjlv\" (UniqueName: \"kubernetes.io/projected/18619a94-547d-4f61-83ee-6c03707255a7-kube-api-access-rrjlv\") on node \"ci-4459-2-1-9-52b78fad11\" DevicePath \"\"" Nov 23 23:02:06.528199 systemd[1]: 
var-lib-kubelet-pods-18619a94\x2d547d\x2d4f61\x2d83ee\x2d6c03707255a7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drrjlv.mount: Deactivated successfully. Nov 23 23:02:06.528320 systemd[1]: var-lib-kubelet-pods-18619a94\x2d547d\x2d4f61\x2d83ee\x2d6c03707255a7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 23 23:02:06.769432 systemd[1]: Removed slice kubepods-besteffort-pod18619a94_547d_4f61_83ee_6c03707255a7.slice - libcontainer container kubepods-besteffort-pod18619a94_547d_4f61_83ee_6c03707255a7.slice. Nov 23 23:02:06.791781 kubelet[2782]: I1123 23:02:06.790957 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-58wcp" podStartSLOduration=2.359962451 podStartE2EDuration="16.790938609s" podCreationTimestamp="2025-11-23 23:01:50 +0000 UTC" firstStartedPulling="2025-11-23 23:01:51.143091253 +0000 UTC m=+27.725497037" lastFinishedPulling="2025-11-23 23:02:05.574067451 +0000 UTC m=+42.156473195" observedRunningTime="2025-11-23 23:02:06.790584772 +0000 UTC m=+43.372990556" watchObservedRunningTime="2025-11-23 23:02:06.790938609 +0000 UTC m=+43.373344393" Nov 23 23:02:06.881031 systemd[1]: Created slice kubepods-besteffort-pod186dba02_b9bc_46ba_b1f6_48d2be5bbd68.slice - libcontainer container kubepods-besteffort-pod186dba02_b9bc_46ba_b1f6_48d2be5bbd68.slice. 
Nov 23 23:02:06.964401 kubelet[2782]: I1123 23:02:06.964105 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/186dba02-b9bc-46ba-b1f6-48d2be5bbd68-whisker-ca-bundle\") pod \"whisker-78fbf9698b-q5ccl\" (UID: \"186dba02-b9bc-46ba-b1f6-48d2be5bbd68\") " pod="calico-system/whisker-78fbf9698b-q5ccl" Nov 23 23:02:06.964401 kubelet[2782]: I1123 23:02:06.964186 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5xzl\" (UniqueName: \"kubernetes.io/projected/186dba02-b9bc-46ba-b1f6-48d2be5bbd68-kube-api-access-k5xzl\") pod \"whisker-78fbf9698b-q5ccl\" (UID: \"186dba02-b9bc-46ba-b1f6-48d2be5bbd68\") " pod="calico-system/whisker-78fbf9698b-q5ccl" Nov 23 23:02:06.964401 kubelet[2782]: I1123 23:02:06.964244 2782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/186dba02-b9bc-46ba-b1f6-48d2be5bbd68-whisker-backend-key-pair\") pod \"whisker-78fbf9698b-q5ccl\" (UID: \"186dba02-b9bc-46ba-b1f6-48d2be5bbd68\") " pod="calico-system/whisker-78fbf9698b-q5ccl" Nov 23 23:02:07.186769 containerd[1525]: time="2025-11-23T23:02:07.186619794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78fbf9698b-q5ccl,Uid:186dba02-b9bc-46ba-b1f6-48d2be5bbd68,Namespace:calico-system,Attempt:0,}" Nov 23 23:02:07.392145 systemd-networkd[1413]: cali79b8ebbd131: Link UP Nov 23 23:02:07.393579 systemd-networkd[1413]: cali79b8ebbd131: Gained carrier Nov 23 23:02:07.417206 containerd[1525]: 2025-11-23 23:02:07.219 [INFO][3875] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:02:07.417206 containerd[1525]: 2025-11-23 23:02:07.273 [INFO][3875] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459--2--1--9--52b78fad11-k8s-whisker--78fbf9698b--q5ccl-eth0 whisker-78fbf9698b- calico-system 186dba02-b9bc-46ba-b1f6-48d2be5bbd68 904 0 2025-11-23 23:02:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78fbf9698b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-1-9-52b78fad11 whisker-78fbf9698b-q5ccl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali79b8ebbd131 [] [] }} ContainerID="b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808" Namespace="calico-system" Pod="whisker-78fbf9698b-q5ccl" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-whisker--78fbf9698b--q5ccl-" Nov 23 23:02:07.417206 containerd[1525]: 2025-11-23 23:02:07.273 [INFO][3875] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808" Namespace="calico-system" Pod="whisker-78fbf9698b-q5ccl" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-whisker--78fbf9698b--q5ccl-eth0" Nov 23 23:02:07.417206 containerd[1525]: 2025-11-23 23:02:07.323 [INFO][3887] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808" HandleID="k8s-pod-network.b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808" Workload="ci--4459--2--1--9--52b78fad11-k8s-whisker--78fbf9698b--q5ccl-eth0" Nov 23 23:02:07.417543 containerd[1525]: 2025-11-23 23:02:07.323 [INFO][3887] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808" HandleID="k8s-pod-network.b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808" Workload="ci--4459--2--1--9--52b78fad11-k8s-whisker--78fbf9698b--q5ccl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000255770), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-1-9-52b78fad11", "pod":"whisker-78fbf9698b-q5ccl", "timestamp":"2025-11-23 23:02:07.323277467 +0000 UTC"}, Hostname:"ci-4459-2-1-9-52b78fad11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:02:07.417543 containerd[1525]: 2025-11-23 23:02:07.323 [INFO][3887] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:02:07.417543 containerd[1525]: 2025-11-23 23:02:07.323 [INFO][3887] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:02:07.417543 containerd[1525]: 2025-11-23 23:02:07.323 [INFO][3887] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-1-9-52b78fad11' Nov 23 23:02:07.417543 containerd[1525]: 2025-11-23 23:02:07.337 [INFO][3887] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:07.417543 containerd[1525]: 2025-11-23 23:02:07.345 [INFO][3887] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:07.417543 containerd[1525]: 2025-11-23 23:02:07.353 [INFO][3887] ipam/ipam.go 511: Trying affinity for 192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:07.417543 containerd[1525]: 2025-11-23 23:02:07.355 [INFO][3887] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:07.417543 containerd[1525]: 2025-11-23 23:02:07.358 [INFO][3887] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:07.417729 containerd[1525]: 2025-11-23 23:02:07.358 [INFO][3887] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.0/26 
handle="k8s-pod-network.b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:07.417729 containerd[1525]: 2025-11-23 23:02:07.361 [INFO][3887] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808 Nov 23 23:02:07.417729 containerd[1525]: 2025-11-23 23:02:07.367 [INFO][3887] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.0/26 handle="k8s-pod-network.b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:07.417729 containerd[1525]: 2025-11-23 23:02:07.375 [INFO][3887] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.1/26] block=192.168.45.0/26 handle="k8s-pod-network.b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:07.417729 containerd[1525]: 2025-11-23 23:02:07.376 [INFO][3887] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.1/26] handle="k8s-pod-network.b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:07.417729 containerd[1525]: 2025-11-23 23:02:07.376 [INFO][3887] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:02:07.417729 containerd[1525]: 2025-11-23 23:02:07.376 [INFO][3887] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.1/26] IPv6=[] ContainerID="b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808" HandleID="k8s-pod-network.b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808" Workload="ci--4459--2--1--9--52b78fad11-k8s-whisker--78fbf9698b--q5ccl-eth0" Nov 23 23:02:07.417855 containerd[1525]: 2025-11-23 23:02:07.380 [INFO][3875] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808" Namespace="calico-system" Pod="whisker-78fbf9698b-q5ccl" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-whisker--78fbf9698b--q5ccl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--9--52b78fad11-k8s-whisker--78fbf9698b--q5ccl-eth0", GenerateName:"whisker-78fbf9698b-", Namespace:"calico-system", SelfLink:"", UID:"186dba02-b9bc-46ba-b1f6-48d2be5bbd68", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 2, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78fbf9698b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-9-52b78fad11", ContainerID:"", Pod:"whisker-78fbf9698b-q5ccl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.45.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali79b8ebbd131", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:07.417855 containerd[1525]: 2025-11-23 23:02:07.380 [INFO][3875] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.1/32] ContainerID="b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808" Namespace="calico-system" Pod="whisker-78fbf9698b-q5ccl" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-whisker--78fbf9698b--q5ccl-eth0" Nov 23 23:02:07.417931 containerd[1525]: 2025-11-23 23:02:07.380 [INFO][3875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79b8ebbd131 ContainerID="b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808" Namespace="calico-system" Pod="whisker-78fbf9698b-q5ccl" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-whisker--78fbf9698b--q5ccl-eth0" Nov 23 23:02:07.417931 containerd[1525]: 2025-11-23 23:02:07.393 [INFO][3875] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808" Namespace="calico-system" Pod="whisker-78fbf9698b-q5ccl" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-whisker--78fbf9698b--q5ccl-eth0" Nov 23 23:02:07.417980 containerd[1525]: 2025-11-23 23:02:07.393 [INFO][3875] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808" Namespace="calico-system" Pod="whisker-78fbf9698b-q5ccl" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-whisker--78fbf9698b--q5ccl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--9--52b78fad11-k8s-whisker--78fbf9698b--q5ccl-eth0", GenerateName:"whisker-78fbf9698b-", Namespace:"calico-system", SelfLink:"", 
UID:"186dba02-b9bc-46ba-b1f6-48d2be5bbd68", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 2, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78fbf9698b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-9-52b78fad11", ContainerID:"b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808", Pod:"whisker-78fbf9698b-q5ccl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.45.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali79b8ebbd131", MAC:"4e:eb:c7:c2:ba:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:07.418026 containerd[1525]: 2025-11-23 23:02:07.413 [INFO][3875] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808" Namespace="calico-system" Pod="whisker-78fbf9698b-q5ccl" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-whisker--78fbf9698b--q5ccl-eth0" Nov 23 23:02:07.469769 containerd[1525]: time="2025-11-23T23:02:07.469535281Z" level=info msg="connecting to shim b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808" address="unix:///run/containerd/s/b00c7fdd8fbdec1619ccf1fc93c7f7606cf892b2e4c5df259a756ad3032326f3" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:07.559049 systemd[1]: Started 
cri-containerd-b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808.scope - libcontainer container b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808. Nov 23 23:02:07.573001 kubelet[2782]: I1123 23:02:07.572950 2782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18619a94-547d-4f61-83ee-6c03707255a7" path="/var/lib/kubelet/pods/18619a94-547d-4f61-83ee-6c03707255a7/volumes" Nov 23 23:02:07.668877 containerd[1525]: time="2025-11-23T23:02:07.668822646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78fbf9698b-q5ccl,Uid:186dba02-b9bc-46ba-b1f6-48d2be5bbd68,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4a429a8838744b4a81d8365d6bc1b15727c4b7a7fb0dbffa839150f1e261808\"" Nov 23 23:02:07.674465 containerd[1525]: time="2025-11-23T23:02:07.674147053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:02:08.012533 containerd[1525]: time="2025-11-23T23:02:08.012466386Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:08.014115 containerd[1525]: time="2025-11-23T23:02:08.014049260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:02:08.014254 containerd[1525]: time="2025-11-23T23:02:08.014065180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:02:08.014544 kubelet[2782]: E1123 23:02:08.014496 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:02:08.014638 kubelet[2782]: E1123 23:02:08.014562 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:02:08.022014 kubelet[2782]: E1123 23:02:08.021949 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8713a7dea7e0400190dcc0e99de68523,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k5xzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePol
icy{},RestartPolicy:nil,} start failed in pod whisker-78fbf9698b-q5ccl_calico-system(186dba02-b9bc-46ba-b1f6-48d2be5bbd68): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:08.024881 containerd[1525]: time="2025-11-23T23:02:08.024788500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:02:08.375273 containerd[1525]: time="2025-11-23T23:02:08.375117698Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:08.378169 containerd[1525]: time="2025-11-23T23:02:08.378106567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:02:08.378386 containerd[1525]: time="2025-11-23T23:02:08.378147607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:02:08.378748 kubelet[2782]: E1123 23:02:08.378459 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:02:08.378748 kubelet[2782]: E1123 23:02:08.378509 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:02:08.378849 kubelet[2782]: E1123 23:02:08.378634 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5xzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMes
sagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78fbf9698b-q5ccl_calico-system(186dba02-b9bc-46ba-b1f6-48d2be5bbd68): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:08.380118 kubelet[2782]: E1123 23:02:08.380055 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78fbf9698b-q5ccl" podUID="186dba02-b9bc-46ba-b1f6-48d2be5bbd68" Nov 23 23:02:08.769615 kubelet[2782]: E1123 23:02:08.769561 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78fbf9698b-q5ccl" podUID="186dba02-b9bc-46ba-b1f6-48d2be5bbd68" Nov 23 23:02:09.090553 systemd-networkd[1413]: cali79b8ebbd131: Gained IPv6LL Nov 23 23:02:09.562023 containerd[1525]: time="2025-11-23T23:02:09.561530738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf6b75475-8n9bk,Uid:319eb40d-b16d-4daa-b6ab-a4e6de765a83,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:02:09.562715 containerd[1525]: time="2025-11-23T23:02:09.562432617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qzvtm,Uid:147a3fcd-da80-4b14-916a-786fd7363b2a,Namespace:calico-system,Attempt:0,}" Nov 23 23:02:09.749057 systemd-networkd[1413]: cali02c5c356f6a: Link UP Nov 23 23:02:09.750706 systemd-networkd[1413]: cali02c5c356f6a: Gained carrier Nov 23 23:02:09.767198 containerd[1525]: 2025-11-23 23:02:09.614 [INFO][4115] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:02:09.767198 containerd[1525]: 2025-11-23 23:02:09.639 [INFO][4115] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--6bf6b75475--8n9bk-eth0 calico-apiserver-6bf6b75475- calico-apiserver 319eb40d-b16d-4daa-b6ab-a4e6de765a83 835 0 2025-11-23 23:01:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6bf6b75475 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-1-9-52b78fad11 
calico-apiserver-6bf6b75475-8n9bk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali02c5c356f6a [] [] }} ContainerID="7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c" Namespace="calico-apiserver" Pod="calico-apiserver-6bf6b75475-8n9bk" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--6bf6b75475--8n9bk-" Nov 23 23:02:09.767198 containerd[1525]: 2025-11-23 23:02:09.639 [INFO][4115] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c" Namespace="calico-apiserver" Pod="calico-apiserver-6bf6b75475-8n9bk" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--6bf6b75475--8n9bk-eth0" Nov 23 23:02:09.767198 containerd[1525]: 2025-11-23 23:02:09.675 [INFO][4140] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c" HandleID="k8s-pod-network.7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c" Workload="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--6bf6b75475--8n9bk-eth0" Nov 23 23:02:09.767491 containerd[1525]: 2025-11-23 23:02:09.675 [INFO][4140] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c" HandleID="k8s-pod-network.7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c" Workload="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--6bf6b75475--8n9bk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-1-9-52b78fad11", "pod":"calico-apiserver-6bf6b75475-8n9bk", "timestamp":"2025-11-23 23:02:09.675723896 +0000 UTC"}, Hostname:"ci-4459-2-1-9-52b78fad11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:02:09.767491 containerd[1525]: 2025-11-23 23:02:09.675 [INFO][4140] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:02:09.767491 containerd[1525]: 2025-11-23 23:02:09.675 [INFO][4140] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:02:09.767491 containerd[1525]: 2025-11-23 23:02:09.676 [INFO][4140] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-1-9-52b78fad11' Nov 23 23:02:09.767491 containerd[1525]: 2025-11-23 23:02:09.687 [INFO][4140] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:09.767491 containerd[1525]: 2025-11-23 23:02:09.694 [INFO][4140] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:09.767491 containerd[1525]: 2025-11-23 23:02:09.701 [INFO][4140] ipam/ipam.go 511: Trying affinity for 192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:09.767491 containerd[1525]: 2025-11-23 23:02:09.704 [INFO][4140] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:09.767491 containerd[1525]: 2025-11-23 23:02:09.707 [INFO][4140] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:09.768692 containerd[1525]: 2025-11-23 23:02:09.707 [INFO][4140] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.0/26 handle="k8s-pod-network.7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:09.768692 containerd[1525]: 2025-11-23 23:02:09.709 [INFO][4140] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c Nov 23 23:02:09.768692 
containerd[1525]: 2025-11-23 23:02:09.717 [INFO][4140] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.0/26 handle="k8s-pod-network.7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:09.768692 containerd[1525]: 2025-11-23 23:02:09.728 [INFO][4140] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.2/26] block=192.168.45.0/26 handle="k8s-pod-network.7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:09.768692 containerd[1525]: 2025-11-23 23:02:09.728 [INFO][4140] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.2/26] handle="k8s-pod-network.7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:09.768692 containerd[1525]: 2025-11-23 23:02:09.729 [INFO][4140] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:02:09.768692 containerd[1525]: 2025-11-23 23:02:09.729 [INFO][4140] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.2/26] IPv6=[] ContainerID="7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c" HandleID="k8s-pod-network.7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c" Workload="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--6bf6b75475--8n9bk-eth0" Nov 23 23:02:09.768865 containerd[1525]: 2025-11-23 23:02:09.734 [INFO][4115] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c" Namespace="calico-apiserver" Pod="calico-apiserver-6bf6b75475-8n9bk" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--6bf6b75475--8n9bk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--6bf6b75475--8n9bk-eth0", GenerateName:"calico-apiserver-6bf6b75475-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"319eb40d-b16d-4daa-b6ab-a4e6de765a83", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf6b75475", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-9-52b78fad11", ContainerID:"", Pod:"calico-apiserver-6bf6b75475-8n9bk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali02c5c356f6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:09.768932 containerd[1525]: 2025-11-23 23:02:09.734 [INFO][4115] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.2/32] ContainerID="7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c" Namespace="calico-apiserver" Pod="calico-apiserver-6bf6b75475-8n9bk" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--6bf6b75475--8n9bk-eth0" Nov 23 23:02:09.768932 containerd[1525]: 2025-11-23 23:02:09.734 [INFO][4115] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02c5c356f6a ContainerID="7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c" Namespace="calico-apiserver" Pod="calico-apiserver-6bf6b75475-8n9bk" 
WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--6bf6b75475--8n9bk-eth0" Nov 23 23:02:09.768932 containerd[1525]: 2025-11-23 23:02:09.751 [INFO][4115] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c" Namespace="calico-apiserver" Pod="calico-apiserver-6bf6b75475-8n9bk" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--6bf6b75475--8n9bk-eth0" Nov 23 23:02:09.768990 containerd[1525]: 2025-11-23 23:02:09.752 [INFO][4115] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c" Namespace="calico-apiserver" Pod="calico-apiserver-6bf6b75475-8n9bk" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--6bf6b75475--8n9bk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--6bf6b75475--8n9bk-eth0", GenerateName:"calico-apiserver-6bf6b75475-", Namespace:"calico-apiserver", SelfLink:"", UID:"319eb40d-b16d-4daa-b6ab-a4e6de765a83", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6bf6b75475", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-9-52b78fad11", 
ContainerID:"7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c", Pod:"calico-apiserver-6bf6b75475-8n9bk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali02c5c356f6a", MAC:"9e:a0:fb:d5:75:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:09.769038 containerd[1525]: 2025-11-23 23:02:09.763 [INFO][4115] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c" Namespace="calico-apiserver" Pod="calico-apiserver-6bf6b75475-8n9bk" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--6bf6b75475--8n9bk-eth0" Nov 23 23:02:09.802083 containerd[1525]: time="2025-11-23T23:02:09.801963996Z" level=info msg="connecting to shim 7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c" address="unix:///run/containerd/s/25791185386aa594e74bc2253a410dff527d9eb2918ec07491ee2b858098a71a" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:09.837565 systemd[1]: Started cri-containerd-7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c.scope - libcontainer container 7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c. 
Nov 23 23:02:09.859724 systemd-networkd[1413]: caliac2f06090f5: Link UP Nov 23 23:02:09.861379 systemd-networkd[1413]: caliac2f06090f5: Gained carrier Nov 23 23:02:09.882731 containerd[1525]: 2025-11-23 23:02:09.617 [INFO][4119] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:02:09.882731 containerd[1525]: 2025-11-23 23:02:09.641 [INFO][4119] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--1--9--52b78fad11-k8s-goldmane--666569f655--qzvtm-eth0 goldmane-666569f655- calico-system 147a3fcd-da80-4b14-916a-786fd7363b2a 839 0 2025-11-23 23:01:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-1-9-52b78fad11 goldmane-666569f655-qzvtm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliac2f06090f5 [] [] }} ContainerID="2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f" Namespace="calico-system" Pod="goldmane-666569f655-qzvtm" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-goldmane--666569f655--qzvtm-" Nov 23 23:02:09.882731 containerd[1525]: 2025-11-23 23:02:09.641 [INFO][4119] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f" Namespace="calico-system" Pod="goldmane-666569f655-qzvtm" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-goldmane--666569f655--qzvtm-eth0" Nov 23 23:02:09.882731 containerd[1525]: 2025-11-23 23:02:09.677 [INFO][4142] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f" HandleID="k8s-pod-network.2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f" Workload="ci--4459--2--1--9--52b78fad11-k8s-goldmane--666569f655--qzvtm-eth0" Nov 23 
23:02:09.883282 containerd[1525]: 2025-11-23 23:02:09.677 [INFO][4142] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f" HandleID="k8s-pod-network.2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f" Workload="ci--4459--2--1--9--52b78fad11-k8s-goldmane--666569f655--qzvtm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb6d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-1-9-52b78fad11", "pod":"goldmane-666569f655-qzvtm", "timestamp":"2025-11-23 23:02:09.677167094 +0000 UTC"}, Hostname:"ci-4459-2-1-9-52b78fad11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:02:09.883282 containerd[1525]: 2025-11-23 23:02:09.677 [INFO][4142] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:02:09.883282 containerd[1525]: 2025-11-23 23:02:09.728 [INFO][4142] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:02:09.883282 containerd[1525]: 2025-11-23 23:02:09.729 [INFO][4142] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-1-9-52b78fad11' Nov 23 23:02:09.883282 containerd[1525]: 2025-11-23 23:02:09.791 [INFO][4142] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:09.883282 containerd[1525]: 2025-11-23 23:02:09.802 [INFO][4142] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:09.883282 containerd[1525]: 2025-11-23 23:02:09.817 [INFO][4142] ipam/ipam.go 511: Trying affinity for 192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:09.883282 containerd[1525]: 2025-11-23 23:02:09.822 [INFO][4142] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:09.883282 containerd[1525]: 2025-11-23 23:02:09.827 [INFO][4142] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:09.884139 containerd[1525]: 2025-11-23 23:02:09.827 [INFO][4142] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.0/26 handle="k8s-pod-network.2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:09.884139 containerd[1525]: 2025-11-23 23:02:09.832 [INFO][4142] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f Nov 23 23:02:09.884139 containerd[1525]: 2025-11-23 23:02:09.840 [INFO][4142] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.0/26 handle="k8s-pod-network.2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:09.884139 containerd[1525]: 2025-11-23 23:02:09.851 [INFO][4142] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.45.3/26] block=192.168.45.0/26 handle="k8s-pod-network.2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:09.884139 containerd[1525]: 2025-11-23 23:02:09.851 [INFO][4142] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.3/26] handle="k8s-pod-network.2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:09.884139 containerd[1525]: 2025-11-23 23:02:09.851 [INFO][4142] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:02:09.884139 containerd[1525]: 2025-11-23 23:02:09.852 [INFO][4142] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.3/26] IPv6=[] ContainerID="2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f" HandleID="k8s-pod-network.2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f" Workload="ci--4459--2--1--9--52b78fad11-k8s-goldmane--666569f655--qzvtm-eth0" Nov 23 23:02:09.884322 containerd[1525]: 2025-11-23 23:02:09.855 [INFO][4119] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f" Namespace="calico-system" Pod="goldmane-666569f655-qzvtm" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-goldmane--666569f655--qzvtm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--9--52b78fad11-k8s-goldmane--666569f655--qzvtm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"147a3fcd-da80-4b14-916a-786fd7363b2a", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-9-52b78fad11", ContainerID:"", Pod:"goldmane-666569f655-qzvtm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.45.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliac2f06090f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:09.884731 containerd[1525]: 2025-11-23 23:02:09.855 [INFO][4119] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.3/32] ContainerID="2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f" Namespace="calico-system" Pod="goldmane-666569f655-qzvtm" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-goldmane--666569f655--qzvtm-eth0" Nov 23 23:02:09.884731 containerd[1525]: 2025-11-23 23:02:09.855 [INFO][4119] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac2f06090f5 ContainerID="2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f" Namespace="calico-system" Pod="goldmane-666569f655-qzvtm" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-goldmane--666569f655--qzvtm-eth0" Nov 23 23:02:09.884731 containerd[1525]: 2025-11-23 23:02:09.862 [INFO][4119] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f" Namespace="calico-system" Pod="goldmane-666569f655-qzvtm" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-goldmane--666569f655--qzvtm-eth0" Nov 23 23:02:09.884846 containerd[1525]: 2025-11-23 23:02:09.862 [INFO][4119] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f" Namespace="calico-system" Pod="goldmane-666569f655-qzvtm" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-goldmane--666569f655--qzvtm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--9--52b78fad11-k8s-goldmane--666569f655--qzvtm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"147a3fcd-da80-4b14-916a-786fd7363b2a", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-9-52b78fad11", ContainerID:"2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f", Pod:"goldmane-666569f655-qzvtm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.45.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliac2f06090f5", MAC:"3a:00:76:89:01:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:09.885233 containerd[1525]: 2025-11-23 23:02:09.878 [INFO][4119] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f" Namespace="calico-system" Pod="goldmane-666569f655-qzvtm" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-goldmane--666569f655--qzvtm-eth0" Nov 23 23:02:09.922569 containerd[1525]: time="2025-11-23T23:02:09.922526624Z" level=info msg="connecting to shim 2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f" address="unix:///run/containerd/s/305d5e2731a2de31e97193ad5d5387456c5cfe5bc6f978fa5039963e851eaa5a" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:09.933517 containerd[1525]: time="2025-11-23T23:02:09.933421328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6bf6b75475-8n9bk,Uid:319eb40d-b16d-4daa-b6ab-a4e6de765a83,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7e2ef279e279030f91096d233a2502a0ab906a70881ce25e6f91fab5f8d6c14c\"" Nov 23 23:02:09.936892 containerd[1525]: time="2025-11-23T23:02:09.936851483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:02:09.958860 systemd[1]: Started cri-containerd-2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f.scope - libcontainer container 2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f. 
Nov 23 23:02:10.038042 containerd[1525]: time="2025-11-23T23:02:10.037994385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qzvtm,Uid:147a3fcd-da80-4b14-916a-786fd7363b2a,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f25a3b687e213f78e3b9cfffd7cbf384bab036b48d624c518987079eacfe87f\"" Nov 23 23:02:10.289897 containerd[1525]: time="2025-11-23T23:02:10.289818358Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:10.291522 containerd[1525]: time="2025-11-23T23:02:10.291210919Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:02:10.291522 containerd[1525]: time="2025-11-23T23:02:10.291454079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:10.291764 kubelet[2782]: E1123 23:02:10.291687 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:10.292570 kubelet[2782]: E1123 23:02:10.291770 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:10.292722 containerd[1525]: time="2025-11-23T23:02:10.292184640Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:02:10.293411 kubelet[2782]: E1123 23:02:10.293234 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pr9fc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bf6b75475-8n9bk_calico-apiserver(319eb40d-b16d-4daa-b6ab-a4e6de765a83): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:10.295048 kubelet[2782]: E1123 23:02:10.294958 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" podUID="319eb40d-b16d-4daa-b6ab-a4e6de765a83" Nov 23 23:02:10.562803 containerd[1525]: time="2025-11-23T23:02:10.562605149Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-dp9vd,Uid:4f684462-e515-4517-a943-3140e39f12d8,Namespace:kube-system,Attempt:0,}" Nov 23 23:02:10.625619 containerd[1525]: time="2025-11-23T23:02:10.625557362Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:10.626853 containerd[1525]: time="2025-11-23T23:02:10.626773443Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:02:10.626853 containerd[1525]: time="2025-11-23T23:02:10.626819083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:10.627038 kubelet[2782]: E1123 23:02:10.626999 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:02:10.627092 kubelet[2782]: E1123 23:02:10.627051 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:02:10.627225 kubelet[2782]: E1123 23:02:10.627176 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsmnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qzvtm_calico-system(147a3fcd-da80-4b14-916a-786fd7363b2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:10.628554 kubelet[2782]: E1123 23:02:10.628448 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qzvtm" podUID="147a3fcd-da80-4b14-916a-786fd7363b2a" Nov 23 23:02:10.716205 systemd-networkd[1413]: cali71aa985ffd3: Link UP Nov 23 23:02:10.717105 systemd-networkd[1413]: cali71aa985ffd3: Gained carrier Nov 23 23:02:10.738029 containerd[1525]: 2025-11-23 23:02:10.595 [INFO][4276] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Nov 23 23:02:10.738029 containerd[1525]: 2025-11-23 23:02:10.613 [INFO][4276] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--dp9vd-eth0 coredns-674b8bbfcf- kube-system 4f684462-e515-4517-a943-3140e39f12d8 833 0 2025-11-23 23:01:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-1-9-52b78fad11 coredns-674b8bbfcf-dp9vd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali71aa985ffd3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4" Namespace="kube-system" Pod="coredns-674b8bbfcf-dp9vd" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--dp9vd-" Nov 23 23:02:10.738029 containerd[1525]: 2025-11-23 23:02:10.613 [INFO][4276] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4" Namespace="kube-system" Pod="coredns-674b8bbfcf-dp9vd" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--dp9vd-eth0" Nov 23 23:02:10.738029 containerd[1525]: 2025-11-23 23:02:10.647 [INFO][4289] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4" HandleID="k8s-pod-network.e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4" Workload="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--dp9vd-eth0" Nov 23 23:02:10.738292 containerd[1525]: 2025-11-23 23:02:10.647 [INFO][4289] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4" 
HandleID="k8s-pod-network.e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4" Workload="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--dp9vd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-1-9-52b78fad11", "pod":"coredns-674b8bbfcf-dp9vd", "timestamp":"2025-11-23 23:02:10.647770581 +0000 UTC"}, Hostname:"ci-4459-2-1-9-52b78fad11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:02:10.738292 containerd[1525]: 2025-11-23 23:02:10.648 [INFO][4289] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:02:10.738292 containerd[1525]: 2025-11-23 23:02:10.648 [INFO][4289] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:02:10.738292 containerd[1525]: 2025-11-23 23:02:10.648 [INFO][4289] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-1-9-52b78fad11' Nov 23 23:02:10.738292 containerd[1525]: 2025-11-23 23:02:10.660 [INFO][4289] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:10.738292 containerd[1525]: 2025-11-23 23:02:10.671 [INFO][4289] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:10.738292 containerd[1525]: 2025-11-23 23:02:10.678 [INFO][4289] ipam/ipam.go 511: Trying affinity for 192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:10.738292 containerd[1525]: 2025-11-23 23:02:10.681 [INFO][4289] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:10.738292 containerd[1525]: 2025-11-23 23:02:10.684 [INFO][4289] ipam/ipam.go 235: Affinity is confirmed and block has 
been loaded cidr=192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:10.739128 containerd[1525]: 2025-11-23 23:02:10.684 [INFO][4289] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.0/26 handle="k8s-pod-network.e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:10.739128 containerd[1525]: 2025-11-23 23:02:10.686 [INFO][4289] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4 Nov 23 23:02:10.739128 containerd[1525]: 2025-11-23 23:02:10.696 [INFO][4289] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.0/26 handle="k8s-pod-network.e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:10.739128 containerd[1525]: 2025-11-23 23:02:10.707 [INFO][4289] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.4/26] block=192.168.45.0/26 handle="k8s-pod-network.e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:10.739128 containerd[1525]: 2025-11-23 23:02:10.707 [INFO][4289] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.4/26] handle="k8s-pod-network.e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:10.739128 containerd[1525]: 2025-11-23 23:02:10.707 [INFO][4289] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:02:10.739128 containerd[1525]: 2025-11-23 23:02:10.707 [INFO][4289] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.4/26] IPv6=[] ContainerID="e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4" HandleID="k8s-pod-network.e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4" Workload="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--dp9vd-eth0" Nov 23 23:02:10.739415 containerd[1525]: 2025-11-23 23:02:10.711 [INFO][4276] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4" Namespace="kube-system" Pod="coredns-674b8bbfcf-dp9vd" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--dp9vd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--dp9vd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4f684462-e515-4517-a943-3140e39f12d8", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-9-52b78fad11", ContainerID:"", Pod:"coredns-674b8bbfcf-dp9vd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali71aa985ffd3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:10.739415 containerd[1525]: 2025-11-23 23:02:10.711 [INFO][4276] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.4/32] ContainerID="e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4" Namespace="kube-system" Pod="coredns-674b8bbfcf-dp9vd" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--dp9vd-eth0" Nov 23 23:02:10.739415 containerd[1525]: 2025-11-23 23:02:10.711 [INFO][4276] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71aa985ffd3 ContainerID="e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4" Namespace="kube-system" Pod="coredns-674b8bbfcf-dp9vd" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--dp9vd-eth0" Nov 23 23:02:10.739415 containerd[1525]: 2025-11-23 23:02:10.716 [INFO][4276] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4" Namespace="kube-system" Pod="coredns-674b8bbfcf-dp9vd" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--dp9vd-eth0" Nov 23 23:02:10.739415 containerd[1525]: 2025-11-23 23:02:10.719 [INFO][4276] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4" Namespace="kube-system" Pod="coredns-674b8bbfcf-dp9vd" 
WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--dp9vd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--dp9vd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4f684462-e515-4517-a943-3140e39f12d8", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-9-52b78fad11", ContainerID:"e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4", Pod:"coredns-674b8bbfcf-dp9vd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71aa985ffd3", MAC:"36:04:88:f0:1d:cf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:10.739415 
containerd[1525]: 2025-11-23 23:02:10.733 [INFO][4276] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4" Namespace="kube-system" Pod="coredns-674b8bbfcf-dp9vd" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--dp9vd-eth0" Nov 23 23:02:10.777077 kubelet[2782]: E1123 23:02:10.777030 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qzvtm" podUID="147a3fcd-da80-4b14-916a-786fd7363b2a" Nov 23 23:02:10.781023 kubelet[2782]: E1123 23:02:10.780984 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" podUID="319eb40d-b16d-4daa-b6ab-a4e6de765a83" Nov 23 23:02:10.789580 containerd[1525]: time="2025-11-23T23:02:10.789530061Z" level=info msg="connecting to shim e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4" address="unix:///run/containerd/s/e8c209b936b66cd9ff4c2257cffbb5c277ce6e878fcd65eb3fdaa0e636de0004" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:10.834611 systemd[1]: Started 
cri-containerd-e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4.scope - libcontainer container e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4. Nov 23 23:02:10.889071 containerd[1525]: time="2025-11-23T23:02:10.889009665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dp9vd,Uid:4f684462-e515-4517-a943-3140e39f12d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4\"" Nov 23 23:02:10.897315 containerd[1525]: time="2025-11-23T23:02:10.897265352Z" level=info msg="CreateContainer within sandbox \"e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:02:10.908901 containerd[1525]: time="2025-11-23T23:02:10.908011881Z" level=info msg="Container 302b85819e0472b7a2a4cf343c5cc51358b1967f86463f070d94b00f884a08ff: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:02:10.919195 containerd[1525]: time="2025-11-23T23:02:10.919153771Z" level=info msg="CreateContainer within sandbox \"e9f346ea154b7e18cc17b185afe1d58ad05d6e2ad07f2dd4f531c3a5029bcba4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"302b85819e0472b7a2a4cf343c5cc51358b1967f86463f070d94b00f884a08ff\"" Nov 23 23:02:10.920107 containerd[1525]: time="2025-11-23T23:02:10.920081052Z" level=info msg="StartContainer for \"302b85819e0472b7a2a4cf343c5cc51358b1967f86463f070d94b00f884a08ff\"" Nov 23 23:02:10.922719 containerd[1525]: time="2025-11-23T23:02:10.922691494Z" level=info msg="connecting to shim 302b85819e0472b7a2a4cf343c5cc51358b1967f86463f070d94b00f884a08ff" address="unix:///run/containerd/s/e8c209b936b66cd9ff4c2257cffbb5c277ce6e878fcd65eb3fdaa0e636de0004" protocol=ttrpc version=3 Nov 23 23:02:10.945942 systemd-networkd[1413]: cali02c5c356f6a: Gained IPv6LL Nov 23 23:02:10.946688 systemd[1]: Started 
cri-containerd-302b85819e0472b7a2a4cf343c5cc51358b1967f86463f070d94b00f884a08ff.scope - libcontainer container 302b85819e0472b7a2a4cf343c5cc51358b1967f86463f070d94b00f884a08ff. Nov 23 23:02:10.993722 containerd[1525]: time="2025-11-23T23:02:10.993682274Z" level=info msg="StartContainer for \"302b85819e0472b7a2a4cf343c5cc51358b1967f86463f070d94b00f884a08ff\" returns successfully" Nov 23 23:02:11.561850 containerd[1525]: time="2025-11-23T23:02:11.561776592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74998f44b6-zvwmg,Uid:c7c9f1f0-a20f-4cd1-87de-e2a910e5566a,Namespace:calico-system,Attempt:0,}" Nov 23 23:02:11.711726 systemd-networkd[1413]: calicbd1674381b: Link UP Nov 23 23:02:11.712218 systemd-networkd[1413]: calicbd1674381b: Gained carrier Nov 23 23:02:11.731932 containerd[1525]: 2025-11-23 23:02:11.595 [INFO][4408] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:02:11.731932 containerd[1525]: 2025-11-23 23:02:11.617 [INFO][4408] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--1--9--52b78fad11-k8s-calico--kube--controllers--74998f44b6--zvwmg-eth0 calico-kube-controllers-74998f44b6- calico-system c7c9f1f0-a20f-4cd1-87de-e2a910e5566a 840 0 2025-11-23 23:01:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74998f44b6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-1-9-52b78fad11 calico-kube-controllers-74998f44b6-zvwmg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicbd1674381b [] [] }} ContainerID="1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113" Namespace="calico-system" Pod="calico-kube-controllers-74998f44b6-zvwmg" 
WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--kube--controllers--74998f44b6--zvwmg-" Nov 23 23:02:11.731932 containerd[1525]: 2025-11-23 23:02:11.617 [INFO][4408] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113" Namespace="calico-system" Pod="calico-kube-controllers-74998f44b6-zvwmg" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--kube--controllers--74998f44b6--zvwmg-eth0" Nov 23 23:02:11.731932 containerd[1525]: 2025-11-23 23:02:11.647 [INFO][4421] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113" HandleID="k8s-pod-network.1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113" Workload="ci--4459--2--1--9--52b78fad11-k8s-calico--kube--controllers--74998f44b6--zvwmg-eth0" Nov 23 23:02:11.731932 containerd[1525]: 2025-11-23 23:02:11.648 [INFO][4421] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113" HandleID="k8s-pod-network.1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113" Workload="ci--4459--2--1--9--52b78fad11-k8s-calico--kube--controllers--74998f44b6--zvwmg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-1-9-52b78fad11", "pod":"calico-kube-controllers-74998f44b6-zvwmg", "timestamp":"2025-11-23 23:02:11.647781454 +0000 UTC"}, Hostname:"ci-4459-2-1-9-52b78fad11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:02:11.731932 containerd[1525]: 2025-11-23 23:02:11.648 [INFO][4421] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 23 23:02:11.731932 containerd[1525]: 2025-11-23 23:02:11.648 [INFO][4421] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:02:11.731932 containerd[1525]: 2025-11-23 23:02:11.648 [INFO][4421] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-1-9-52b78fad11' Nov 23 23:02:11.731932 containerd[1525]: 2025-11-23 23:02:11.659 [INFO][4421] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:11.731932 containerd[1525]: 2025-11-23 23:02:11.666 [INFO][4421] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:11.731932 containerd[1525]: 2025-11-23 23:02:11.673 [INFO][4421] ipam/ipam.go 511: Trying affinity for 192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:11.731932 containerd[1525]: 2025-11-23 23:02:11.680 [INFO][4421] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:11.731932 containerd[1525]: 2025-11-23 23:02:11.684 [INFO][4421] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:11.731932 containerd[1525]: 2025-11-23 23:02:11.684 [INFO][4421] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.0/26 handle="k8s-pod-network.1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:11.731932 containerd[1525]: 2025-11-23 23:02:11.687 [INFO][4421] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113 Nov 23 23:02:11.731932 containerd[1525]: 2025-11-23 23:02:11.694 [INFO][4421] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.0/26 handle="k8s-pod-network.1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113" 
host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:11.731932 containerd[1525]: 2025-11-23 23:02:11.706 [INFO][4421] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.5/26] block=192.168.45.0/26 handle="k8s-pod-network.1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:11.731932 containerd[1525]: 2025-11-23 23:02:11.706 [INFO][4421] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.5/26] handle="k8s-pod-network.1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:11.731932 containerd[1525]: 2025-11-23 23:02:11.707 [INFO][4421] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:02:11.731932 containerd[1525]: 2025-11-23 23:02:11.707 [INFO][4421] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.5/26] IPv6=[] ContainerID="1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113" HandleID="k8s-pod-network.1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113" Workload="ci--4459--2--1--9--52b78fad11-k8s-calico--kube--controllers--74998f44b6--zvwmg-eth0" Nov 23 23:02:11.732973 containerd[1525]: 2025-11-23 23:02:11.709 [INFO][4408] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113" Namespace="calico-system" Pod="calico-kube-controllers-74998f44b6-zvwmg" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--kube--controllers--74998f44b6--zvwmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--9--52b78fad11-k8s-calico--kube--controllers--74998f44b6--zvwmg-eth0", GenerateName:"calico-kube-controllers-74998f44b6-", Namespace:"calico-system", SelfLink:"", UID:"c7c9f1f0-a20f-4cd1-87de-e2a910e5566a", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 51, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74998f44b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-9-52b78fad11", ContainerID:"", Pod:"calico-kube-controllers-74998f44b6-zvwmg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicbd1674381b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:11.732973 containerd[1525]: 2025-11-23 23:02:11.709 [INFO][4408] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.5/32] ContainerID="1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113" Namespace="calico-system" Pod="calico-kube-controllers-74998f44b6-zvwmg" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--kube--controllers--74998f44b6--zvwmg-eth0" Nov 23 23:02:11.732973 containerd[1525]: 2025-11-23 23:02:11.709 [INFO][4408] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicbd1674381b ContainerID="1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113" Namespace="calico-system" Pod="calico-kube-controllers-74998f44b6-zvwmg" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--kube--controllers--74998f44b6--zvwmg-eth0" Nov 23 23:02:11.732973 containerd[1525]: 2025-11-23 23:02:11.712 [INFO][4408] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113" Namespace="calico-system" Pod="calico-kube-controllers-74998f44b6-zvwmg" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--kube--controllers--74998f44b6--zvwmg-eth0" Nov 23 23:02:11.732973 containerd[1525]: 2025-11-23 23:02:11.712 [INFO][4408] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113" Namespace="calico-system" Pod="calico-kube-controllers-74998f44b6-zvwmg" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--kube--controllers--74998f44b6--zvwmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--9--52b78fad11-k8s-calico--kube--controllers--74998f44b6--zvwmg-eth0", GenerateName:"calico-kube-controllers-74998f44b6-", Namespace:"calico-system", SelfLink:"", UID:"c7c9f1f0-a20f-4cd1-87de-e2a910e5566a", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74998f44b6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-9-52b78fad11", ContainerID:"1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113", Pod:"calico-kube-controllers-74998f44b6-zvwmg", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicbd1674381b", MAC:"96:2f:b1:80:01:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:11.732973 containerd[1525]: 2025-11-23 23:02:11.729 [INFO][4408] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113" Namespace="calico-system" Pod="calico-kube-controllers-74998f44b6-zvwmg" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--kube--controllers--74998f44b6--zvwmg-eth0" Nov 23 23:02:11.760683 containerd[1525]: time="2025-11-23T23:02:11.760494318Z" level=info msg="connecting to shim 1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113" address="unix:///run/containerd/s/c680e9f45509669918ab6b5d11da73cd6c93d156eb3583d89095e8cca67c21f9" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:11.792439 kubelet[2782]: E1123 23:02:11.792393 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" podUID="319eb40d-b16d-4daa-b6ab-a4e6de765a83" Nov 23 23:02:11.794025 kubelet[2782]: E1123 23:02:11.793619 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qzvtm" podUID="147a3fcd-da80-4b14-916a-786fd7363b2a" Nov 23 23:02:11.800587 systemd[1]: Started cri-containerd-1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113.scope - libcontainer container 1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113. Nov 23 23:02:11.862894 kubelet[2782]: I1123 23:02:11.861640 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dp9vd" podStartSLOduration=41.861623346 podStartE2EDuration="41.861623346s" podCreationTimestamp="2025-11-23 23:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:02:11.845531617 +0000 UTC m=+48.427937401" watchObservedRunningTime="2025-11-23 23:02:11.861623346 +0000 UTC m=+48.444029090" Nov 23 23:02:11.906386 systemd-networkd[1413]: caliac2f06090f5: Gained IPv6LL Nov 23 23:02:11.922754 containerd[1525]: time="2025-11-23T23:02:11.922609692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74998f44b6-zvwmg,Uid:c7c9f1f0-a20f-4cd1-87de-e2a910e5566a,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e377ba9e3d86a59afc931c8a2b9a462698afa4748a645713fbc5f3a2df26113\"" Nov 23 23:02:11.928123 containerd[1525]: time="2025-11-23T23:02:11.926552744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:02:11.982316 kubelet[2782]: I1123 23:02:11.982268 2782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 23:02:12.249559 containerd[1525]: time="2025-11-23T23:02:12.249495941Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:12.251237 containerd[1525]: time="2025-11-23T23:02:12.251141709Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:02:12.252432 containerd[1525]: time="2025-11-23T23:02:12.252382636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:02:12.253803 kubelet[2782]: E1123 23:02:12.252713 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:02:12.253803 kubelet[2782]: E1123 23:02:12.252768 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:02:12.254390 kubelet[2782]: E1123 23:02:12.254277 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64ddx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74998f44b6-zvwmg_calico-system(c7c9f1f0-a20f-4cd1-87de-e2a910e5566a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:12.255751 kubelet[2782]: E1123 23:02:12.255697 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" podUID="c7c9f1f0-a20f-4cd1-87de-e2a910e5566a" Nov 23 23:02:12.482602 systemd-networkd[1413]: cali71aa985ffd3: Gained IPv6LL Nov 23 23:02:12.562246 containerd[1525]: time="2025-11-23T23:02:12.562135161Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-th5hd,Uid:64382cfa-ecd0-42e3-ae79-135db5ecbed0,Namespace:kube-system,Attempt:0,}" Nov 23 23:02:12.562790 containerd[1525]: time="2025-11-23T23:02:12.562757164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnfrc,Uid:16769e22-23bd-4950-9cc0-72958bdfa903,Namespace:calico-system,Attempt:0,}" Nov 23 23:02:12.816970 kubelet[2782]: E1123 23:02:12.816679 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" podUID="c7c9f1f0-a20f-4cd1-87de-e2a910e5566a" Nov 23 23:02:12.822711 systemd-networkd[1413]: calice7c340aaef: Link UP Nov 23 23:02:12.823252 systemd-networkd[1413]: calice7c340aaef: Gained carrier Nov 23 23:02:12.857001 containerd[1525]: 2025-11-23 23:02:12.647 [INFO][4520] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--1--9--52b78fad11-k8s-csi--node--driver--jnfrc-eth0 csi-node-driver- calico-system 16769e22-23bd-4950-9cc0-72958bdfa903 738 0 2025-11-23 23:01:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-1-9-52b78fad11 csi-node-driver-jnfrc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calice7c340aaef [] [] }} 
ContainerID="274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3" Namespace="calico-system" Pod="csi-node-driver-jnfrc" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-csi--node--driver--jnfrc-" Nov 23 23:02:12.857001 containerd[1525]: 2025-11-23 23:02:12.648 [INFO][4520] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3" Namespace="calico-system" Pod="csi-node-driver-jnfrc" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-csi--node--driver--jnfrc-eth0" Nov 23 23:02:12.857001 containerd[1525]: 2025-11-23 23:02:12.704 [INFO][4544] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3" HandleID="k8s-pod-network.274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3" Workload="ci--4459--2--1--9--52b78fad11-k8s-csi--node--driver--jnfrc-eth0" Nov 23 23:02:12.857001 containerd[1525]: 2025-11-23 23:02:12.704 [INFO][4544] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3" HandleID="k8s-pod-network.274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3" Workload="ci--4459--2--1--9--52b78fad11-k8s-csi--node--driver--jnfrc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c1020), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-1-9-52b78fad11", "pod":"csi-node-driver-jnfrc", "timestamp":"2025-11-23 23:02:12.704482619 +0000 UTC"}, Hostname:"ci-4459-2-1-9-52b78fad11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:02:12.857001 containerd[1525]: 2025-11-23 23:02:12.704 [INFO][4544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 23 23:02:12.857001 containerd[1525]: 2025-11-23 23:02:12.704 [INFO][4544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:02:12.857001 containerd[1525]: 2025-11-23 23:02:12.704 [INFO][4544] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-1-9-52b78fad11' Nov 23 23:02:12.857001 containerd[1525]: 2025-11-23 23:02:12.724 [INFO][4544] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:12.857001 containerd[1525]: 2025-11-23 23:02:12.735 [INFO][4544] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:12.857001 containerd[1525]: 2025-11-23 23:02:12.744 [INFO][4544] ipam/ipam.go 511: Trying affinity for 192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:12.857001 containerd[1525]: 2025-11-23 23:02:12.748 [INFO][4544] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:12.857001 containerd[1525]: 2025-11-23 23:02:12.755 [INFO][4544] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:12.857001 containerd[1525]: 2025-11-23 23:02:12.755 [INFO][4544] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.0/26 handle="k8s-pod-network.274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:12.857001 containerd[1525]: 2025-11-23 23:02:12.760 [INFO][4544] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3 Nov 23 23:02:12.857001 containerd[1525]: 2025-11-23 23:02:12.766 [INFO][4544] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.0/26 handle="k8s-pod-network.274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3" 
host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:12.857001 containerd[1525]: 2025-11-23 23:02:12.777 [INFO][4544] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.45.6/26] block=192.168.45.0/26 handle="k8s-pod-network.274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:12.857001 containerd[1525]: 2025-11-23 23:02:12.777 [INFO][4544] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.6/26] handle="k8s-pod-network.274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:12.857001 containerd[1525]: 2025-11-23 23:02:12.778 [INFO][4544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:02:12.857001 containerd[1525]: 2025-11-23 23:02:12.778 [INFO][4544] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.6/26] IPv6=[] ContainerID="274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3" HandleID="k8s-pod-network.274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3" Workload="ci--4459--2--1--9--52b78fad11-k8s-csi--node--driver--jnfrc-eth0" Nov 23 23:02:12.858881 containerd[1525]: 2025-11-23 23:02:12.782 [INFO][4520] cni-plugin/k8s.go 418: Populated endpoint ContainerID="274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3" Namespace="calico-system" Pod="csi-node-driver-jnfrc" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-csi--node--driver--jnfrc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--9--52b78fad11-k8s-csi--node--driver--jnfrc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"16769e22-23bd-4950-9cc0-72958bdfa903", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-9-52b78fad11", ContainerID:"", Pod:"csi-node-driver-jnfrc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calice7c340aaef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:12.858881 containerd[1525]: 2025-11-23 23:02:12.782 [INFO][4520] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.6/32] ContainerID="274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3" Namespace="calico-system" Pod="csi-node-driver-jnfrc" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-csi--node--driver--jnfrc-eth0" Nov 23 23:02:12.858881 containerd[1525]: 2025-11-23 23:02:12.782 [INFO][4520] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice7c340aaef ContainerID="274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3" Namespace="calico-system" Pod="csi-node-driver-jnfrc" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-csi--node--driver--jnfrc-eth0" Nov 23 23:02:12.858881 containerd[1525]: 2025-11-23 23:02:12.824 [INFO][4520] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3" Namespace="calico-system" 
Pod="csi-node-driver-jnfrc" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-csi--node--driver--jnfrc-eth0" Nov 23 23:02:12.858881 containerd[1525]: 2025-11-23 23:02:12.826 [INFO][4520] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3" Namespace="calico-system" Pod="csi-node-driver-jnfrc" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-csi--node--driver--jnfrc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--9--52b78fad11-k8s-csi--node--driver--jnfrc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"16769e22-23bd-4950-9cc0-72958bdfa903", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-9-52b78fad11", ContainerID:"274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3", Pod:"csi-node-driver-jnfrc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.45.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calice7c340aaef", MAC:"22:07:a6:55:20:83", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:12.858881 containerd[1525]: 2025-11-23 23:02:12.850 [INFO][4520] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3" Namespace="calico-system" Pod="csi-node-driver-jnfrc" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-csi--node--driver--jnfrc-eth0" Nov 23 23:02:12.916552 containerd[1525]: time="2025-11-23T23:02:12.916497838Z" level=info msg="connecting to shim 274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3" address="unix:///run/containerd/s/2525ff33ee8c780376138b2b7179112f7129e8f43fa5f81ce7f1298d71398312" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:12.931839 systemd-networkd[1413]: cali2cd85d065a1: Link UP Nov 23 23:02:12.936014 systemd-networkd[1413]: cali2cd85d065a1: Gained carrier Nov 23 23:02:12.990316 containerd[1525]: 2025-11-23 23:02:12.687 [INFO][4519] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--th5hd-eth0 coredns-674b8bbfcf- kube-system 64382cfa-ecd0-42e3-ae79-135db5ecbed0 827 0 2025-11-23 23:01:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-1-9-52b78fad11 coredns-674b8bbfcf-th5hd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2cd85d065a1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2" Namespace="kube-system" Pod="coredns-674b8bbfcf-th5hd" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--th5hd-" Nov 23 23:02:12.990316 containerd[1525]: 2025-11-23 23:02:12.690 [INFO][4519] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2" Namespace="kube-system" Pod="coredns-674b8bbfcf-th5hd" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--th5hd-eth0" Nov 23 23:02:12.990316 containerd[1525]: 2025-11-23 23:02:12.768 [INFO][4553] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2" HandleID="k8s-pod-network.10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2" Workload="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--th5hd-eth0" Nov 23 23:02:12.990316 containerd[1525]: 2025-11-23 23:02:12.768 [INFO][4553] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2" HandleID="k8s-pod-network.10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2" Workload="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--th5hd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b780), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-1-9-52b78fad11", "pod":"coredns-674b8bbfcf-th5hd", "timestamp":"2025-11-23 23:02:12.765324374 +0000 UTC"}, Hostname:"ci-4459-2-1-9-52b78fad11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:02:12.990316 containerd[1525]: 2025-11-23 23:02:12.768 [INFO][4553] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:02:12.990316 containerd[1525]: 2025-11-23 23:02:12.777 [INFO][4553] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:02:12.990316 containerd[1525]: 2025-11-23 23:02:12.779 [INFO][4553] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-1-9-52b78fad11' Nov 23 23:02:12.990316 containerd[1525]: 2025-11-23 23:02:12.826 [INFO][4553] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:12.990316 containerd[1525]: 2025-11-23 23:02:12.856 [INFO][4553] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:12.990316 containerd[1525]: 2025-11-23 23:02:12.867 [INFO][4553] ipam/ipam.go 511: Trying affinity for 192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:12.990316 containerd[1525]: 2025-11-23 23:02:12.871 [INFO][4553] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:12.990316 containerd[1525]: 2025-11-23 23:02:12.875 [INFO][4553] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:12.990316 containerd[1525]: 2025-11-23 23:02:12.876 [INFO][4553] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.0/26 handle="k8s-pod-network.10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:12.990316 containerd[1525]: 2025-11-23 23:02:12.880 [INFO][4553] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2 Nov 23 23:02:12.990316 containerd[1525]: 2025-11-23 23:02:12.891 [INFO][4553] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.0/26 handle="k8s-pod-network.10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:12.990316 containerd[1525]: 2025-11-23 23:02:12.908 [INFO][4553] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.45.7/26] block=192.168.45.0/26 handle="k8s-pod-network.10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:12.990316 containerd[1525]: 2025-11-23 23:02:12.908 [INFO][4553] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.7/26] handle="k8s-pod-network.10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:12.990316 containerd[1525]: 2025-11-23 23:02:12.908 [INFO][4553] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:02:12.990316 containerd[1525]: 2025-11-23 23:02:12.908 [INFO][4553] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.7/26] IPv6=[] ContainerID="10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2" HandleID="k8s-pod-network.10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2" Workload="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--th5hd-eth0" Nov 23 23:02:12.993288 containerd[1525]: 2025-11-23 23:02:12.918 [INFO][4519] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2" Namespace="kube-system" Pod="coredns-674b8bbfcf-th5hd" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--th5hd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--th5hd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"64382cfa-ecd0-42e3-ae79-135db5ecbed0", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-9-52b78fad11", ContainerID:"", Pod:"coredns-674b8bbfcf-th5hd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2cd85d065a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:12.993288 containerd[1525]: 2025-11-23 23:02:12.919 [INFO][4519] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.7/32] ContainerID="10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2" Namespace="kube-system" Pod="coredns-674b8bbfcf-th5hd" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--th5hd-eth0" Nov 23 23:02:12.993288 containerd[1525]: 2025-11-23 23:02:12.919 [INFO][4519] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2cd85d065a1 ContainerID="10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2" Namespace="kube-system" Pod="coredns-674b8bbfcf-th5hd" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--th5hd-eth0" Nov 23 23:02:12.993288 containerd[1525]: 2025-11-23 23:02:12.937 [INFO][4519] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2" Namespace="kube-system" Pod="coredns-674b8bbfcf-th5hd" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--th5hd-eth0" Nov 23 23:02:12.993288 containerd[1525]: 2025-11-23 23:02:12.942 [INFO][4519] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2" Namespace="kube-system" Pod="coredns-674b8bbfcf-th5hd" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--th5hd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--th5hd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"64382cfa-ecd0-42e3-ae79-135db5ecbed0", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-9-52b78fad11", ContainerID:"10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2", Pod:"coredns-674b8bbfcf-th5hd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2cd85d065a1", 
MAC:"4a:50:ce:0b:f5:4d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:12.993288 containerd[1525]: 2025-11-23 23:02:12.967 [INFO][4519] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2" Namespace="kube-system" Pod="coredns-674b8bbfcf-th5hd" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-coredns--674b8bbfcf--th5hd-eth0" Nov 23 23:02:13.062884 systemd[1]: Started cri-containerd-274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3.scope - libcontainer container 274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3. Nov 23 23:02:13.074468 containerd[1525]: time="2025-11-23T23:02:13.074055287Z" level=info msg="connecting to shim 10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2" address="unix:///run/containerd/s/a47677cbe361130fc7d9efef2ccce3d71b19df14aa053ee073401f84ffc303e3" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:13.172728 systemd[1]: Started cri-containerd-10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2.scope - libcontainer container 10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2. 
Nov 23 23:02:13.249532 systemd-networkd[1413]: calicbd1674381b: Gained IPv6LL Nov 23 23:02:13.288491 containerd[1525]: time="2025-11-23T23:02:13.288273440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnfrc,Uid:16769e22-23bd-4950-9cc0-72958bdfa903,Namespace:calico-system,Attempt:0,} returns sandbox id \"274f5e2a35d6b0a54984b0625056d385744df77520f5fb57dc1dc5ec1d2f24b3\"" Nov 23 23:02:13.294483 containerd[1525]: time="2025-11-23T23:02:13.294440524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-th5hd,Uid:64382cfa-ecd0-42e3-ae79-135db5ecbed0,Namespace:kube-system,Attempt:0,} returns sandbox id \"10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2\"" Nov 23 23:02:13.294863 containerd[1525]: time="2025-11-23T23:02:13.294765527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:02:13.304676 containerd[1525]: time="2025-11-23T23:02:13.304635718Z" level=info msg="CreateContainer within sandbox \"10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:02:13.333557 containerd[1525]: time="2025-11-23T23:02:13.333050604Z" level=info msg="Container 491ca3199643cd77152702336f753aa39abaa810b7e2f89142dce161136e6ab4: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:02:13.341765 containerd[1525]: time="2025-11-23T23:02:13.341607226Z" level=info msg="CreateContainer within sandbox \"10fdd29ef4a0520f4b7315a80b7e03ac1e89b5c64f78e9d228a686059f2e11c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"491ca3199643cd77152702336f753aa39abaa810b7e2f89142dce161136e6ab4\"" Nov 23 23:02:13.343177 containerd[1525]: time="2025-11-23T23:02:13.343137037Z" level=info msg="StartContainer for \"491ca3199643cd77152702336f753aa39abaa810b7e2f89142dce161136e6ab4\"" Nov 23 23:02:13.344849 containerd[1525]: time="2025-11-23T23:02:13.344685689Z" level=info msg="connecting to shim 
491ca3199643cd77152702336f753aa39abaa810b7e2f89142dce161136e6ab4" address="unix:///run/containerd/s/a47677cbe361130fc7d9efef2ccce3d71b19df14aa053ee073401f84ffc303e3" protocol=ttrpc version=3 Nov 23 23:02:13.374577 systemd[1]: Started cri-containerd-491ca3199643cd77152702336f753aa39abaa810b7e2f89142dce161136e6ab4.scope - libcontainer container 491ca3199643cd77152702336f753aa39abaa810b7e2f89142dce161136e6ab4. Nov 23 23:02:13.423905 containerd[1525]: time="2025-11-23T23:02:13.423831062Z" level=info msg="StartContainer for \"491ca3199643cd77152702336f753aa39abaa810b7e2f89142dce161136e6ab4\" returns successfully" Nov 23 23:02:13.554488 systemd-networkd[1413]: vxlan.calico: Link UP Nov 23 23:02:13.554499 systemd-networkd[1413]: vxlan.calico: Gained carrier Nov 23 23:02:13.567885 containerd[1525]: time="2025-11-23T23:02:13.567720906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747559d9d9-cwq4m,Uid:5a8e0f96-f5fe-436e-9782-031ed12b446f,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:02:13.569646 containerd[1525]: time="2025-11-23T23:02:13.569232237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747559d9d9-fscb7,Uid:8d85c999-e9d4-4632-99f5-fa0f1c92756a,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:02:13.637583 containerd[1525]: time="2025-11-23T23:02:13.637318210Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:13.641898 containerd[1525]: time="2025-11-23T23:02:13.641845483Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:02:13.642123 containerd[1525]: time="2025-11-23T23:02:13.642105485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 
23:02:13.642771 kubelet[2782]: E1123 23:02:13.642302 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:02:13.642771 kubelet[2782]: E1123 23:02:13.642383 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:02:13.642771 kubelet[2782]: E1123 23:02:13.642503 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gf4f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnfrc_calico-system(16769e22-23bd-4950-9cc0-72958bdfa903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:13.647465 containerd[1525]: time="2025-11-23T23:02:13.647413923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:02:13.834024 kubelet[2782]: E1123 23:02:13.833894 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" podUID="c7c9f1f0-a20f-4cd1-87de-e2a910e5566a" Nov 23 23:02:13.870315 kubelet[2782]: I1123 23:02:13.869079 2782 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-th5hd" podStartSLOduration=43.86905969 podStartE2EDuration="43.86905969s" podCreationTimestamp="2025-11-23 23:01:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:02:13.847465694 +0000 UTC m=+50.429871478" watchObservedRunningTime="2025-11-23 23:02:13.86905969 +0000 UTC m=+50.451465474" Nov 23 23:02:13.887179 systemd-networkd[1413]: calif0dbf699efd: Link UP Nov 23 23:02:13.890449 systemd-networkd[1413]: calif0dbf699efd: Gained carrier Nov 23 23:02:13.922941 containerd[1525]: 2025-11-23 23:02:13.701 [INFO][4747] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--fscb7-eth0 calico-apiserver-747559d9d9- calico-apiserver 8d85c999-e9d4-4632-99f5-fa0f1c92756a 836 0 2025-11-23 23:01:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:747559d9d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-1-9-52b78fad11 calico-apiserver-747559d9d9-fscb7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif0dbf699efd [] [] }} ContainerID="c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932" Namespace="calico-apiserver" Pod="calico-apiserver-747559d9d9-fscb7" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--fscb7-" Nov 23 23:02:13.922941 containerd[1525]: 2025-11-23 23:02:13.702 [INFO][4747] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932" Namespace="calico-apiserver" Pod="calico-apiserver-747559d9d9-fscb7" 
WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--fscb7-eth0" Nov 23 23:02:13.922941 containerd[1525]: 2025-11-23 23:02:13.775 [INFO][4780] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932" HandleID="k8s-pod-network.c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932" Workload="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--fscb7-eth0" Nov 23 23:02:13.922941 containerd[1525]: 2025-11-23 23:02:13.779 [INFO][4780] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932" HandleID="k8s-pod-network.c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932" Workload="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--fscb7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d36c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-1-9-52b78fad11", "pod":"calico-apiserver-747559d9d9-fscb7", "timestamp":"2025-11-23 23:02:13.775848855 +0000 UTC"}, Hostname:"ci-4459-2-1-9-52b78fad11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:02:13.922941 containerd[1525]: 2025-11-23 23:02:13.780 [INFO][4780] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:02:13.922941 containerd[1525]: 2025-11-23 23:02:13.780 [INFO][4780] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:02:13.922941 containerd[1525]: 2025-11-23 23:02:13.782 [INFO][4780] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-1-9-52b78fad11' Nov 23 23:02:13.922941 containerd[1525]: 2025-11-23 23:02:13.809 [INFO][4780] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:13.922941 containerd[1525]: 2025-11-23 23:02:13.819 [INFO][4780] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:13.922941 containerd[1525]: 2025-11-23 23:02:13.831 [INFO][4780] ipam/ipam.go 511: Trying affinity for 192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:13.922941 containerd[1525]: 2025-11-23 23:02:13.838 [INFO][4780] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:13.922941 containerd[1525]: 2025-11-23 23:02:13.844 [INFO][4780] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:13.922941 containerd[1525]: 2025-11-23 23:02:13.844 [INFO][4780] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.0/26 handle="k8s-pod-network.c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:13.922941 containerd[1525]: 2025-11-23 23:02:13.853 [INFO][4780] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932 Nov 23 23:02:13.922941 containerd[1525]: 2025-11-23 23:02:13.862 [INFO][4780] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.0/26 handle="k8s-pod-network.c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:13.922941 containerd[1525]: 2025-11-23 23:02:13.873 [INFO][4780] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.45.8/26] block=192.168.45.0/26 handle="k8s-pod-network.c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:13.922941 containerd[1525]: 2025-11-23 23:02:13.873 [INFO][4780] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.8/26] handle="k8s-pod-network.c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:13.922941 containerd[1525]: 2025-11-23 23:02:13.873 [INFO][4780] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:02:13.922941 containerd[1525]: 2025-11-23 23:02:13.874 [INFO][4780] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.8/26] IPv6=[] ContainerID="c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932" HandleID="k8s-pod-network.c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932" Workload="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--fscb7-eth0" Nov 23 23:02:13.924813 containerd[1525]: 2025-11-23 23:02:13.879 [INFO][4747] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932" Namespace="calico-apiserver" Pod="calico-apiserver-747559d9d9-fscb7" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--fscb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--fscb7-eth0", GenerateName:"calico-apiserver-747559d9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d85c999-e9d4-4632-99f5-fa0f1c92756a", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"747559d9d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-9-52b78fad11", ContainerID:"", Pod:"calico-apiserver-747559d9d9-fscb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0dbf699efd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:13.924813 containerd[1525]: 2025-11-23 23:02:13.879 [INFO][4747] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.8/32] ContainerID="c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932" Namespace="calico-apiserver" Pod="calico-apiserver-747559d9d9-fscb7" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--fscb7-eth0" Nov 23 23:02:13.924813 containerd[1525]: 2025-11-23 23:02:13.879 [INFO][4747] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0dbf699efd ContainerID="c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932" Namespace="calico-apiserver" Pod="calico-apiserver-747559d9d9-fscb7" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--fscb7-eth0" Nov 23 23:02:13.924813 containerd[1525]: 2025-11-23 23:02:13.891 [INFO][4747] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932" Namespace="calico-apiserver" Pod="calico-apiserver-747559d9d9-fscb7" 
WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--fscb7-eth0" Nov 23 23:02:13.924813 containerd[1525]: 2025-11-23 23:02:13.892 [INFO][4747] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932" Namespace="calico-apiserver" Pod="calico-apiserver-747559d9d9-fscb7" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--fscb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--fscb7-eth0", GenerateName:"calico-apiserver-747559d9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d85c999-e9d4-4632-99f5-fa0f1c92756a", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747559d9d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-9-52b78fad11", ContainerID:"c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932", Pod:"calico-apiserver-747559d9d9-fscb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0dbf699efd", MAC:"be:74:8d:d1:ec:08", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:13.924813 containerd[1525]: 2025-11-23 23:02:13.917 [INFO][4747] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932" Namespace="calico-apiserver" Pod="calico-apiserver-747559d9d9-fscb7" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--fscb7-eth0" Nov 23 23:02:13.953652 systemd-networkd[1413]: calice7c340aaef: Gained IPv6LL Nov 23 23:02:13.981470 containerd[1525]: time="2025-11-23T23:02:13.980535818Z" level=info msg="connecting to shim c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932" address="unix:///run/containerd/s/b71671f26383837ebcef1bcb961018851119b2b7b784b9d41f4356c6c7b8d555" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:13.992071 containerd[1525]: time="2025-11-23T23:02:13.991904821Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:13.995472 containerd[1525]: time="2025-11-23T23:02:13.995419206Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:02:13.995696 containerd[1525]: time="2025-11-23T23:02:13.995600728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:02:13.996772 kubelet[2782]: E1123 23:02:13.995886 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:02:13.996772 kubelet[2782]: E1123 23:02:13.995944 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:02:13.996772 kubelet[2782]: E1123 23:02:13.996058 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gf4f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminat
ionMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnfrc_calico-system(16769e22-23bd-4950-9cc0-72958bdfa903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:13.997739 kubelet[2782]: E1123 23:02:13.997447 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnfrc" podUID="16769e22-23bd-4950-9cc0-72958bdfa903" Nov 23 23:02:14.026315 systemd-networkd[1413]: cali8c909dc26bc: Link UP Nov 23 23:02:14.027146 
systemd-networkd[1413]: cali8c909dc26bc: Gained carrier Nov 23 23:02:14.044784 systemd[1]: Started cri-containerd-c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932.scope - libcontainer container c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932. Nov 23 23:02:14.061510 containerd[1525]: 2025-11-23 23:02:13.713 [INFO][4739] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--cwq4m-eth0 calico-apiserver-747559d9d9- calico-apiserver 5a8e0f96-f5fe-436e-9782-031ed12b446f 838 0 2025-11-23 23:01:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:747559d9d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-1-9-52b78fad11 calico-apiserver-747559d9d9-cwq4m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8c909dc26bc [] [] }} ContainerID="095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185" Namespace="calico-apiserver" Pod="calico-apiserver-747559d9d9-cwq4m" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--cwq4m-" Nov 23 23:02:14.061510 containerd[1525]: 2025-11-23 23:02:13.715 [INFO][4739] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185" Namespace="calico-apiserver" Pod="calico-apiserver-747559d9d9-cwq4m" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--cwq4m-eth0" Nov 23 23:02:14.061510 containerd[1525]: 2025-11-23 23:02:13.826 [INFO][4785] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185" 
HandleID="k8s-pod-network.095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185" Workload="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--cwq4m-eth0" Nov 23 23:02:14.061510 containerd[1525]: 2025-11-23 23:02:13.831 [INFO][4785] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185" HandleID="k8s-pod-network.095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185" Workload="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--cwq4m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb4e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-1-9-52b78fad11", "pod":"calico-apiserver-747559d9d9-cwq4m", "timestamp":"2025-11-23 23:02:13.826961985 +0000 UTC"}, Hostname:"ci-4459-2-1-9-52b78fad11", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:02:14.061510 containerd[1525]: 2025-11-23 23:02:13.831 [INFO][4785] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:02:14.061510 containerd[1525]: 2025-11-23 23:02:13.874 [INFO][4785] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:02:14.061510 containerd[1525]: 2025-11-23 23:02:13.874 [INFO][4785] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-1-9-52b78fad11' Nov 23 23:02:14.061510 containerd[1525]: 2025-11-23 23:02:13.919 [INFO][4785] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:14.061510 containerd[1525]: 2025-11-23 23:02:13.937 [INFO][4785] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:14.061510 containerd[1525]: 2025-11-23 23:02:13.950 [INFO][4785] ipam/ipam.go 511: Trying affinity for 192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:14.061510 containerd[1525]: 2025-11-23 23:02:13.962 [INFO][4785] ipam/ipam.go 158: Attempting to load block cidr=192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:14.061510 containerd[1525]: 2025-11-23 23:02:13.971 [INFO][4785] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.45.0/26 host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:14.061510 containerd[1525]: 2025-11-23 23:02:13.972 [INFO][4785] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.45.0/26 handle="k8s-pod-network.095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:14.061510 containerd[1525]: 2025-11-23 23:02:13.976 [INFO][4785] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185 Nov 23 23:02:14.061510 containerd[1525]: 2025-11-23 23:02:13.988 [INFO][4785] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.45.0/26 handle="k8s-pod-network.095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:14.061510 containerd[1525]: 2025-11-23 23:02:14.014 [INFO][4785] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.45.9/26] block=192.168.45.0/26 handle="k8s-pod-network.095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:14.061510 containerd[1525]: 2025-11-23 23:02:14.014 [INFO][4785] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.45.9/26] handle="k8s-pod-network.095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185" host="ci-4459-2-1-9-52b78fad11" Nov 23 23:02:14.061510 containerd[1525]: 2025-11-23 23:02:14.014 [INFO][4785] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:02:14.061510 containerd[1525]: 2025-11-23 23:02:14.014 [INFO][4785] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.45.9/26] IPv6=[] ContainerID="095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185" HandleID="k8s-pod-network.095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185" Workload="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--cwq4m-eth0" Nov 23 23:02:14.062390 containerd[1525]: 2025-11-23 23:02:14.019 [INFO][4739] cni-plugin/k8s.go 418: Populated endpoint ContainerID="095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185" Namespace="calico-apiserver" Pod="calico-apiserver-747559d9d9-cwq4m" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--cwq4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--cwq4m-eth0", GenerateName:"calico-apiserver-747559d9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a8e0f96-f5fe-436e-9782-031ed12b446f", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"747559d9d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-9-52b78fad11", ContainerID:"", Pod:"calico-apiserver-747559d9d9-cwq4m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8c909dc26bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:14.062390 containerd[1525]: 2025-11-23 23:02:14.019 [INFO][4739] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.45.9/32] ContainerID="095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185" Namespace="calico-apiserver" Pod="calico-apiserver-747559d9d9-cwq4m" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--cwq4m-eth0" Nov 23 23:02:14.062390 containerd[1525]: 2025-11-23 23:02:14.019 [INFO][4739] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c909dc26bc ContainerID="095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185" Namespace="calico-apiserver" Pod="calico-apiserver-747559d9d9-cwq4m" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--cwq4m-eth0" Nov 23 23:02:14.062390 containerd[1525]: 2025-11-23 23:02:14.027 [INFO][4739] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185" Namespace="calico-apiserver" Pod="calico-apiserver-747559d9d9-cwq4m" 
WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--cwq4m-eth0" Nov 23 23:02:14.062390 containerd[1525]: 2025-11-23 23:02:14.032 [INFO][4739] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185" Namespace="calico-apiserver" Pod="calico-apiserver-747559d9d9-cwq4m" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--cwq4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--cwq4m-eth0", GenerateName:"calico-apiserver-747559d9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"5a8e0f96-f5fe-436e-9782-031ed12b446f", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747559d9d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-9-52b78fad11", ContainerID:"095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185", Pod:"calico-apiserver-747559d9d9-cwq4m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8c909dc26bc", MAC:"f6:7d:3b:88:ae:30", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:14.062390 containerd[1525]: 2025-11-23 23:02:14.054 [INFO][4739] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185" Namespace="calico-apiserver" Pod="calico-apiserver-747559d9d9-cwq4m" WorkloadEndpoint="ci--4459--2--1--9--52b78fad11-k8s-calico--apiserver--747559d9d9--cwq4m-eth0" Nov 23 23:02:14.093794 containerd[1525]: time="2025-11-23T23:02:14.093166701Z" level=info msg="connecting to shim 095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185" address="unix:///run/containerd/s/7c38791936d0aeb868923c460c43dbe494fb44817ff39fa08fc6f8c31e7d3ce4" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:14.127771 systemd[1]: Started cri-containerd-095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185.scope - libcontainer container 095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185. 
Nov 23 23:02:14.175561 containerd[1525]: time="2025-11-23T23:02:14.175423142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747559d9d9-fscb7,Uid:8d85c999-e9d4-4632-99f5-fa0f1c92756a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c40254389e1046a6f8bec7276982e6e095d08350068dd4254e5d9631c2dd4932\"" Nov 23 23:02:14.180516 containerd[1525]: time="2025-11-23T23:02:14.180476389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:02:14.243375 containerd[1525]: time="2025-11-23T23:02:14.243303050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747559d9d9-cwq4m,Uid:5a8e0f96-f5fe-436e-9782-031ed12b446f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"095b0fa46d563ab44a92a6abba15da8d147f318f16ef89c984c7f402028fc185\"" Nov 23 23:02:14.727146 containerd[1525]: time="2025-11-23T23:02:14.726923605Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:14.728744 containerd[1525]: time="2025-11-23T23:02:14.728627020Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:02:14.729055 containerd[1525]: time="2025-11-23T23:02:14.728919303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:14.729463 kubelet[2782]: E1123 23:02:14.729400 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 
23:02:14.729578 kubelet[2782]: E1123 23:02:14.729473 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:14.729899 kubelet[2782]: E1123 23:02:14.729764 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lc5rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-747559d9d9-fscb7_calico-apiserver(8d85c999-e9d4-4632-99f5-fa0f1c92756a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:14.730304 containerd[1525]: time="2025-11-23T23:02:14.730046593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:02:14.732124 kubelet[2782]: E1123 23:02:14.731685 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" podUID="8d85c999-e9d4-4632-99f5-fa0f1c92756a" Nov 23 23:02:14.786308 systemd-networkd[1413]: 
cali2cd85d065a1: Gained IPv6LL Nov 23 23:02:14.837141 kubelet[2782]: E1123 23:02:14.836814 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" podUID="8d85c999-e9d4-4632-99f5-fa0f1c92756a" Nov 23 23:02:14.840923 kubelet[2782]: E1123 23:02:14.840874 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnfrc" podUID="16769e22-23bd-4950-9cc0-72958bdfa903" Nov 23 23:02:15.064649 containerd[1525]: time="2025-11-23T23:02:15.064598093Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:15.066102 containerd[1525]: time="2025-11-23T23:02:15.066004109Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:02:15.066102 containerd[1525]: time="2025-11-23T23:02:15.066056190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:15.066383 kubelet[2782]: E1123 23:02:15.066278 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:15.066460 kubelet[2782]: E1123 23:02:15.066399 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:15.066813 kubelet[2782]: E1123 23:02:15.066621 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5kpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-747559d9d9-cwq4m_calico-apiserver(5a8e0f96-f5fe-436e-9782-031ed12b446f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:15.068814 kubelet[2782]: E1123 23:02:15.068756 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" podUID="5a8e0f96-f5fe-436e-9782-031ed12b446f" Nov 23 23:02:15.105695 systemd-networkd[1413]: vxlan.calico: Gained IPv6LL Nov 23 23:02:15.425816 systemd-networkd[1413]: calif0dbf699efd: Gained IPv6LL Nov 23 23:02:15.843230 kubelet[2782]: E1123 23:02:15.843051 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" podUID="8d85c999-e9d4-4632-99f5-fa0f1c92756a" Nov 23 23:02:15.845747 kubelet[2782]: E1123 23:02:15.843579 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" podUID="5a8e0f96-f5fe-436e-9782-031ed12b446f" Nov 23 23:02:16.066765 systemd-networkd[1413]: cali8c909dc26bc: Gained IPv6LL Nov 23 23:02:22.562362 containerd[1525]: time="2025-11-23T23:02:22.562299288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:02:22.903557 containerd[1525]: time="2025-11-23T23:02:22.903305790Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:22.905080 containerd[1525]: time="2025-11-23T23:02:22.904949188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:02:22.905080 containerd[1525]: time="2025-11-23T23:02:22.905035630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:22.905705 kubelet[2782]: E1123 23:02:22.905279 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:02:22.905705 kubelet[2782]: E1123 23:02:22.905373 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 
23:02:22.905705 kubelet[2782]: E1123 23:02:22.905626 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsmnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qzvtm_calico-system(147a3fcd-da80-4b14-916a-786fd7363b2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:22.908168 kubelet[2782]: E1123 23:02:22.907284 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qzvtm" podUID="147a3fcd-da80-4b14-916a-786fd7363b2a" Nov 23 23:02:24.564146 containerd[1525]: time="2025-11-23T23:02:24.563960208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:02:24.908285 containerd[1525]: time="2025-11-23T23:02:24.907886117Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Nov 23 23:02:24.911044 containerd[1525]: time="2025-11-23T23:02:24.910825554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:02:24.911044 containerd[1525]: time="2025-11-23T23:02:24.910900996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:02:24.911672 kubelet[2782]: E1123 23:02:24.911611 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:02:24.913704 kubelet[2782]: E1123 23:02:24.911945 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:02:24.913704 kubelet[2782]: E1123 23:02:24.912243 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64ddx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74998f44b6-zvwmg_calico-system(c7c9f1f0-a20f-4cd1-87de-e2a910e5566a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:24.913811 containerd[1525]: time="2025-11-23T23:02:24.913714429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:02:24.914052 kubelet[2782]: E1123 23:02:24.913994 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" podUID="c7c9f1f0-a20f-4cd1-87de-e2a910e5566a" Nov 23 23:02:25.252864 containerd[1525]: 
time="2025-11-23T23:02:25.252693644Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:25.254430 containerd[1525]: time="2025-11-23T23:02:25.254358370Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:02:25.254598 containerd[1525]: time="2025-11-23T23:02:25.254470133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:02:25.254938 kubelet[2782]: E1123 23:02:25.254865 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:02:25.255078 kubelet[2782]: E1123 23:02:25.255058 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:02:25.255364 kubelet[2782]: E1123 23:02:25.255298 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8713a7dea7e0400190dcc0e99de68523,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k5xzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78fbf9698b-q5ccl_calico-system(186dba02-b9bc-46ba-b1f6-48d2be5bbd68): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:25.260454 containerd[1525]: time="2025-11-23T23:02:25.260134649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 
23:02:25.601859 containerd[1525]: time="2025-11-23T23:02:25.601418971Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:25.604832 containerd[1525]: time="2025-11-23T23:02:25.604646939Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:02:25.604832 containerd[1525]: time="2025-11-23T23:02:25.604782543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:02:25.605269 kubelet[2782]: E1123 23:02:25.605215 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:02:25.605350 kubelet[2782]: E1123 23:02:25.605281 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:02:25.605632 kubelet[2782]: E1123 23:02:25.605574 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5xzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78fbf9698b-q5ccl_calico-system(186dba02-b9bc-46ba-b1f6-48d2be5bbd68): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:25.606099 containerd[1525]: time="2025-11-23T23:02:25.606047498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:02:25.607054 kubelet[2782]: E1123 23:02:25.606977 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78fbf9698b-q5ccl" podUID="186dba02-b9bc-46ba-b1f6-48d2be5bbd68" Nov 23 23:02:25.936934 containerd[1525]: time="2025-11-23T23:02:25.936594884Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:25.938174 containerd[1525]: time="2025-11-23T23:02:25.938055204Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:02:25.938310 containerd[1525]: time="2025-11-23T23:02:25.938165727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:02:25.938451 kubelet[2782]: E1123 23:02:25.938386 
2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:02:25.938783 kubelet[2782]: E1123 23:02:25.938466 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:02:25.938783 kubelet[2782]: E1123 23:02:25.938683 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gf4f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:
nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnfrc_calico-system(16769e22-23bd-4950-9cc0-72958bdfa903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:25.941624 containerd[1525]: time="2025-11-23T23:02:25.941485259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:02:26.284975 containerd[1525]: time="2025-11-23T23:02:26.284729022Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:26.286959 containerd[1525]: time="2025-11-23T23:02:26.286902885Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:02:26.287264 containerd[1525]: time="2025-11-23T23:02:26.287063090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:02:26.287532 
kubelet[2782]: E1123 23:02:26.287464 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:02:26.287655 kubelet[2782]: E1123 23:02:26.287634 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:02:26.287876 kubelet[2782]: E1123 23:02:26.287832 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gf4f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnfrc_calico-system(16769e22-23bd-4950-9cc0-72958bdfa903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:26.289301 kubelet[2782]: E1123 23:02:26.289223 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnfrc" podUID="16769e22-23bd-4950-9cc0-72958bdfa903" Nov 23 23:02:27.563974 containerd[1525]: time="2025-11-23T23:02:27.563558105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:02:27.914814 containerd[1525]: time="2025-11-23T23:02:27.914558879Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:27.916514 containerd[1525]: time="2025-11-23T23:02:27.916381895Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:02:27.916514 containerd[1525]: time="2025-11-23T23:02:27.916447857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:27.916719 kubelet[2782]: E1123 23:02:27.916651 2782 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:27.916719 kubelet[2782]: E1123 23:02:27.916702 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:27.917249 kubelet[2782]: E1123 23:02:27.916967 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pr9fc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bf6b75475-8n9bk_calico-apiserver(319eb40d-b16d-4daa-b6ab-a4e6de765a83): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:27.918074 containerd[1525]: time="2025-11-23T23:02:27.917943342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:02:27.919004 kubelet[2782]: E1123 23:02:27.918944 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" podUID="319eb40d-b16d-4daa-b6ab-a4e6de765a83" Nov 23 23:02:28.250394 containerd[1525]: time="2025-11-23T23:02:28.250306111Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:28.253023 containerd[1525]: time="2025-11-23T23:02:28.252637184Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:02:28.253023 containerd[1525]: time="2025-11-23T23:02:28.252801190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:28.254162 kubelet[2782]: E1123 23:02:28.253509 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:28.254162 kubelet[2782]: E1123 23:02:28.253595 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:28.254162 kubelet[2782]: E1123 23:02:28.253794 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lc5rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-747559d9d9-fscb7_calico-apiserver(8d85c999-e9d4-4632-99f5-fa0f1c92756a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:28.255033 kubelet[2782]: E1123 23:02:28.254980 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" podUID="8d85c999-e9d4-4632-99f5-fa0f1c92756a" Nov 23 23:02:30.563546 containerd[1525]: time="2025-11-23T23:02:30.563162240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:02:30.907871 containerd[1525]: 
time="2025-11-23T23:02:30.907183728Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:30.909983 containerd[1525]: time="2025-11-23T23:02:30.909870739Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:02:30.909983 containerd[1525]: time="2025-11-23T23:02:30.909960182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:30.910189 kubelet[2782]: E1123 23:02:30.910136 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:30.911070 kubelet[2782]: E1123 23:02:30.910204 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:30.911070 kubelet[2782]: E1123 23:02:30.910391 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5kpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-747559d9d9-cwq4m_calico-apiserver(5a8e0f96-f5fe-436e-9782-031ed12b446f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:30.911907 kubelet[2782]: E1123 23:02:30.911866 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" podUID="5a8e0f96-f5fe-436e-9782-031ed12b446f" Nov 23 23:02:38.562600 kubelet[2782]: E1123 23:02:38.562473 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" podUID="319eb40d-b16d-4daa-b6ab-a4e6de765a83" Nov 23 23:02:38.563861 kubelet[2782]: E1123 23:02:38.563517 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qzvtm" 
podUID="147a3fcd-da80-4b14-916a-786fd7363b2a" Nov 23 23:02:39.562760 kubelet[2782]: E1123 23:02:39.562656 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" podUID="c7c9f1f0-a20f-4cd1-87de-e2a910e5566a" Nov 23 23:02:40.565233 kubelet[2782]: E1123 23:02:40.565117 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnfrc" podUID="16769e22-23bd-4950-9cc0-72958bdfa903" Nov 23 23:02:40.566196 kubelet[2782]: E1123 23:02:40.566083 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78fbf9698b-q5ccl" podUID="186dba02-b9bc-46ba-b1f6-48d2be5bbd68" Nov 23 23:02:41.566163 kubelet[2782]: E1123 23:02:41.565669 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" podUID="5a8e0f96-f5fe-436e-9782-031ed12b446f" Nov 23 23:02:41.566595 kubelet[2782]: E1123 23:02:41.566437 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" podUID="8d85c999-e9d4-4632-99f5-fa0f1c92756a" Nov 23 23:02:49.564632 containerd[1525]: time="2025-11-23T23:02:49.564476766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:02:49.907064 containerd[1525]: time="2025-11-23T23:02:49.906931153Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:49.909500 containerd[1525]: time="2025-11-23T23:02:49.908524754Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:02:49.909642 containerd[1525]: time="2025-11-23T23:02:49.909497444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:49.910524 kubelet[2782]: E1123 23:02:49.910482 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:02:49.911388 kubelet[2782]: E1123 23:02:49.910965 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:02:49.911388 kubelet[2782]: E1123 23:02:49.911275 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsmnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qzvtm_calico-system(147a3fcd-da80-4b14-916a-786fd7363b2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:49.912940 kubelet[2782]: E1123 23:02:49.912898 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qzvtm" podUID="147a3fcd-da80-4b14-916a-786fd7363b2a" Nov 23 23:02:50.564940 containerd[1525]: time="2025-11-23T23:02:50.564255763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:02:50.905701 containerd[1525]: time="2025-11-23T23:02:50.905566750Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Nov 23 23:02:50.907149 containerd[1525]: time="2025-11-23T23:02:50.907012385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:02:50.907361 containerd[1525]: time="2025-11-23T23:02:50.907190514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:02:50.908516 kubelet[2782]: E1123 23:02:50.908472 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:02:50.908644 kubelet[2782]: E1123 23:02:50.908628 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:02:50.909260 kubelet[2782]: E1123 23:02:50.909199 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64ddx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74998f44b6-zvwmg_calico-system(c7c9f1f0-a20f-4cd1-87de-e2a910e5566a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:50.910576 kubelet[2782]: E1123 23:02:50.910523 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" podUID="c7c9f1f0-a20f-4cd1-87de-e2a910e5566a" Nov 23 23:02:51.563877 containerd[1525]: time="2025-11-23T23:02:51.563821497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:02:51.899321 containerd[1525]: 
time="2025-11-23T23:02:51.899056138Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:51.900801 containerd[1525]: time="2025-11-23T23:02:51.900736106Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:02:51.900959 containerd[1525]: time="2025-11-23T23:02:51.900765747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:51.901607 kubelet[2782]: E1123 23:02:51.901555 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:51.901701 kubelet[2782]: E1123 23:02:51.901624 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:51.901971 kubelet[2782]: E1123 23:02:51.901910 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pr9fc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bf6b75475-8n9bk_calico-apiserver(319eb40d-b16d-4daa-b6ab-a4e6de765a83): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:51.903498 kubelet[2782]: E1123 23:02:51.903420 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" podUID="319eb40d-b16d-4daa-b6ab-a4e6de765a83" Nov 23 23:02:51.903826 containerd[1525]: time="2025-11-23T23:02:51.903757223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:02:52.256162 containerd[1525]: time="2025-11-23T23:02:52.256078788Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:52.257628 containerd[1525]: time="2025-11-23T23:02:52.257562786Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:02:52.257776 containerd[1525]: time="2025-11-23T23:02:52.257597748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:02:52.257914 kubelet[2782]: E1123 23:02:52.257865 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:02:52.258195 kubelet[2782]: E1123 23:02:52.257934 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:02:52.260768 kubelet[2782]: E1123 23:02:52.260703 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gf4f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivileg
eEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnfrc_calico-system(16769e22-23bd-4950-9cc0-72958bdfa903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:52.263341 containerd[1525]: time="2025-11-23T23:02:52.263289208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:02:52.614102 containerd[1525]: time="2025-11-23T23:02:52.613953784Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:52.616566 containerd[1525]: time="2025-11-23T23:02:52.616179542Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:02:52.616566 containerd[1525]: time="2025-11-23T23:02:52.616271666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:02:52.616796 kubelet[2782]: E1123 23:02:52.616550 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:02:52.616796 kubelet[2782]: E1123 23:02:52.616626 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:02:52.617017 kubelet[2782]: E1123 23:02:52.616917 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gf4f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminat
ionMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnfrc_calico-system(16769e22-23bd-4950-9cc0-72958bdfa903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:52.618812 containerd[1525]: time="2025-11-23T23:02:52.618662113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:02:52.619230 kubelet[2782]: E1123 23:02:52.619024 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-jnfrc" podUID="16769e22-23bd-4950-9cc0-72958bdfa903" Nov 23 23:02:52.955938 containerd[1525]: time="2025-11-23T23:02:52.955083577Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:52.957373 containerd[1525]: time="2025-11-23T23:02:52.957109324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:02:52.957373 containerd[1525]: time="2025-11-23T23:02:52.957202329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:02:52.958735 kubelet[2782]: E1123 23:02:52.957607 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:02:52.958735 kubelet[2782]: E1123 23:02:52.957680 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:02:52.958735 kubelet[2782]: E1123 23:02:52.957846 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8713a7dea7e0400190dcc0e99de68523,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k5xzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78fbf9698b-q5ccl_calico-system(186dba02-b9bc-46ba-b1f6-48d2be5bbd68): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:52.961787 containerd[1525]: time="2025-11-23T23:02:52.961583840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 
23:02:53.286031 containerd[1525]: time="2025-11-23T23:02:53.285828868Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:53.287604 containerd[1525]: time="2025-11-23T23:02:53.287429073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:02:53.287604 containerd[1525]: time="2025-11-23T23:02:53.287446834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:02:53.287950 kubelet[2782]: E1123 23:02:53.287907 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:02:53.288286 kubelet[2782]: E1123 23:02:53.287968 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:02:53.288286 kubelet[2782]: E1123 23:02:53.288086 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5xzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78fbf9698b-q5ccl_calico-system(186dba02-b9bc-46ba-b1f6-48d2be5bbd68): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:53.289564 kubelet[2782]: E1123 23:02:53.289462 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78fbf9698b-q5ccl" podUID="186dba02-b9bc-46ba-b1f6-48d2be5bbd68" Nov 23 23:02:55.566519 containerd[1525]: time="2025-11-23T23:02:55.566470873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:02:55.910830 containerd[1525]: time="2025-11-23T23:02:55.910680609Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:55.913748 containerd[1525]: time="2025-11-23T23:02:55.913693093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:02:55.913851 containerd[1525]: time="2025-11-23T23:02:55.913794578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:55.914019 
kubelet[2782]: E1123 23:02:55.913980 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:55.914444 kubelet[2782]: E1123 23:02:55.914036 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:55.914444 kubelet[2782]: E1123 23:02:55.914244 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lc5rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-747559d9d9-fscb7_calico-apiserver(8d85c999-e9d4-4632-99f5-fa0f1c92756a): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:55.915512 kubelet[2782]: E1123 23:02:55.915471 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" podUID="8d85c999-e9d4-4632-99f5-fa0f1c92756a" Nov 23 23:02:55.915673 containerd[1525]: time="2025-11-23T23:02:55.915639799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:02:56.255753 containerd[1525]: time="2025-11-23T23:02:56.255706124Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:56.257205 containerd[1525]: time="2025-11-23T23:02:56.257131162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:02:56.257270 containerd[1525]: time="2025-11-23T23:02:56.257232408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:56.257696 kubelet[2782]: E1123 23:02:56.257424 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:56.257696 kubelet[2782]: E1123 23:02:56.257487 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:56.257696 kubelet[2782]: E1123 23:02:56.257628 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5kpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-747559d9d9-cwq4m_calico-apiserver(5a8e0f96-f5fe-436e-9782-031ed12b446f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:56.259718 kubelet[2782]: E1123 23:02:56.259573 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" podUID="5a8e0f96-f5fe-436e-9782-031ed12b446f" Nov 23 23:03:03.564467 kubelet[2782]: E1123 23:03:03.563852 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qzvtm" podUID="147a3fcd-da80-4b14-916a-786fd7363b2a" Nov 23 23:03:04.563971 kubelet[2782]: E1123 23:03:04.563526 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" podUID="c7c9f1f0-a20f-4cd1-87de-e2a910e5566a" Nov 23 23:03:05.574353 kubelet[2782]: E1123 23:03:05.573975 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" podUID="319eb40d-b16d-4daa-b6ab-a4e6de765a83" Nov 23 23:03:05.580694 kubelet[2782]: E1123 23:03:05.579007 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnfrc" podUID="16769e22-23bd-4950-9cc0-72958bdfa903" Nov 23 23:03:06.563039 kubelet[2782]: E1123 23:03:06.562918 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78fbf9698b-q5ccl" podUID="186dba02-b9bc-46ba-b1f6-48d2be5bbd68" Nov 23 23:03:07.563146 kubelet[2782]: E1123 23:03:07.562779 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" podUID="8d85c999-e9d4-4632-99f5-fa0f1c92756a" Nov 23 23:03:08.563390 kubelet[2782]: E1123 23:03:08.563141 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" podUID="5a8e0f96-f5fe-436e-9782-031ed12b446f" Nov 23 23:03:17.564348 kubelet[2782]: E1123 23:03:17.563836 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qzvtm" podUID="147a3fcd-da80-4b14-916a-786fd7363b2a" Nov 23 23:03:17.567586 kubelet[2782]: E1123 23:03:17.567497 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" podUID="319eb40d-b16d-4daa-b6ab-a4e6de765a83" Nov 23 23:03:18.563206 kubelet[2782]: E1123 23:03:18.562761 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" podUID="c7c9f1f0-a20f-4cd1-87de-e2a910e5566a" Nov 23 23:03:20.563462 kubelet[2782]: E1123 23:03:20.563396 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnfrc" podUID="16769e22-23bd-4950-9cc0-72958bdfa903" Nov 23 23:03:21.566715 kubelet[2782]: E1123 23:03:21.566534 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78fbf9698b-q5ccl" podUID="186dba02-b9bc-46ba-b1f6-48d2be5bbd68" Nov 23 23:03:22.562841 kubelet[2782]: E1123 23:03:22.562312 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" podUID="8d85c999-e9d4-4632-99f5-fa0f1c92756a" Nov 23 23:03:23.563517 kubelet[2782]: E1123 23:03:23.562706 2782 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" podUID="5a8e0f96-f5fe-436e-9782-031ed12b446f" Nov 23 23:03:28.564018 kubelet[2782]: E1123 23:03:28.563960 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" podUID="319eb40d-b16d-4daa-b6ab-a4e6de765a83" Nov 23 23:03:28.565079 kubelet[2782]: E1123 23:03:28.564610 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qzvtm" podUID="147a3fcd-da80-4b14-916a-786fd7363b2a" Nov 23 23:03:32.562501 containerd[1525]: time="2025-11-23T23:03:32.562371279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:03:32.901499 containerd[1525]: 
time="2025-11-23T23:03:32.900917218Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:32.902755 containerd[1525]: time="2025-11-23T23:03:32.902656333Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:03:32.902862 containerd[1525]: time="2025-11-23T23:03:32.902804703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:03:32.903191 kubelet[2782]: E1123 23:03:32.903086 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:03:32.903191 kubelet[2782]: E1123 23:03:32.903165 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:03:32.903863 kubelet[2782]: E1123 23:03:32.903745 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64ddx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74998f44b6-zvwmg_calico-system(c7c9f1f0-a20f-4cd1-87de-e2a910e5566a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:32.905261 kubelet[2782]: E1123 23:03:32.905200 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" podUID="c7c9f1f0-a20f-4cd1-87de-e2a910e5566a" Nov 23 23:03:33.564364 containerd[1525]: time="2025-11-23T23:03:33.564061305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:03:33.907410 containerd[1525]: 
time="2025-11-23T23:03:33.907031272Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:33.908571 containerd[1525]: time="2025-11-23T23:03:33.908520971Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:03:33.908676 containerd[1525]: time="2025-11-23T23:03:33.908613697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:03:33.909354 kubelet[2782]: E1123 23:03:33.908884 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:03:33.910372 kubelet[2782]: E1123 23:03:33.909491 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:03:33.910420 containerd[1525]: time="2025-11-23T23:03:33.909839058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:03:33.910902 kubelet[2782]: E1123 23:03:33.910828 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gf4f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnfrc_calico-system(16769e22-23bd-4950-9cc0-72958bdfa903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:34.239404 containerd[1525]: time="2025-11-23T23:03:34.238650445Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:34.240704 containerd[1525]: time="2025-11-23T23:03:34.240581893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:03:34.241002 containerd[1525]: time="2025-11-23T23:03:34.240643937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:03:34.242281 kubelet[2782]: E1123 23:03:34.241183 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:03:34.242281 kubelet[2782]: E1123 23:03:34.241381 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:03:34.242281 kubelet[2782]: E1123 23:03:34.241742 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8713a7dea7e0400190dcc0e99de68523,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k5xzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78fbf9698b-q5ccl_calico-system(186dba02-b9bc-46ba-b1f6-48d2be5bbd68): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:34.242777 containerd[1525]: time="2025-11-23T23:03:34.242690753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 
23:03:34.562711 kubelet[2782]: E1123 23:03:34.562310 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" podUID="8d85c999-e9d4-4632-99f5-fa0f1c92756a" Nov 23 23:03:34.575633 containerd[1525]: time="2025-11-23T23:03:34.575445616Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:34.577252 containerd[1525]: time="2025-11-23T23:03:34.577068763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:03:34.577252 containerd[1525]: time="2025-11-23T23:03:34.577187091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:03:34.577981 kubelet[2782]: E1123 23:03:34.577399 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:03:34.577981 kubelet[2782]: E1123 23:03:34.577453 2782 kuberuntime_image.go:42] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:03:34.577981 kubelet[2782]: E1123 23:03:34.577642 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gf4f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,Proc
Mount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnfrc_calico-system(16769e22-23bd-4950-9cc0-72958bdfa903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:34.578446 containerd[1525]: time="2025-11-23T23:03:34.578422053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:03:34.579392 kubelet[2782]: E1123 23:03:34.579251 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnfrc" podUID="16769e22-23bd-4950-9cc0-72958bdfa903" Nov 23 23:03:34.921100 containerd[1525]: time="2025-11-23T23:03:34.920868320Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:34.922865 containerd[1525]: time="2025-11-23T23:03:34.922728763Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:03:34.922865 containerd[1525]: time="2025-11-23T23:03:34.922794488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:03:34.923019 kubelet[2782]: E1123 23:03:34.922981 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:03:34.924381 kubelet[2782]: E1123 23:03:34.923027 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:03:34.924381 kubelet[2782]: E1123 23:03:34.923158 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5xzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78fbf9698b-q5ccl_calico-system(186dba02-b9bc-46ba-b1f6-48d2be5bbd68): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:34.924658 kubelet[2782]: E1123 23:03:34.924498 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78fbf9698b-q5ccl" podUID="186dba02-b9bc-46ba-b1f6-48d2be5bbd68" Nov 23 23:03:36.564295 containerd[1525]: time="2025-11-23T23:03:36.564182077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:03:36.913315 containerd[1525]: time="2025-11-23T23:03:36.913072236Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:36.915013 containerd[1525]: time="2025-11-23T23:03:36.914871956Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:03:36.915013 containerd[1525]: time="2025-11-23T23:03:36.914980203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:03:36.915294 
kubelet[2782]: E1123 23:03:36.915227 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:03:36.915622 kubelet[2782]: E1123 23:03:36.915309 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:03:36.915622 kubelet[2782]: E1123 23:03:36.915537 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5kpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-747559d9d9-cwq4m_calico-apiserver(5a8e0f96-f5fe-436e-9782-031ed12b446f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:36.918338 kubelet[2782]: E1123 23:03:36.917287 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" podUID="5a8e0f96-f5fe-436e-9782-031ed12b446f" Nov 23 23:03:42.564287 containerd[1525]: time="2025-11-23T23:03:42.564143135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:03:42.903178 containerd[1525]: time="2025-11-23T23:03:42.902822483Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:42.904536 containerd[1525]: time="2025-11-23T23:03:42.904034645Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:03:42.904774 containerd[1525]: time="2025-11-23T23:03:42.904521038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:03:42.904938 kubelet[2782]: E1123 23:03:42.904889 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:03:42.905252 kubelet[2782]: E1123 23:03:42.904949 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:03:42.905252 kubelet[2782]: E1123 23:03:42.905086 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsmnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,Su
bPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qzvtm_calico-system(147a3fcd-da80-4b14-916a-786fd7363b2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:42.906386 kubelet[2782]: E1123 23:03:42.906321 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qzvtm" podUID="147a3fcd-da80-4b14-916a-786fd7363b2a" Nov 23 23:03:43.565057 kubelet[2782]: E1123 23:03:43.564980 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" podUID="c7c9f1f0-a20f-4cd1-87de-e2a910e5566a" Nov 23 23:03:43.566955 containerd[1525]: time="2025-11-23T23:03:43.566757780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:03:43.911118 containerd[1525]: time="2025-11-23T23:03:43.910939180Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:43.912847 containerd[1525]: time="2025-11-23T23:03:43.912709220Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:03:43.912847 containerd[1525]: time="2025-11-23T23:03:43.912751783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:03:43.913187 kubelet[2782]: E1123 23:03:43.912973 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:03:43.913187 kubelet[2782]: E1123 23:03:43.913024 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:03:43.913187 kubelet[2782]: E1123 23:03:43.913156 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pr9fc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bf6b75475-8n9bk_calico-apiserver(319eb40d-b16d-4daa-b6ab-a4e6de765a83): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:43.914438 kubelet[2782]: E1123 23:03:43.914392 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" podUID="319eb40d-b16d-4daa-b6ab-a4e6de765a83" Nov 23 23:03:45.567635 containerd[1525]: time="2025-11-23T23:03:45.567487512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:03:45.905701 containerd[1525]: 
time="2025-11-23T23:03:45.905252995Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:45.907111 containerd[1525]: time="2025-11-23T23:03:45.906937709Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:03:45.907111 containerd[1525]: time="2025-11-23T23:03:45.906958551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:03:45.907563 kubelet[2782]: E1123 23:03:45.907452 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:03:45.908404 kubelet[2782]: E1123 23:03:45.908062 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:03:45.908404 kubelet[2782]: E1123 23:03:45.908311 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lc5rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-747559d9d9-fscb7_calico-apiserver(8d85c999-e9d4-4632-99f5-fa0f1c92756a): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:45.909782 kubelet[2782]: E1123 23:03:45.909656 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" podUID="8d85c999-e9d4-4632-99f5-fa0f1c92756a" Nov 23 23:03:47.568597 kubelet[2782]: E1123 23:03:47.568496 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" podUID="5a8e0f96-f5fe-436e-9782-031ed12b446f" Nov 23 23:03:48.564187 kubelet[2782]: E1123 23:03:48.564126 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78fbf9698b-q5ccl" podUID="186dba02-b9bc-46ba-b1f6-48d2be5bbd68" Nov 23 23:03:48.565140 kubelet[2782]: E1123 23:03:48.564883 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnfrc" podUID="16769e22-23bd-4950-9cc0-72958bdfa903" Nov 23 23:03:55.563360 kubelet[2782]: E1123 23:03:55.563248 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qzvtm" podUID="147a3fcd-da80-4b14-916a-786fd7363b2a" Nov 23 23:03:55.565554 kubelet[2782]: E1123 23:03:55.565501 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" podUID="c7c9f1f0-a20f-4cd1-87de-e2a910e5566a" Nov 23 23:03:57.563414 kubelet[2782]: E1123 23:03:57.562994 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" podUID="319eb40d-b16d-4daa-b6ab-a4e6de765a83" Nov 23 23:04:00.471526 systemd[1]: Started sshd@7-159.69.184.20:22-139.178.68.195:35308.service - OpenSSH per-connection server daemon (139.178.68.195:35308). 
Nov 23 23:04:00.562878 kubelet[2782]: E1123 23:04:00.562643 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" podUID="8d85c999-e9d4-4632-99f5-fa0f1c92756a" Nov 23 23:04:01.463202 sshd[5111]: Accepted publickey for core from 139.178.68.195 port 35308 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:01.466559 sshd-session[5111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:01.474477 systemd-logind[1496]: New session 8 of user core. Nov 23 23:04:01.480322 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 23 23:04:01.568190 kubelet[2782]: E1123 23:04:01.568129 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnfrc" podUID="16769e22-23bd-4950-9cc0-72958bdfa903" Nov 23 23:04:02.290035 sshd[5116]: Connection closed by 139.178.68.195 port 35308 Nov 23 23:04:02.290720 sshd-session[5111]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:02.299133 systemd[1]: sshd@7-159.69.184.20:22-139.178.68.195:35308.service: Deactivated successfully. Nov 23 23:04:02.302719 systemd[1]: session-8.scope: Deactivated successfully. Nov 23 23:04:02.304603 systemd-logind[1496]: Session 8 logged out. Waiting for processes to exit. Nov 23 23:04:02.307475 systemd-logind[1496]: Removed session 8. 
Nov 23 23:04:02.564486 kubelet[2782]: E1123 23:04:02.564123 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" podUID="5a8e0f96-f5fe-436e-9782-031ed12b446f" Nov 23 23:04:02.566121 kubelet[2782]: E1123 23:04:02.566060 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78fbf9698b-q5ccl" podUID="186dba02-b9bc-46ba-b1f6-48d2be5bbd68" Nov 23 23:04:06.562449 kubelet[2782]: E1123 23:04:06.561732 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" podUID="c7c9f1f0-a20f-4cd1-87de-e2a910e5566a" Nov 23 23:04:07.462700 systemd[1]: Started sshd@8-159.69.184.20:22-139.178.68.195:35324.service - OpenSSH per-connection server daemon (139.178.68.195:35324). Nov 23 23:04:08.432748 sshd[5129]: Accepted publickey for core from 139.178.68.195 port 35324 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:08.435617 sshd-session[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:08.441548 systemd-logind[1496]: New session 9 of user core. Nov 23 23:04:08.446569 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 23 23:04:08.563537 kubelet[2782]: E1123 23:04:08.563458 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" podUID="319eb40d-b16d-4daa-b6ab-a4e6de765a83" Nov 23 23:04:09.177993 sshd[5132]: Connection closed by 139.178.68.195 port 35324 Nov 23 23:04:09.180564 sshd-session[5129]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:09.187830 systemd[1]: sshd@8-159.69.184.20:22-139.178.68.195:35324.service: Deactivated successfully. Nov 23 23:04:09.190035 systemd[1]: session-9.scope: Deactivated successfully. 
Nov 23 23:04:09.192062 systemd-logind[1496]: Session 9 logged out. Waiting for processes to exit. Nov 23 23:04:09.195558 systemd-logind[1496]: Removed session 9. Nov 23 23:04:09.352660 systemd[1]: Started sshd@9-159.69.184.20:22-139.178.68.195:35328.service - OpenSSH per-connection server daemon (139.178.68.195:35328). Nov 23 23:04:10.351363 sshd[5168]: Accepted publickey for core from 139.178.68.195 port 35328 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:10.353570 sshd-session[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:10.359981 systemd-logind[1496]: New session 10 of user core. Nov 23 23:04:10.367563 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 23 23:04:10.564474 kubelet[2782]: E1123 23:04:10.564421 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qzvtm" podUID="147a3fcd-da80-4b14-916a-786fd7363b2a" Nov 23 23:04:11.178863 sshd[5171]: Connection closed by 139.178.68.195 port 35328 Nov 23 23:04:11.179274 sshd-session[5168]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:11.191636 systemd[1]: sshd@9-159.69.184.20:22-139.178.68.195:35328.service: Deactivated successfully. Nov 23 23:04:11.195165 systemd[1]: session-10.scope: Deactivated successfully. Nov 23 23:04:11.198820 systemd-logind[1496]: Session 10 logged out. Waiting for processes to exit. Nov 23 23:04:11.201720 systemd-logind[1496]: Removed session 10. 
Nov 23 23:04:11.344901 systemd[1]: Started sshd@10-159.69.184.20:22-139.178.68.195:60774.service - OpenSSH per-connection server daemon (139.178.68.195:60774). Nov 23 23:04:12.314640 sshd[5181]: Accepted publickey for core from 139.178.68.195 port 60774 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:12.319808 sshd-session[5181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:12.333565 systemd-logind[1496]: New session 11 of user core. Nov 23 23:04:12.336640 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 23 23:04:13.086016 sshd[5184]: Connection closed by 139.178.68.195 port 60774 Nov 23 23:04:13.087177 sshd-session[5181]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:13.095139 systemd[1]: sshd@10-159.69.184.20:22-139.178.68.195:60774.service: Deactivated successfully. Nov 23 23:04:13.098530 systemd[1]: session-11.scope: Deactivated successfully. Nov 23 23:04:13.102610 systemd-logind[1496]: Session 11 logged out. Waiting for processes to exit. Nov 23 23:04:13.104412 systemd-logind[1496]: Removed session 11. 
Nov 23 23:04:13.565470 kubelet[2782]: E1123 23:04:13.565182 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" podUID="8d85c999-e9d4-4632-99f5-fa0f1c92756a" Nov 23 23:04:13.565840 kubelet[2782]: E1123 23:04:13.565597 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" podUID="5a8e0f96-f5fe-436e-9782-031ed12b446f" Nov 23 23:04:13.566609 kubelet[2782]: E1123 23:04:13.566492 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnfrc" podUID="16769e22-23bd-4950-9cc0-72958bdfa903" Nov 23 23:04:16.566702 kubelet[2782]: E1123 23:04:16.566631 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78fbf9698b-q5ccl" podUID="186dba02-b9bc-46ba-b1f6-48d2be5bbd68" Nov 23 23:04:17.564378 kubelet[2782]: E1123 23:04:17.563507 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" podUID="c7c9f1f0-a20f-4cd1-87de-e2a910e5566a" Nov 23 23:04:18.258809 systemd[1]: Started sshd@11-159.69.184.20:22-139.178.68.195:60784.service - OpenSSH per-connection server daemon (139.178.68.195:60784). Nov 23 23:04:19.246182 sshd[5201]: Accepted publickey for core from 139.178.68.195 port 60784 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:19.248194 sshd-session[5201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:19.254531 systemd-logind[1496]: New session 12 of user core. Nov 23 23:04:19.263026 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 23 23:04:20.012591 sshd[5204]: Connection closed by 139.178.68.195 port 60784 Nov 23 23:04:20.015861 sshd-session[5201]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:20.020892 systemd[1]: sshd@11-159.69.184.20:22-139.178.68.195:60784.service: Deactivated successfully. Nov 23 23:04:20.025568 systemd[1]: session-12.scope: Deactivated successfully. Nov 23 23:04:20.027835 systemd-logind[1496]: Session 12 logged out. Waiting for processes to exit. Nov 23 23:04:20.030566 systemd-logind[1496]: Removed session 12. 
Nov 23 23:04:23.565068 kubelet[2782]: E1123 23:04:23.564865 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" podUID="319eb40d-b16d-4daa-b6ab-a4e6de765a83" Nov 23 23:04:24.563773 kubelet[2782]: E1123 23:04:24.563474 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qzvtm" podUID="147a3fcd-da80-4b14-916a-786fd7363b2a" Nov 23 23:04:25.181854 systemd[1]: Started sshd@12-159.69.184.20:22-139.178.68.195:60892.service - OpenSSH per-connection server daemon (139.178.68.195:60892). Nov 23 23:04:26.175383 sshd[5218]: Accepted publickey for core from 139.178.68.195 port 60892 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:26.177183 sshd-session[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:26.184140 systemd-logind[1496]: New session 13 of user core. Nov 23 23:04:26.189784 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 23 23:04:26.954362 sshd[5221]: Connection closed by 139.178.68.195 port 60892 Nov 23 23:04:26.954247 sshd-session[5218]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:26.959046 systemd-logind[1496]: Session 13 logged out. Waiting for processes to exit. Nov 23 23:04:26.960270 systemd[1]: sshd@12-159.69.184.20:22-139.178.68.195:60892.service: Deactivated successfully. Nov 23 23:04:26.963700 systemd[1]: session-13.scope: Deactivated successfully. Nov 23 23:04:26.970481 systemd-logind[1496]: Removed session 13. Nov 23 23:04:27.564080 kubelet[2782]: E1123 23:04:27.563453 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" podUID="5a8e0f96-f5fe-436e-9782-031ed12b446f" Nov 23 23:04:27.566915 kubelet[2782]: E1123 23:04:27.566847 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnfrc" podUID="16769e22-23bd-4950-9cc0-72958bdfa903" Nov 23 23:04:28.563713 kubelet[2782]: E1123 23:04:28.563661 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" podUID="8d85c999-e9d4-4632-99f5-fa0f1c92756a" Nov 23 23:04:28.564753 kubelet[2782]: E1123 23:04:28.564604 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" podUID="c7c9f1f0-a20f-4cd1-87de-e2a910e5566a" Nov 23 23:04:29.566589 kubelet[2782]: E1123 23:04:29.566458 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78fbf9698b-q5ccl" podUID="186dba02-b9bc-46ba-b1f6-48d2be5bbd68" Nov 23 23:04:32.128900 systemd[1]: Started sshd@13-159.69.184.20:22-139.178.68.195:46386.service - OpenSSH per-connection server daemon (139.178.68.195:46386). Nov 23 23:04:33.117781 sshd[5235]: Accepted publickey for core from 139.178.68.195 port 46386 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:33.120633 sshd-session[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:33.128265 systemd-logind[1496]: New session 14 of user core. Nov 23 23:04:33.135716 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 23 23:04:33.886324 sshd[5238]: Connection closed by 139.178.68.195 port 46386 Nov 23 23:04:33.886721 sshd-session[5235]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:33.895164 systemd[1]: sshd@13-159.69.184.20:22-139.178.68.195:46386.service: Deactivated successfully. Nov 23 23:04:33.900256 systemd[1]: session-14.scope: Deactivated successfully. Nov 23 23:04:33.904914 systemd-logind[1496]: Session 14 logged out. Waiting for processes to exit. Nov 23 23:04:33.907700 systemd-logind[1496]: Removed session 14. Nov 23 23:04:34.060803 systemd[1]: Started sshd@14-159.69.184.20:22-139.178.68.195:46392.service - OpenSSH per-connection server daemon (139.178.68.195:46392). 
Nov 23 23:04:35.060554 sshd[5250]: Accepted publickey for core from 139.178.68.195 port 46392 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:35.062564 sshd-session[5250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:35.069238 systemd-logind[1496]: New session 15 of user core. Nov 23 23:04:35.075605 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 23 23:04:35.969676 sshd[5253]: Connection closed by 139.178.68.195 port 46392 Nov 23 23:04:35.970548 sshd-session[5250]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:35.975180 systemd[1]: sshd@14-159.69.184.20:22-139.178.68.195:46392.service: Deactivated successfully. Nov 23 23:04:35.980100 systemd[1]: session-15.scope: Deactivated successfully. Nov 23 23:04:35.983129 systemd-logind[1496]: Session 15 logged out. Waiting for processes to exit. Nov 23 23:04:35.985814 systemd-logind[1496]: Removed session 15. Nov 23 23:04:36.140239 systemd[1]: Started sshd@15-159.69.184.20:22-139.178.68.195:46402.service - OpenSSH per-connection server daemon (139.178.68.195:46402). 
Nov 23 23:04:36.562914 kubelet[2782]: E1123 23:04:36.562865 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qzvtm" podUID="147a3fcd-da80-4b14-916a-786fd7363b2a" Nov 23 23:04:36.564035 kubelet[2782]: E1123 23:04:36.563594 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" podUID="319eb40d-b16d-4daa-b6ab-a4e6de765a83" Nov 23 23:04:37.121670 sshd[5263]: Accepted publickey for core from 139.178.68.195 port 46402 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:37.123361 sshd-session[5263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:37.128490 systemd-logind[1496]: New session 16 of user core. Nov 23 23:04:37.131545 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 23 23:04:38.562810 kubelet[2782]: E1123 23:04:38.562753 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" podUID="5a8e0f96-f5fe-436e-9782-031ed12b446f" Nov 23 23:04:38.648803 sshd[5266]: Connection closed by 139.178.68.195 port 46402 Nov 23 23:04:38.648684 sshd-session[5263]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:38.658570 systemd[1]: sshd@15-159.69.184.20:22-139.178.68.195:46402.service: Deactivated successfully. Nov 23 23:04:38.663498 systemd[1]: session-16.scope: Deactivated successfully. Nov 23 23:04:38.666892 systemd-logind[1496]: Session 16 logged out. Waiting for processes to exit. Nov 23 23:04:38.670502 systemd-logind[1496]: Removed session 16. Nov 23 23:04:38.822578 systemd[1]: Started sshd@16-159.69.184.20:22-139.178.68.195:46412.service - OpenSSH per-connection server daemon (139.178.68.195:46412). Nov 23 23:04:39.831651 sshd[5311]: Accepted publickey for core from 139.178.68.195 port 46412 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:39.833422 sshd-session[5311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:39.838626 systemd-logind[1496]: New session 17 of user core. Nov 23 23:04:39.845849 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 23 23:04:40.727987 sshd[5314]: Connection closed by 139.178.68.195 port 46412 Nov 23 23:04:40.728795 sshd-session[5311]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:40.737530 systemd[1]: sshd@16-159.69.184.20:22-139.178.68.195:46412.service: Deactivated successfully. Nov 23 23:04:40.739822 systemd-logind[1496]: Session 17 logged out. Waiting for processes to exit. Nov 23 23:04:40.740815 systemd[1]: session-17.scope: Deactivated successfully. Nov 23 23:04:40.744796 systemd-logind[1496]: Removed session 17. Nov 23 23:04:40.900585 systemd[1]: Started sshd@17-159.69.184.20:22-139.178.68.195:36040.service - OpenSSH per-connection server daemon (139.178.68.195:36040). Nov 23 23:04:41.565820 kubelet[2782]: E1123 23:04:41.565648 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78fbf9698b-q5ccl" podUID="186dba02-b9bc-46ba-b1f6-48d2be5bbd68" Nov 23 23:04:41.567895 kubelet[2782]: E1123 23:04:41.566559 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnfrc" podUID="16769e22-23bd-4950-9cc0-72958bdfa903" Nov 23 23:04:41.888788 sshd[5324]: Accepted publickey for core from 139.178.68.195 port 36040 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:41.889926 sshd-session[5324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:41.897824 systemd-logind[1496]: New session 18 of user core. Nov 23 23:04:41.906664 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 23 23:04:42.646254 sshd[5327]: Connection closed by 139.178.68.195 port 36040 Nov 23 23:04:42.648806 sshd-session[5324]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:42.654662 systemd-logind[1496]: Session 18 logged out. Waiting for processes to exit. Nov 23 23:04:42.655056 systemd[1]: sshd@17-159.69.184.20:22-139.178.68.195:36040.service: Deactivated successfully. Nov 23 23:04:42.659600 systemd[1]: session-18.scope: Deactivated successfully. Nov 23 23:04:42.663578 systemd-logind[1496]: Removed session 18. 
Nov 23 23:04:43.564188 kubelet[2782]: E1123 23:04:43.564023 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" podUID="8d85c999-e9d4-4632-99f5-fa0f1c92756a" Nov 23 23:04:43.565989 kubelet[2782]: E1123 23:04:43.565600 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" podUID="c7c9f1f0-a20f-4cd1-87de-e2a910e5566a" Nov 23 23:04:47.841597 systemd[1]: Started sshd@18-159.69.184.20:22-139.178.68.195:36044.service - OpenSSH per-connection server daemon (139.178.68.195:36044). Nov 23 23:04:48.908281 sshd[5341]: Accepted publickey for core from 139.178.68.195 port 36044 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:48.911361 sshd-session[5341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:48.917517 systemd-logind[1496]: New session 19 of user core. Nov 23 23:04:48.923585 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 23 23:04:49.563390 kubelet[2782]: E1123 23:04:49.563024 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" podUID="5a8e0f96-f5fe-436e-9782-031ed12b446f" Nov 23 23:04:49.745474 sshd[5344]: Connection closed by 139.178.68.195 port 36044 Nov 23 23:04:49.747746 sshd-session[5341]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:49.757250 systemd[1]: sshd@18-159.69.184.20:22-139.178.68.195:36044.service: Deactivated successfully. Nov 23 23:04:49.760960 systemd[1]: session-19.scope: Deactivated successfully. Nov 23 23:04:49.764966 systemd-logind[1496]: Session 19 logged out. Waiting for processes to exit. Nov 23 23:04:49.766180 systemd-logind[1496]: Removed session 19. 
Nov 23 23:04:50.562424 kubelet[2782]: E1123 23:04:50.562006 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" podUID="319eb40d-b16d-4daa-b6ab-a4e6de765a83" Nov 23 23:04:51.562373 kubelet[2782]: E1123 23:04:51.562072 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qzvtm" podUID="147a3fcd-da80-4b14-916a-786fd7363b2a" Nov 23 23:04:52.562454 kubelet[2782]: E1123 23:04:52.562379 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnfrc" podUID="16769e22-23bd-4950-9cc0-72958bdfa903" Nov 23 23:04:54.898615 systemd[1]: Started sshd@19-159.69.184.20:22-139.178.68.195:54092.service - OpenSSH per-connection server daemon (139.178.68.195:54092). Nov 23 23:04:55.566511 containerd[1525]: time="2025-11-23T23:04:55.565671866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:04:55.872478 sshd[5363]: Accepted publickey for core from 139.178.68.195 port 54092 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:55.875389 sshd-session[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:55.881605 systemd-logind[1496]: New session 20 of user core. Nov 23 23:04:55.886595 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 23 23:04:55.917260 containerd[1525]: time="2025-11-23T23:04:55.917165335Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:04:55.918814 containerd[1525]: time="2025-11-23T23:04:55.918758099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:04:55.918901 containerd[1525]: time="2025-11-23T23:04:55.918861340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:04:55.919322 kubelet[2782]: E1123 23:04:55.919276 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:04:55.919638 kubelet[2782]: E1123 23:04:55.919341 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:04:55.920625 containerd[1525]: time="2025-11-23T23:04:55.920410984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:04:55.923496 kubelet[2782]: E1123 23:04:55.922976 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8713a7dea7e0400190dcc0e99de68523,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k5xzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78fbf9698b-q5ccl_calico-system(186dba02-b9bc-46ba-b1f6-48d2be5bbd68): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:04:56.253748 containerd[1525]: time="2025-11-23T23:04:56.253673085Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:04:56.255335 
containerd[1525]: time="2025-11-23T23:04:56.255259411Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:04:56.256427 containerd[1525]: time="2025-11-23T23:04:56.255300971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:04:56.256685 kubelet[2782]: E1123 23:04:56.256642 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:04:56.256891 kubelet[2782]: E1123 23:04:56.256870 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:04:56.257419 kubelet[2782]: E1123 23:04:56.257163 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64ddx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74998f44b6-zvwmg_calico-system(c7c9f1f0-a20f-4cd1-87de-e2a910e5566a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:04:56.258527 containerd[1525]: time="2025-11-23T23:04:56.257309579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:04:56.258619 kubelet[2782]: E1123 23:04:56.258532 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" podUID="c7c9f1f0-a20f-4cd1-87de-e2a910e5566a" Nov 23 23:04:56.594679 
containerd[1525]: time="2025-11-23T23:04:56.594059159Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:04:56.596941 containerd[1525]: time="2025-11-23T23:04:56.596804769Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:04:56.596941 containerd[1525]: time="2025-11-23T23:04:56.596905929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:04:56.597115 kubelet[2782]: E1123 23:04:56.597078 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:04:56.597165 kubelet[2782]: E1123 23:04:56.597129 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:04:56.597302 kubelet[2782]: E1123 23:04:56.597239 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5xzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78fbf9698b-q5ccl_calico-system(186dba02-b9bc-46ba-b1f6-48d2be5bbd68): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:04:56.598610 kubelet[2782]: E1123 23:04:56.598419 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78fbf9698b-q5ccl" podUID="186dba02-b9bc-46ba-b1f6-48d2be5bbd68" Nov 23 23:04:56.611828 sshd[5366]: Connection closed by 139.178.68.195 port 54092 Nov 23 23:04:56.612810 sshd-session[5363]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:56.618366 systemd-logind[1496]: Session 20 logged out. Waiting for processes to exit. Nov 23 23:04:56.618536 systemd[1]: sshd@19-159.69.184.20:22-139.178.68.195:54092.service: Deactivated successfully. Nov 23 23:04:56.621811 systemd[1]: session-20.scope: Deactivated successfully. Nov 23 23:04:56.625595 systemd-logind[1496]: Removed session 20. 
Nov 23 23:04:57.562275 kubelet[2782]: E1123 23:04:57.561748 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" podUID="8d85c999-e9d4-4632-99f5-fa0f1c92756a" Nov 23 23:05:03.566805 containerd[1525]: time="2025-11-23T23:05:03.566762124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:05:03.905274 containerd[1525]: time="2025-11-23T23:05:03.905094083Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:05:03.907030 containerd[1525]: time="2025-11-23T23:05:03.906927659Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:05:03.907271 containerd[1525]: time="2025-11-23T23:05:03.906944060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:05:03.907596 kubelet[2782]: E1123 23:05:03.907538 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:05:03.908493 kubelet[2782]: E1123 
23:05:03.908117 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:05:03.908918 kubelet[2782]: E1123 23:05:03.908806 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5kpz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-747559d9d9-cwq4m_calico-apiserver(5a8e0f96-f5fe-436e-9782-031ed12b446f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:05:03.910488 kubelet[2782]: E1123 23:05:03.910379 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" podUID="5a8e0f96-f5fe-436e-9782-031ed12b446f" Nov 23 23:05:05.563817 containerd[1525]: time="2025-11-23T23:05:05.563288690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:05:06.099709 containerd[1525]: 
time="2025-11-23T23:05:06.099545803Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:05:06.101249 containerd[1525]: time="2025-11-23T23:05:06.101180941Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:05:06.101642 kubelet[2782]: E1123 23:05:06.101580 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:05:06.102204 kubelet[2782]: E1123 23:05:06.101656 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:05:06.102276 containerd[1525]: time="2025-11-23T23:05:06.101324663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:05:06.102452 kubelet[2782]: E1123 23:05:06.102345 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gf4f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnfrc_calico-system(16769e22-23bd-4950-9cc0-72958bdfa903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:05:06.103455 containerd[1525]: time="2025-11-23T23:05:06.102750479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:05:06.443296 containerd[1525]: time="2025-11-23T23:05:06.442573732Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:05:06.444630 containerd[1525]: time="2025-11-23T23:05:06.444515073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:05:06.444824 containerd[1525]: time="2025-11-23T23:05:06.444634515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:05:06.444894 kubelet[2782]: E1123 23:05:06.444833 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:05:06.445037 kubelet[2782]: E1123 23:05:06.444896 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:05:06.445521 kubelet[2782]: E1123 23:05:06.445245 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pr9fc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6bf6b75475-8n9bk_calico-apiserver(319eb40d-b16d-4daa-b6ab-a4e6de765a83): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:05:06.445974 containerd[1525]: time="2025-11-23T23:05:06.445719087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:05:06.446673 kubelet[2782]: E1123 23:05:06.446625 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6bf6b75475-8n9bk" podUID="319eb40d-b16d-4daa-b6ab-a4e6de765a83" Nov 23 23:05:06.795518 containerd[1525]: 
time="2025-11-23T23:05:06.795462531Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:05:06.797279 containerd[1525]: time="2025-11-23T23:05:06.797155350Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:05:06.797279 containerd[1525]: time="2025-11-23T23:05:06.797219871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:05:06.797527 kubelet[2782]: E1123 23:05:06.797421 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:05:06.797527 kubelet[2782]: E1123 23:05:06.797496 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:05:06.797940 kubelet[2782]: E1123 23:05:06.797818 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gf4f6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnfrc_calico-system(16769e22-23bd-4950-9cc0-72958bdfa903): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:05:06.798101 containerd[1525]: time="2025-11-23T23:05:06.797892879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:05:06.799300 kubelet[2782]: E1123 23:05:06.799246 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnfrc" podUID="16769e22-23bd-4950-9cc0-72958bdfa903" Nov 23 23:05:07.155472 containerd[1525]: time="2025-11-23T23:05:07.155045471Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:05:07.157484 containerd[1525]: time="2025-11-23T23:05:07.157302698Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:05:07.157484 containerd[1525]: time="2025-11-23T23:05:07.157363459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:05:07.157744 kubelet[2782]: E1123 23:05:07.157683 2782 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:05:07.158040 kubelet[2782]: E1123 23:05:07.157762 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:05:07.158087 kubelet[2782]: E1123 23:05:07.157974 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key
-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsmnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qzvtm_calico-system(147a3fcd-da80-4b14-916a-786fd7363b2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:05:07.159359 kubelet[2782]: E1123 23:05:07.159281 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" 
with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qzvtm" podUID="147a3fcd-da80-4b14-916a-786fd7363b2a" Nov 23 23:05:09.567884 kubelet[2782]: E1123 23:05:09.567829 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" podUID="c7c9f1f0-a20f-4cd1-87de-e2a910e5566a" Nov 23 23:05:09.568678 containerd[1525]: time="2025-11-23T23:05:09.568648639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:05:09.568986 kubelet[2782]: E1123 23:05:09.568808 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78fbf9698b-q5ccl" podUID="186dba02-b9bc-46ba-b1f6-48d2be5bbd68" Nov 23 23:05:09.901199 containerd[1525]: time="2025-11-23T23:05:09.900943520Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:05:09.903812 containerd[1525]: time="2025-11-23T23:05:09.903629236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:05:09.903812 containerd[1525]: time="2025-11-23T23:05:09.903690996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:05:09.904469 kubelet[2782]: E1123 23:05:09.904409 2782 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:05:09.904597 kubelet[2782]: E1123 23:05:09.904579 2782 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:05:09.904984 kubelet[2782]: E1123 23:05:09.904933 2782 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lc5rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-747559d9d9-fscb7_calico-apiserver(8d85c999-e9d4-4632-99f5-fa0f1c92756a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:05:09.906346 kubelet[2782]: E1123 23:05:09.906279 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-fscb7" podUID="8d85c999-e9d4-4632-99f5-fa0f1c92756a" Nov 23 23:05:12.446908 systemd[1]: cri-containerd-46ad6a4660aa5d0b7b7f28e59100e80954756f03500f16025864e3ae6cef15be.scope: Deactivated successfully. 
Nov 23 23:05:12.447839 systemd[1]: cri-containerd-46ad6a4660aa5d0b7b7f28e59100e80954756f03500f16025864e3ae6cef15be.scope: Consumed 41.324s CPU time, 102.9M memory peak. Nov 23 23:05:12.451431 containerd[1525]: time="2025-11-23T23:05:12.451241835Z" level=info msg="received container exit event container_id:\"46ad6a4660aa5d0b7b7f28e59100e80954756f03500f16025864e3ae6cef15be\" id:\"46ad6a4660aa5d0b7b7f28e59100e80954756f03500f16025864e3ae6cef15be\" pid:3113 exit_status:1 exited_at:{seconds:1763939112 nanos:450176099}" Nov 23 23:05:12.479659 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46ad6a4660aa5d0b7b7f28e59100e80954756f03500f16025864e3ae6cef15be-rootfs.mount: Deactivated successfully. Nov 23 23:05:12.900807 kubelet[2782]: E1123 23:05:12.900739 2782 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44542->10.0.0.2:2379: read: connection timed out" Nov 23 23:05:12.922013 systemd[1]: cri-containerd-dae18f5d8d6fe6160dbd3da7036b0f294cad3611872fea540783297f0a68ff3d.scope: Deactivated successfully. Nov 23 23:05:12.923264 systemd[1]: cri-containerd-dae18f5d8d6fe6160dbd3da7036b0f294cad3611872fea540783297f0a68ff3d.scope: Consumed 4.248s CPU time, 64M memory peak, 3M read from disk. Nov 23 23:05:12.926220 containerd[1525]: time="2025-11-23T23:05:12.926019120Z" level=info msg="received container exit event container_id:\"dae18f5d8d6fe6160dbd3da7036b0f294cad3611872fea540783297f0a68ff3d\" id:\"dae18f5d8d6fe6160dbd3da7036b0f294cad3611872fea540783297f0a68ff3d\" pid:2608 exit_status:1 exited_at:{seconds:1763939112 nanos:925631994}" Nov 23 23:05:12.954133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dae18f5d8d6fe6160dbd3da7036b0f294cad3611872fea540783297f0a68ff3d-rootfs.mount: Deactivated successfully. 
Nov 23 23:05:13.220489 kubelet[2782]: I1123 23:05:13.219967 2782 status_manager.go:895] "Failed to get status for pod" podUID="c7c9f1f0-a20f-4cd1-87de-e2a910e5566a" pod="calico-system/calico-kube-controllers-74998f44b6-zvwmg" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44478->10.0.0.2:2379: read: connection timed out" Nov 23 23:05:13.220489 kubelet[2782]: E1123 23:05:13.220053 2782 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:44356->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-apiserver-747559d9d9-cwq4m.187ac51d6d387674 calico-apiserver 1391 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-747559d9d9-cwq4m,UID:5a8e0f96-f5fe-436e-9782-031ed12b446f,APIVersion:v1,ResourceVersion:817,FieldPath:spec.containers{calico-apiserver},},Reason:Pulling,Message:Pulling image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4459-2-1-9-52b78fad11,},FirstTimestamp:2025-11-23 23:02:14 +0000 UTC,LastTimestamp:2025-11-23 23:05:03.564200461 +0000 UTC m=+220.146606245,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-1-9-52b78fad11,}" Nov 23 23:05:13.390616 kubelet[2782]: I1123 23:05:13.390565 2782 scope.go:117] "RemoveContainer" containerID="dae18f5d8d6fe6160dbd3da7036b0f294cad3611872fea540783297f0a68ff3d" Nov 23 23:05:13.394750 kubelet[2782]: I1123 23:05:13.394699 2782 scope.go:117] "RemoveContainer" containerID="46ad6a4660aa5d0b7b7f28e59100e80954756f03500f16025864e3ae6cef15be" Nov 23 23:05:13.398023 containerd[1525]: time="2025-11-23T23:05:13.397782325Z" level=info msg="CreateContainer within sandbox \"474eb53c4f8916412d299b039d35ac68b1bea8116c04dd1234e33890c36f8637\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 23 23:05:13.398023 containerd[1525]: time="2025-11-23T23:05:13.397901007Z" level=info msg="CreateContainer within sandbox \"24b2a11a2740fb9ac401dcfafc8fd6127c184543193bb40f030fd83a91015812\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 23 23:05:13.409603 containerd[1525]: time="2025-11-23T23:05:13.409551311Z" level=info msg="Container a6e0388950daa0d20a466e1ee8cb11da6c311c8550a1bbce84c6e1fac2aa88e8: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:13.418374 containerd[1525]: time="2025-11-23T23:05:13.417579398Z" level=info msg="Container 13bd2caebaf040e0a020c63741984ac1921a0b0ad36651cb74532f6d69c12519: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:13.427915 containerd[1525]: time="2025-11-23T23:05:13.427860280Z" level=info msg="CreateContainer within sandbox \"24b2a11a2740fb9ac401dcfafc8fd6127c184543193bb40f030fd83a91015812\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a6e0388950daa0d20a466e1ee8cb11da6c311c8550a1bbce84c6e1fac2aa88e8\"" Nov 23 23:05:13.429095 containerd[1525]: time="2025-11-23T23:05:13.429058859Z" level=info msg="StartContainer for \"a6e0388950daa0d20a466e1ee8cb11da6c311c8550a1bbce84c6e1fac2aa88e8\"" Nov 23 23:05:13.431844 containerd[1525]: time="2025-11-23T23:05:13.431795582Z" level=info msg="connecting to shim a6e0388950daa0d20a466e1ee8cb11da6c311c8550a1bbce84c6e1fac2aa88e8" address="unix:///run/containerd/s/22f79a000d9cf82fa8cf619a8879880dcddf20e3c9841bf05c8b439bf64423a5" protocol=ttrpc version=3 Nov 23 23:05:13.435630 containerd[1525]: time="2025-11-23T23:05:13.435318718Z" level=info msg="CreateContainer within sandbox \"474eb53c4f8916412d299b039d35ac68b1bea8116c04dd1234e33890c36f8637\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"13bd2caebaf040e0a020c63741984ac1921a0b0ad36651cb74532f6d69c12519\"" Nov 23 23:05:13.436966 containerd[1525]: 
time="2025-11-23T23:05:13.436916383Z" level=info msg="StartContainer for \"13bd2caebaf040e0a020c63741984ac1921a0b0ad36651cb74532f6d69c12519\"" Nov 23 23:05:13.439727 containerd[1525]: time="2025-11-23T23:05:13.439672547Z" level=info msg="connecting to shim 13bd2caebaf040e0a020c63741984ac1921a0b0ad36651cb74532f6d69c12519" address="unix:///run/containerd/s/d9447609d5c08af04bc09b7aa50fac0b97bc4b468e48dfedca7f89053b5b7733" protocol=ttrpc version=3 Nov 23 23:05:13.466536 systemd[1]: Started cri-containerd-13bd2caebaf040e0a020c63741984ac1921a0b0ad36651cb74532f6d69c12519.scope - libcontainer container 13bd2caebaf040e0a020c63741984ac1921a0b0ad36651cb74532f6d69c12519. Nov 23 23:05:13.475025 systemd[1]: Started cri-containerd-a6e0388950daa0d20a466e1ee8cb11da6c311c8550a1bbce84c6e1fac2aa88e8.scope - libcontainer container a6e0388950daa0d20a466e1ee8cb11da6c311c8550a1bbce84c6e1fac2aa88e8. Nov 23 23:05:13.542015 containerd[1525]: time="2025-11-23T23:05:13.541957723Z" level=info msg="StartContainer for \"13bd2caebaf040e0a020c63741984ac1921a0b0ad36651cb74532f6d69c12519\" returns successfully" Nov 23 23:05:13.573637 containerd[1525]: time="2025-11-23T23:05:13.573590382Z" level=info msg="StartContainer for \"a6e0388950daa0d20a466e1ee8cb11da6c311c8550a1bbce84c6e1fac2aa88e8\" returns successfully" Nov 23 23:05:16.562419 kubelet[2782]: E1123 23:05:16.561828 2782 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747559d9d9-cwq4m" podUID="5a8e0f96-f5fe-436e-9782-031ed12b446f"