Jan 28 00:47:16.078906 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Jan 28 00:47:16.078924 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Jan 27 22:35:34 -00 2026
Jan 28 00:47:16.078938 kernel: KASLR enabled
Jan 28 00:47:16.078942 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 28 00:47:16.078945 kernel: printk: legacy bootconsole [pl11] enabled
Jan 28 00:47:16.078951 kernel: efi: EFI v2.7 by EDK II
Jan 28 00:47:16.078956 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e3f9018 RNG=0x3f979998 MEMRESERVE=0x3db83598
Jan 28 00:47:16.078960 kernel: random: crng init done
Jan 28 00:47:16.078964 kernel: secureboot: Secure boot disabled
Jan 28 00:47:16.078967 kernel: ACPI: Early table checksum verification disabled
Jan 28 00:47:16.078971 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Jan 28 00:47:16.078975 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 00:47:16.078979 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 00:47:16.078983 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 28 00:47:16.078989 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 00:47:16.078994 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 00:47:16.078998 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 00:47:16.079002 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 00:47:16.079006 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 00:47:16.079011 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 00:47:16.079016 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 28 00:47:16.079020 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 28 00:47:16.079024 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 28 00:47:16.079028 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 28 00:47:16.079032 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 28 00:47:16.079037 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Jan 28 00:47:16.079041 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Jan 28 00:47:16.079045 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 28 00:47:16.079049 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 28 00:47:16.079054 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 28 00:47:16.079059 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 28 00:47:16.079063 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 28 00:47:16.079067 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 28 00:47:16.079072 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 28 00:47:16.079076 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 28 00:47:16.079080 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 28 00:47:16.079084 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Jan 28 00:47:16.079089 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Jan 28 00:47:16.079093 kernel: Zone ranges:
Jan 28 00:47:16.079097 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 28 00:47:16.079104 kernel: DMA32 empty
Jan 28 00:47:16.079109 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 28 00:47:16.079113 kernel: Device empty
Jan 28 00:47:16.079117 kernel: Movable zone start for each node
Jan 28 00:47:16.079121 kernel: Early memory node ranges
Jan 28 00:47:16.079126 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 28 00:47:16.079131 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Jan 28 00:47:16.079136 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Jan 28 00:47:16.079140 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Jan 28 00:47:16.079144 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Jan 28 00:47:16.079149 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Jan 28 00:47:16.079153 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 28 00:47:16.079157 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 28 00:47:16.079162 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 28 00:47:16.079166 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Jan 28 00:47:16.079170 kernel: psci: probing for conduit method from ACPI.
Jan 28 00:47:16.079175 kernel: psci: PSCIv1.3 detected in firmware.
Jan 28 00:47:16.079179 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 28 00:47:16.079184 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 28 00:47:16.079189 kernel: psci: SMC Calling Convention v1.4
Jan 28 00:47:16.079193 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 28 00:47:16.079197 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 28 00:47:16.079202 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 28 00:47:16.079206 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 28 00:47:16.079211 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 28 00:47:16.079215 kernel: Detected PIPT I-cache on CPU0
Jan 28 00:47:16.079219 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Jan 28 00:47:16.079224 kernel: CPU features: detected: GIC system register CPU interface
Jan 28 00:47:16.079228 kernel: CPU features: detected: Spectre-v4
Jan 28 00:47:16.079233 kernel: CPU features: detected: Spectre-BHB
Jan 28 00:47:16.079238 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 28 00:47:16.079242 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 28 00:47:16.079247 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Jan 28 00:47:16.079251 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 28 00:47:16.079255 kernel: alternatives: applying boot alternatives
Jan 28 00:47:16.079261 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f94df361d6ccbf6d3bccdda215ef8c4de18f0915f7435d65b20126d9bf4aaef1
Jan 28 00:47:16.079265 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 28 00:47:16.079287 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 28 00:47:16.079292 kernel: Fallback order for Node 0: 0
Jan 28 00:47:16.079296 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Jan 28 00:47:16.079302 kernel: Policy zone: Normal
Jan 28 00:47:16.079306 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 28 00:47:16.079311 kernel: software IO TLB: area num 2.
Jan 28 00:47:16.079315 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB)
Jan 28 00:47:16.079319 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 28 00:47:16.079324 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 28 00:47:16.079329 kernel: rcu: RCU event tracing is enabled.
Jan 28 00:47:16.079333 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 28 00:47:16.079338 kernel: Trampoline variant of Tasks RCU enabled.
Jan 28 00:47:16.079342 kernel: Tracing variant of Tasks RCU enabled.
Jan 28 00:47:16.079347 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 28 00:47:16.079351 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 28 00:47:16.079357 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 28 00:47:16.079361 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 28 00:47:16.079365 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 28 00:47:16.079370 kernel: GICv3: 960 SPIs implemented
Jan 28 00:47:16.079374 kernel: GICv3: 0 Extended SPIs implemented
Jan 28 00:47:16.079378 kernel: Root IRQ handler: gic_handle_irq
Jan 28 00:47:16.079383 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Jan 28 00:47:16.079387 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Jan 28 00:47:16.079392 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 28 00:47:16.079396 kernel: ITS: No ITS available, not enabling LPIs
Jan 28 00:47:16.079400 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 28 00:47:16.079406 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Jan 28 00:47:16.079410 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 28 00:47:16.079415 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Jan 28 00:47:16.079420 kernel: Console: colour dummy device 80x25
Jan 28 00:47:16.079425 kernel: printk: legacy console [tty1] enabled
Jan 28 00:47:16.079429 kernel: ACPI: Core revision 20240827
Jan 28 00:47:16.079434 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Jan 28 00:47:16.079439 kernel: pid_max: default: 32768 minimum: 301
Jan 28 00:47:16.079443 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 28 00:47:16.079448 kernel: landlock: Up and running.
Jan 28 00:47:16.079453 kernel: SELinux: Initializing.
Jan 28 00:47:16.079458 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 00:47:16.079462 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 28 00:47:16.079467 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Jan 28 00:47:16.079472 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0
Jan 28 00:47:16.079480 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 28 00:47:16.079485 kernel: rcu: Hierarchical SRCU implementation.
Jan 28 00:47:16.079490 kernel: rcu: Max phase no-delay instances is 400.
Jan 28 00:47:16.079495 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 28 00:47:16.079500 kernel: Remapping and enabling EFI services.
Jan 28 00:47:16.079504 kernel: smp: Bringing up secondary CPUs ...
Jan 28 00:47:16.079509 kernel: Detected PIPT I-cache on CPU1
Jan 28 00:47:16.079515 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 28 00:47:16.079520 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Jan 28 00:47:16.079524 kernel: smp: Brought up 1 node, 2 CPUs
Jan 28 00:47:16.079529 kernel: SMP: Total of 2 processors activated.
Jan 28 00:47:16.079534 kernel: CPU: All CPU(s) started at EL1
Jan 28 00:47:16.079540 kernel: CPU features: detected: 32-bit EL0 Support
Jan 28 00:47:16.079545 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 28 00:47:16.079549 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 28 00:47:16.079554 kernel: CPU features: detected: Common not Private translations
Jan 28 00:47:16.079559 kernel: CPU features: detected: CRC32 instructions
Jan 28 00:47:16.079564 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Jan 28 00:47:16.079569 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 28 00:47:16.079573 kernel: CPU features: detected: LSE atomic instructions
Jan 28 00:47:16.079578 kernel: CPU features: detected: Privileged Access Never
Jan 28 00:47:16.079584 kernel: CPU features: detected: Speculation barrier (SB)
Jan 28 00:47:16.079589 kernel: CPU features: detected: TLB range maintenance instructions
Jan 28 00:47:16.079594 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 28 00:47:16.079598 kernel: CPU features: detected: Scalable Vector Extension
Jan 28 00:47:16.079603 kernel: alternatives: applying system-wide alternatives
Jan 28 00:47:16.079608 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jan 28 00:47:16.079613 kernel: SVE: maximum available vector length 16 bytes per vector
Jan 28 00:47:16.079617 kernel: SVE: default vector length 16 bytes per vector
Jan 28 00:47:16.079623 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved)
Jan 28 00:47:16.079628 kernel: devtmpfs: initialized
Jan 28 00:47:16.079633 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 28 00:47:16.079638 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 28 00:47:16.079643 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 28 00:47:16.079647 kernel: 0 pages in range for non-PLT usage
Jan 28 00:47:16.079652 kernel: 508400 pages in range for PLT usage
Jan 28 00:47:16.079657 kernel: pinctrl core: initialized pinctrl subsystem
Jan 28 00:47:16.079662 kernel: SMBIOS 3.1.0 present.
Jan 28 00:47:16.079667 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Jan 28 00:47:16.079672 kernel: DMI: Memory slots populated: 2/2
Jan 28 00:47:16.079677 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 28 00:47:16.079682 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 28 00:47:16.079687 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 28 00:47:16.079692 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 28 00:47:16.079697 kernel: audit: initializing netlink subsys (disabled)
Jan 28 00:47:16.079702 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Jan 28 00:47:16.079706 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 28 00:47:16.079712 kernel: cpuidle: using governor menu
Jan 28 00:47:16.079717 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 28 00:47:16.079722 kernel: ASID allocator initialised with 32768 entries
Jan 28 00:47:16.079726 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 28 00:47:16.079731 kernel: Serial: AMBA PL011 UART driver
Jan 28 00:47:16.079736 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 28 00:47:16.079741 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 28 00:47:16.079746 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 28 00:47:16.079750 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 28 00:47:16.079756 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 28 00:47:16.079761 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 28 00:47:16.079766 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 28 00:47:16.079771 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 28 00:47:16.079775 kernel: ACPI: Added _OSI(Module Device)
Jan 28 00:47:16.079780 kernel: ACPI: Added _OSI(Processor Device)
Jan 28 00:47:16.079785 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 28 00:47:16.079790 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 28 00:47:16.079795 kernel: ACPI: Interpreter enabled
Jan 28 00:47:16.079800 kernel: ACPI: Using GIC for interrupt routing
Jan 28 00:47:16.079805 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 28 00:47:16.079810 kernel: printk: legacy console [ttyAMA0] enabled
Jan 28 00:47:16.079815 kernel: printk: legacy bootconsole [pl11] disabled
Jan 28 00:47:16.079820 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 28 00:47:16.079824 kernel: ACPI: CPU0 has been hot-added
Jan 28 00:47:16.079829 kernel: ACPI: CPU1 has been hot-added
Jan 28 00:47:16.079834 kernel: iommu: Default domain type: Translated
Jan 28 00:47:16.079839 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 28 00:47:16.079844 kernel: efivars: Registered efivars operations
Jan 28 00:47:16.079849 kernel: vgaarb: loaded
Jan 28 00:47:16.079854 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 28 00:47:16.079859 kernel: VFS: Disk quotas dquot_6.6.0
Jan 28 00:47:16.079863 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 28 00:47:16.079868 kernel: pnp: PnP ACPI init
Jan 28 00:47:16.079873 kernel: pnp: PnP ACPI: found 0 devices
Jan 28 00:47:16.079878 kernel: NET: Registered PF_INET protocol family
Jan 28 00:47:16.079882 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 28 00:47:16.079887 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 28 00:47:16.079893 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 28 00:47:16.079898 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 28 00:47:16.079902 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 28 00:47:16.079907 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 28 00:47:16.079912 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 00:47:16.079917 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 28 00:47:16.079922 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 28 00:47:16.079927 kernel: PCI: CLS 0 bytes, default 64
Jan 28 00:47:16.079931 kernel: kvm [1]: HYP mode not available
Jan 28 00:47:16.079937 kernel: Initialise system trusted keyrings
Jan 28 00:47:16.079942 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 28 00:47:16.079946 kernel: Key type asymmetric registered
Jan 28 00:47:16.079951 kernel: Asymmetric key parser 'x509' registered
Jan 28 00:47:16.079956 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jan 28 00:47:16.079961 kernel: io scheduler mq-deadline registered
Jan 28 00:47:16.079965 kernel: io scheduler kyber registered
Jan 28 00:47:16.079970 kernel: io scheduler bfq registered
Jan 28 00:47:16.079975 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 28 00:47:16.079980 kernel: thunder_xcv, ver 1.0
Jan 28 00:47:16.079985 kernel: thunder_bgx, ver 1.0
Jan 28 00:47:16.079990 kernel: nicpf, ver 1.0
Jan 28 00:47:16.079994 kernel: nicvf, ver 1.0
Jan 28 00:47:16.080109 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 28 00:47:16.080160 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-28T00:47:15 UTC (1769561235)
Jan 28 00:47:16.080166 kernel: efifb: probing for efifb
Jan 28 00:47:16.080173 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 28 00:47:16.080178 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 28 00:47:16.080182 kernel: efifb: scrolling: redraw
Jan 28 00:47:16.080187 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 28 00:47:16.080192 kernel: Console: switching to colour frame buffer device 128x48
Jan 28 00:47:16.080197 kernel: fb0: EFI VGA frame buffer device
Jan 28 00:47:16.080201 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 28 00:47:16.080206 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 28 00:47:16.080211 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jan 28 00:47:16.080217 kernel: watchdog: NMI not fully supported
Jan 28 00:47:16.080222 kernel: watchdog: Hard watchdog permanently disabled
Jan 28 00:47:16.080227 kernel: NET: Registered PF_INET6 protocol family
Jan 28 00:47:16.080231 kernel: Segment Routing with IPv6
Jan 28 00:47:16.080236 kernel: In-situ OAM (IOAM) with IPv6
Jan 28 00:47:16.080241 kernel: NET: Registered PF_PACKET protocol family
Jan 28 00:47:16.080246 kernel: Key type dns_resolver registered
Jan 28 00:47:16.080250 kernel: registered taskstats version 1
Jan 28 00:47:16.080255 kernel: Loading compiled-in X.509 certificates
Jan 28 00:47:16.080260 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 79637fe16a8be85dde8ec0d00305a4ac90a53e25'
Jan 28 00:47:16.080266 kernel: Demotion targets for Node 0: null
Jan 28 00:47:16.080301 kernel: Key type .fscrypt registered
Jan 28 00:47:16.080306 kernel: Key type fscrypt-provisioning registered
Jan 28 00:47:16.080311 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 28 00:47:16.080315 kernel: ima: Allocated hash algorithm: sha1
Jan 28 00:47:16.080320 kernel: ima: No architecture policies found
Jan 28 00:47:16.080325 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 28 00:47:16.080330 kernel: clk: Disabling unused clocks
Jan 28 00:47:16.080334 kernel: PM: genpd: Disabling unused power domains
Jan 28 00:47:16.080341 kernel: Warning: unable to open an initial console.
Jan 28 00:47:16.080345 kernel: Freeing unused kernel memory: 39552K
Jan 28 00:47:16.080350 kernel: Run /init as init process
Jan 28 00:47:16.080355 kernel: with arguments:
Jan 28 00:47:16.080360 kernel: /init
Jan 28 00:47:16.080364 kernel: with environment:
Jan 28 00:47:16.080369 kernel: HOME=/
Jan 28 00:47:16.080373 kernel: TERM=linux
Jan 28 00:47:16.080379 systemd[1]: Successfully made /usr/ read-only.
Jan 28 00:47:16.080387 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 28 00:47:16.080393 systemd[1]: Detected virtualization microsoft.
Jan 28 00:47:16.080398 systemd[1]: Detected architecture arm64.
Jan 28 00:47:16.080403 systemd[1]: Running in initrd.
Jan 28 00:47:16.080408 systemd[1]: No hostname configured, using default hostname.
Jan 28 00:47:16.080414 systemd[1]: Hostname set to .
Jan 28 00:47:16.080419 systemd[1]: Initializing machine ID from random generator.
Jan 28 00:47:16.080425 systemd[1]: Queued start job for default target initrd.target.
Jan 28 00:47:16.080430 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 00:47:16.080435 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 00:47:16.080441 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 28 00:47:16.080446 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 00:47:16.080451 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 28 00:47:16.080457 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 28 00:47:16.080464 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 28 00:47:16.080470 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 28 00:47:16.080475 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 00:47:16.080480 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 00:47:16.080485 systemd[1]: Reached target paths.target - Path Units.
Jan 28 00:47:16.080490 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 00:47:16.080495 systemd[1]: Reached target swap.target - Swaps.
Jan 28 00:47:16.080500 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 00:47:16.080506 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 00:47:16.080512 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 00:47:16.080517 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 28 00:47:16.080522 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 28 00:47:16.080528 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 00:47:16.080533 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 00:47:16.080538 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 00:47:16.080543 systemd[1]: Reached target sockets.target - Socket Units.
Jan 28 00:47:16.080549 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 28 00:47:16.080555 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 00:47:16.080560 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 28 00:47:16.080565 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 28 00:47:16.080571 systemd[1]: Starting systemd-fsck-usr.service...
Jan 28 00:47:16.080576 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 00:47:16.080581 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 00:47:16.080600 systemd-journald[225]: Collecting audit messages is disabled.
Jan 28 00:47:16.080615 systemd-journald[225]: Journal started
Jan 28 00:47:16.080628 systemd-journald[225]: Runtime Journal (/run/log/journal/09931d915d0d4a3793fc29ce1c06bca9) is 8M, max 78.3M, 70.3M free.
Jan 28 00:47:16.086307 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 00:47:16.091386 systemd-modules-load[227]: Inserted module 'overlay'
Jan 28 00:47:16.111290 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 28 00:47:16.119047 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 00:47:16.119089 kernel: Bridge firewalling registered
Jan 28 00:47:16.119150 systemd-modules-load[227]: Inserted module 'br_netfilter'
Jan 28 00:47:16.124663 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 28 00:47:16.129854 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 00:47:16.147806 systemd[1]: Finished systemd-fsck-usr.service.
Jan 28 00:47:16.156409 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 00:47:16.164664 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 00:47:16.176330 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 00:47:16.200804 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 00:47:16.212366 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 28 00:47:16.229413 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 28 00:47:16.240953 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 00:47:16.243011 systemd-tmpfiles[256]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 28 00:47:16.254271 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 00:47:16.263794 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 00:47:16.274056 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 00:47:16.289395 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 28 00:47:16.309052 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 00:47:16.325293 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=f94df361d6ccbf6d3bccdda215ef8c4de18f0915f7435d65b20126d9bf4aaef1
Jan 28 00:47:16.330407 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 00:47:16.376928 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 00:47:16.396501 systemd-resolved[265]: Positive Trust Anchors:
Jan 28 00:47:16.396515 systemd-resolved[265]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 00:47:16.396541 systemd-resolved[265]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 00:47:16.398198 systemd-resolved[265]: Defaulting to hostname 'linux'.
Jan 28 00:47:16.399985 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 00:47:16.406428 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 00:47:16.488295 kernel: SCSI subsystem initialized
Jan 28 00:47:16.494300 kernel: Loading iSCSI transport class v2.0-870.
Jan 28 00:47:16.502300 kernel: iscsi: registered transport (tcp)
Jan 28 00:47:16.515126 kernel: iscsi: registered transport (qla4xxx)
Jan 28 00:47:16.515137 kernel: QLogic iSCSI HBA Driver
Jan 28 00:47:16.529398 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 28 00:47:16.553041 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 28 00:47:16.560305 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 28 00:47:16.611696 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 28 00:47:16.618400 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 28 00:47:16.674297 kernel: raid6: neonx8 gen() 18541 MB/s
Jan 28 00:47:16.693280 kernel: raid6: neonx4 gen() 18568 MB/s
Jan 28 00:47:16.712293 kernel: raid6: neonx2 gen() 17074 MB/s
Jan 28 00:47:16.732281 kernel: raid6: neonx1 gen() 15004 MB/s
Jan 28 00:47:16.751277 kernel: raid6: int64x8 gen() 10560 MB/s
Jan 28 00:47:16.770277 kernel: raid6: int64x4 gen() 10612 MB/s
Jan 28 00:47:16.790278 kernel: raid6: int64x2 gen() 8980 MB/s
Jan 28 00:47:16.812391 kernel: raid6: int64x1 gen() 7018 MB/s
Jan 28 00:47:16.812399 kernel: raid6: using algorithm neonx4 gen() 18568 MB/s
Jan 28 00:47:16.835378 kernel: raid6: .... xor() 15149 MB/s, rmw enabled
Jan 28 00:47:16.835437 kernel: raid6: using neon recovery algorithm
Jan 28 00:47:16.844236 kernel: xor: measuring software checksum speed
Jan 28 00:47:16.844323 kernel: 8regs : 28619 MB/sec
Jan 28 00:47:16.847828 kernel: 32regs : 28755 MB/sec
Jan 28 00:47:16.850310 kernel: arm64_neon : 37600 MB/sec
Jan 28 00:47:16.853368 kernel: xor: using function: arm64_neon (37600 MB/sec)
Jan 28 00:47:16.892299 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 28 00:47:16.897370 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 00:47:16.907427 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 00:47:16.933864 systemd-udevd[476]: Using default interface naming scheme 'v255'.
Jan 28 00:47:16.936817 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 00:47:16.946340 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 28 00:47:16.978691 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation
Jan 28 00:47:16.998932 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 00:47:17.010154 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 00:47:17.057880 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 00:47:17.070257 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 28 00:47:17.130304 kernel: hv_vmbus: Vmbus version:5.3
Jan 28 00:47:17.140553 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 00:47:17.161812 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 28 00:47:17.161847 kernel: hv_vmbus: registering driver hv_storvsc
Jan 28 00:47:17.161854 kernel: hv_vmbus: registering driver hv_netvsc
Jan 28 00:47:17.161861 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 28 00:47:17.161868 kernel: scsi host0: storvsc_host_t
Jan 28 00:47:17.140678 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 00:47:17.188019 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 28 00:47:17.188062 kernel: scsi host1: storvsc_host_t
Jan 28 00:47:17.188194 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 28 00:47:17.188201 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 28 00:47:17.176381 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 00:47:17.216453 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 28 00:47:17.216482 kernel: hv_vmbus: registering driver hid_hyperv
Jan 28 00:47:17.216489 kernel: PTP clock support registered
Jan 28 00:47:17.203636 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 00:47:17.250495 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 28 00:47:17.250516 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 28 00:47:17.250656 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 28 00:47:17.250097 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 28 00:47:17.269566 kernel: hv_utils: Registering HyperV Utility Driver
Jan 28 00:47:17.269585 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 28 00:47:17.269731 kernel: hv_vmbus: registering driver hv_utils
Jan 28 00:47:17.269738 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 28 00:47:17.269805 kernel: hv_utils: Heartbeat IC version 3.0
Jan 28 00:47:17.251523 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 00:47:17.358379 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 28 00:47:17.358538 kernel: hv_utils: Shutdown IC version 3.2
Jan 28 00:47:17.358546 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 28 00:47:17.358642 kernel: hv_netvsc 7ced8d87-be92-7ced-8d87-be927ced8d87 eth0: VF slot 1 added
Jan 28 00:47:17.358721 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#261 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 28 00:47:17.358782 kernel: hv_utils: TimeSync IC version 4.0
Jan 28 00:47:17.358788 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#268 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 28 00:47:17.251595 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 00:47:17.344754 systemd-resolved[265]: Clock change detected. Flushing caches.
Jan 28 00:47:17.349287 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 00:47:17.377083 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 28 00:47:17.382927 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 28 00:47:17.393937 kernel: hv_vmbus: registering driver hv_pci
Jan 28 00:47:17.393989 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 28 00:47:17.394133 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 28 00:47:17.399659 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 00:47:17.413725 kernel: hv_pci 8a381bd0-4cb5-477a-b64d-f849334ae1e0: PCI VMBus probing: Using version 0x10004
Jan 28 00:47:17.413897 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 28 00:47:17.413999 kernel: hv_pci 8a381bd0-4cb5-477a-b64d-f849334ae1e0: PCI host bridge to bus 4cb5:00
Jan 28 00:47:17.424346 kernel: pci_bus 4cb5:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 28 00:47:17.429379 kernel: pci_bus 4cb5:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 28 00:47:17.439442 kernel: pci 4cb5:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Jan 28 00:47:17.451619 kernel: pci 4cb5:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 28 00:47:17.451657 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#46 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 28 00:47:17.456968 kernel: pci 4cb5:00:02.0: enabling Extended Tags
Jan 28 00:47:17.473964 kernel: pci 4cb5:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 4cb5:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Jan 28 00:47:17.483181 kernel: pci_bus 4cb5:00: busn_res: [bus 00-ff] end is updated to 00
Jan 28 00:47:17.483392 kernel: pci 4cb5:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Jan 28 00:47:17.494931 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#283 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 28 00:47:17.550625 kernel: mlx5_core 4cb5:00:02.0: enabling device (0000 -> 0002)
Jan 28 00:47:17.558912 kernel: mlx5_core 4cb5:00:02.0: PTM is not supported by PCIe
Jan 28 00:47:17.559040 kernel: mlx5_core 4cb5:00:02.0: firmware version: 16.30.5026
Jan 28 00:47:17.738679 kernel: hv_netvsc 7ced8d87-be92-7ced-8d87-be927ced8d87 eth0: VF registering: eth1
Jan 28 00:47:17.738904 kernel: mlx5_core 4cb5:00:02.0 eth1: joined to eth0
Jan 28 00:47:17.746925 kernel: mlx5_core 4cb5:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 28 00:47:17.759932 kernel: mlx5_core 4cb5:00:02.0 enP19637s1: renamed from eth1
Jan 28 00:47:17.913298 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 28 00:47:18.025755 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 28 00:47:18.047206 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 28 00:47:18.452560 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 28 00:47:18.458231 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 28 00:47:18.470927 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 28 00:47:18.481063 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 00:47:18.489766 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 00:47:18.499853 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 00:47:18.509554 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 28 00:47:18.538634 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 28 00:47:18.561364 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 00:47:18.574080 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#286 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 28 00:47:18.580929 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 28 00:47:19.593781 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#25 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Jan 28 00:47:19.607867 disk-uuid[665]: The operation has completed successfully.
Jan 28 00:47:19.612930 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 28 00:47:19.680005 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 28 00:47:19.683068 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 28 00:47:19.709677 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 28 00:47:19.729164 sh[823]: Success
Jan 28 00:47:19.764868 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 28 00:47:19.764949 kernel: device-mapper: uevent: version 1.0.3
Jan 28 00:47:19.770121 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 28 00:47:19.780937 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jan 28 00:47:20.071183 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 28 00:47:20.086088 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 28 00:47:20.090923 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 28 00:47:20.112929 kernel: BTRFS: device fsid a5f8185f-aa1a-4e36-bd3e-ad4fa971117f devid 1 transid 35 /dev/mapper/usr (254:0) scanned by mount (841)
Jan 28 00:47:20.122666 kernel: BTRFS info (device dm-0): first mount of filesystem a5f8185f-aa1a-4e36-bd3e-ad4fa971117f
Jan 28 00:47:20.122702 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 28 00:47:20.427148 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 28 00:47:20.427243 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 28 00:47:20.458953 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 28 00:47:20.463292 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 28 00:47:20.470456 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 28 00:47:20.471180 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 28 00:47:20.491880 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 28 00:47:20.517945 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (864)
Jan 28 00:47:20.529789 kernel: BTRFS info (device sda6): first mount of filesystem cdd8ade3-84ac-4b21-9ebd-f498f4c3bfc9
Jan 28 00:47:20.529857 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 28 00:47:20.559680 kernel: BTRFS info (device sda6): turning on async discard
Jan 28 00:47:20.559747 kernel: BTRFS info (device sda6): enabling free space tree
Jan 28 00:47:20.570983 kernel: BTRFS info (device sda6): last unmount of filesystem cdd8ade3-84ac-4b21-9ebd-f498f4c3bfc9
Jan 28 00:47:20.572619 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 28 00:47:20.578865 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 28 00:47:20.627746 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 28 00:47:20.641340 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 28 00:47:20.676144 systemd-networkd[1010]: lo: Link UP
Jan 28 00:47:20.676156 systemd-networkd[1010]: lo: Gained carrier
Jan 28 00:47:20.676878 systemd-networkd[1010]: Enumeration completed
Jan 28 00:47:20.679178 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 28 00:47:20.679680 systemd-networkd[1010]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 00:47:20.679684 systemd-networkd[1010]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 28 00:47:20.687680 systemd[1]: Reached target network.target - Network.
Jan 28 00:47:20.763932 kernel: mlx5_core 4cb5:00:02.0 enP19637s1: Link up
Jan 28 00:47:20.804929 kernel: hv_netvsc 7ced8d87-be92-7ced-8d87-be927ced8d87 eth0: Data path switched to VF: enP19637s1
Jan 28 00:47:20.804968 systemd-networkd[1010]: enP19637s1: Link UP
Jan 28 00:47:20.805029 systemd-networkd[1010]: eth0: Link UP
Jan 28 00:47:20.805096 systemd-networkd[1010]: eth0: Gained carrier
Jan 28 00:47:20.805109 systemd-networkd[1010]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 28 00:47:20.825132 systemd-networkd[1010]: enP19637s1: Gained carrier
Jan 28 00:47:20.833947 systemd-networkd[1010]: eth0: DHCPv4 address 10.200.20.30/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 28 00:47:21.861057 ignition[949]: Ignition 2.22.0
Jan 28 00:47:21.861069 ignition[949]: Stage: fetch-offline
Jan 28 00:47:21.864358 ignition[949]: no configs at "/usr/lib/ignition/base.d"
Jan 28 00:47:21.868177 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 28 00:47:21.864367 ignition[949]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 00:47:21.877884 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 28 00:47:21.864443 ignition[949]: parsed url from cmdline: ""
Jan 28 00:47:21.864446 ignition[949]: no config URL provided
Jan 28 00:47:21.864450 ignition[949]: reading system config file "/usr/lib/ignition/user.ign"
Jan 28 00:47:21.864456 ignition[949]: no config at "/usr/lib/ignition/user.ign"
Jan 28 00:47:21.864461 ignition[949]: failed to fetch config: resource requires networking
Jan 28 00:47:21.864603 ignition[949]: Ignition finished successfully
Jan 28 00:47:21.911998 ignition[1020]: Ignition 2.22.0
Jan 28 00:47:21.912004 ignition[1020]: Stage: fetch
Jan 28 00:47:21.912203 ignition[1020]: no configs at "/usr/lib/ignition/base.d"
Jan 28 00:47:21.912210 ignition[1020]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 00:47:21.912271 ignition[1020]: parsed url from cmdline: ""
Jan 28 00:47:21.912274 ignition[1020]: no config URL provided
Jan 28 00:47:21.912278 ignition[1020]: reading system config file "/usr/lib/ignition/user.ign"
Jan 28 00:47:21.912284 ignition[1020]: no config at "/usr/lib/ignition/user.ign"
Jan 28 00:47:21.912298 ignition[1020]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 28 00:47:22.032983 ignition[1020]: GET result: OK
Jan 28 00:47:22.033047 ignition[1020]: config has been read from IMDS userdata
Jan 28 00:47:22.036264 unknown[1020]: fetched base config from "system"
Jan 28 00:47:22.033074 ignition[1020]: parsing config with SHA512: 395b5d68d80440d48ea503b8761e4f3b5664bb8c93e43448ec5dd63810786c975f7ac5dfede4c1ee2fd4be058e772c219c2872b243531c85bdc959af432c9d95
Jan 28 00:47:22.036277 unknown[1020]: fetched base config from "system"
Jan 28 00:47:22.036607 ignition[1020]: fetch: fetch complete
Jan 28 00:47:22.036282 unknown[1020]: fetched user config from "azure"
Jan 28 00:47:22.036612 ignition[1020]: fetch: fetch passed
Jan 28 00:47:22.038617 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 28 00:47:22.036661 ignition[1020]: Ignition finished successfully
Jan 28 00:47:22.046396 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 28 00:47:22.095330 ignition[1027]: Ignition 2.22.0
Jan 28 00:47:22.095344 ignition[1027]: Stage: kargs
Jan 28 00:47:22.095532 ignition[1027]: no configs at "/usr/lib/ignition/base.d"
Jan 28 00:47:22.095538 ignition[1027]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 00:47:22.103971 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 28 00:47:22.098933 ignition[1027]: kargs: kargs passed
Jan 28 00:47:22.112517 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 28 00:47:22.098997 ignition[1027]: Ignition finished successfully
Jan 28 00:47:22.143904 ignition[1033]: Ignition 2.22.0
Jan 28 00:47:22.143957 ignition[1033]: Stage: disks
Jan 28 00:47:22.148329 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 28 00:47:22.144138 ignition[1033]: no configs at "/usr/lib/ignition/base.d"
Jan 28 00:47:22.155416 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 28 00:47:22.144145 ignition[1033]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 00:47:22.164130 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 28 00:47:22.144702 ignition[1033]: disks: disks passed
Jan 28 00:47:22.173413 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 28 00:47:22.144749 ignition[1033]: Ignition finished successfully
Jan 28 00:47:22.182871 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 28 00:47:22.191974 systemd[1]: Reached target basic.target - Basic System.
Jan 28 00:47:22.202008 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 28 00:47:22.299090 systemd-fsck[1042]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Jan 28 00:47:22.307999 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 28 00:47:22.314715 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 28 00:47:22.589947 kernel: EXT4-fs (sda9): mounted filesystem e7dac9ee-22c5-4146-a097-e1ea6c8c1663 r/w with ordered data mode. Quota mode: none.
Jan 28 00:47:22.590318 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 28 00:47:22.594473 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 28 00:47:22.629032 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 00:47:22.634049 systemd-networkd[1010]: eth0: Gained IPv6LL
Jan 28 00:47:22.643542 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 28 00:47:22.652938 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 28 00:47:22.665691 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 28 00:47:22.665732 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 00:47:22.672331 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 28 00:47:22.686947 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 28 00:47:22.714978 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1056)
Jan 28 00:47:22.725508 kernel: BTRFS info (device sda6): first mount of filesystem cdd8ade3-84ac-4b21-9ebd-f498f4c3bfc9
Jan 28 00:47:22.725558 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 28 00:47:22.735492 kernel: BTRFS info (device sda6): turning on async discard
Jan 28 00:47:22.735551 kernel: BTRFS info (device sda6): enabling free space tree
Jan 28 00:47:22.736878 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 00:47:23.289472 coreos-metadata[1058]: Jan 28 00:47:23.289 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 28 00:47:23.298156 coreos-metadata[1058]: Jan 28 00:47:23.298 INFO Fetch successful
Jan 28 00:47:23.302603 coreos-metadata[1058]: Jan 28 00:47:23.302 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 28 00:47:23.311421 coreos-metadata[1058]: Jan 28 00:47:23.308 INFO Fetch successful
Jan 28 00:47:23.325444 coreos-metadata[1058]: Jan 28 00:47:23.325 INFO wrote hostname ci-4459.2.3-n-ee3b3e4916 to /sysroot/etc/hostname
Jan 28 00:47:23.332842 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 28 00:47:23.469785 initrd-setup-root[1086]: cut: /sysroot/etc/passwd: No such file or directory
Jan 28 00:47:23.732929 initrd-setup-root[1093]: cut: /sysroot/etc/group: No such file or directory
Jan 28 00:47:23.777415 initrd-setup-root[1100]: cut: /sysroot/etc/shadow: No such file or directory
Jan 28 00:47:23.784741 initrd-setup-root[1107]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 28 00:47:24.811423 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 28 00:47:24.817774 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 28 00:47:24.840670 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 28 00:47:24.851613 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 28 00:47:24.862924 kernel: BTRFS info (device sda6): last unmount of filesystem cdd8ade3-84ac-4b21-9ebd-f498f4c3bfc9
Jan 28 00:47:24.884981 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 28 00:47:24.896562 ignition[1176]: INFO : Ignition 2.22.0
Jan 28 00:47:24.896562 ignition[1176]: INFO : Stage: mount
Jan 28 00:47:24.904195 ignition[1176]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 00:47:24.904195 ignition[1176]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 00:47:24.904195 ignition[1176]: INFO : mount: mount passed
Jan 28 00:47:24.904195 ignition[1176]: INFO : Ignition finished successfully
Jan 28 00:47:24.901000 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 28 00:47:24.909719 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 28 00:47:24.940003 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 28 00:47:24.977365 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1187)
Jan 28 00:47:24.977423 kernel: BTRFS info (device sda6): first mount of filesystem cdd8ade3-84ac-4b21-9ebd-f498f4c3bfc9
Jan 28 00:47:24.982867 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 28 00:47:24.993177 kernel: BTRFS info (device sda6): turning on async discard
Jan 28 00:47:24.993238 kernel: BTRFS info (device sda6): enabling free space tree
Jan 28 00:47:24.994698 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 28 00:47:25.024132 ignition[1205]: INFO : Ignition 2.22.0
Jan 28 00:47:25.024132 ignition[1205]: INFO : Stage: files
Jan 28 00:47:25.030496 ignition[1205]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 00:47:25.030496 ignition[1205]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 00:47:25.030496 ignition[1205]: DEBUG : files: compiled without relabeling support, skipping
Jan 28 00:47:25.045051 ignition[1205]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 28 00:47:25.045051 ignition[1205]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 28 00:47:25.096689 ignition[1205]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 28 00:47:25.103151 ignition[1205]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 28 00:47:25.103151 ignition[1205]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 28 00:47:25.097088 unknown[1205]: wrote ssh authorized keys file for user: core
Jan 28 00:47:25.123388 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 28 00:47:25.131969 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 28 00:47:25.156588 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 28 00:47:25.305889 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 28 00:47:25.305889 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 28 00:47:25.305889 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 28 00:47:25.305889 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 28 00:47:25.305889 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 28 00:47:25.305889 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 28 00:47:25.305889 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 28 00:47:25.305889 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 28 00:47:25.305889 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 28 00:47:25.399296 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 28 00:47:25.399296 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 28 00:47:25.399296 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 28 00:47:25.399296 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 28 00:47:25.399296 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 28 00:47:25.399296 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 28 00:47:25.938990 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 28 00:47:26.269649 ignition[1205]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 28 00:47:26.269649 ignition[1205]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 28 00:47:26.323949 ignition[1205]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 28 00:47:26.339763 ignition[1205]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 28 00:47:26.339763 ignition[1205]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 28 00:47:26.339763 ignition[1205]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 28 00:47:26.370543 ignition[1205]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 28 00:47:26.370543 ignition[1205]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 28 00:47:26.370543 ignition[1205]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 28 00:47:26.370543 ignition[1205]: INFO : files: files passed
Jan 28 00:47:26.370543 ignition[1205]: INFO : Ignition finished successfully
Jan 28 00:47:26.353390 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 28 00:47:26.360675 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 28 00:47:26.396935 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 28 00:47:26.409417 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 28 00:47:26.409515 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 28 00:47:26.436418 initrd-setup-root-after-ignition[1234]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 00:47:26.436418 initrd-setup-root-after-ignition[1234]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 00:47:26.455869 initrd-setup-root-after-ignition[1238]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 28 00:47:26.437113 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 28 00:47:26.448871 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 28 00:47:26.461308 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 28 00:47:26.516554 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 28 00:47:26.516668 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 28 00:47:26.526481 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 28 00:47:26.535881 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 28 00:47:26.544563 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 28 00:47:26.545403 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 28 00:47:26.578954 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 28 00:47:26.586397 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 28 00:47:26.609666 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 28 00:47:26.614854 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 00:47:26.625130 systemd[1]: Stopped target timers.target - Timer Units.
Jan 28 00:47:26.633734 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 28 00:47:26.633853 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 28 00:47:26.647002 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 28 00:47:26.651658 systemd[1]: Stopped target basic.target - Basic System.
Jan 28 00:47:26.660129 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 28 00:47:26.669384 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 28 00:47:26.678071 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 28 00:47:26.687633 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 28 00:47:26.696752 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 28 00:47:26.706715 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 00:47:26.716841 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 28 00:47:26.725612 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 28 00:47:26.735623 systemd[1]: Stopped target swap.target - Swaps.
Jan 28 00:47:26.743543 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 28 00:47:26.743663 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 00:47:26.755431 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 28 00:47:26.760873 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 00:47:26.770414 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 28 00:47:26.774686 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 00:47:26.780587 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 28 00:47:26.780694 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 28 00:47:26.795016 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 28 00:47:26.795116 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 28 00:47:26.800506 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 28 00:47:26.800584 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 28 00:47:26.809847 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 28 00:47:26.888652 ignition[1258]: INFO : Ignition 2.22.0
Jan 28 00:47:26.888652 ignition[1258]: INFO : Stage: umount
Jan 28 00:47:26.888652 ignition[1258]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 28 00:47:26.888652 ignition[1258]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 28 00:47:26.888652 ignition[1258]: INFO : umount: umount passed
Jan 28 00:47:26.888652 ignition[1258]: INFO : Ignition finished successfully
Jan 28 00:47:26.809927 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 28 00:47:26.826095 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 28 00:47:26.841570 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 28 00:47:26.841735 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 00:47:26.854287 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 28 00:47:26.861230 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 28 00:47:26.861360 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 00:47:26.887116 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 28 00:47:26.887224 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 00:47:26.897469 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 28 00:47:26.897556 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 00:47:26.906020 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 00:47:26.906110 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 00:47:26.916329 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 00:47:26.916379 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 00:47:26.921095 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 28 00:47:26.921129 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 28 00:47:26.931306 systemd[1]: Stopped target network.target - Network. Jan 28 00:47:26.939142 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 00:47:26.939203 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 00:47:26.944541 systemd[1]: Stopped target paths.target - Path Units. Jan 28 00:47:26.948693 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 00:47:26.949245 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:47:26.959380 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 00:47:26.963284 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 00:47:26.977819 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 00:47:26.977871 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 00:47:26.987163 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 00:47:26.987215 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 00:47:26.996695 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 00:47:26.996752 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 00:47:27.009751 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Jan 28 00:47:27.009803 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 00:47:27.018920 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 00:47:27.027449 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 00:47:27.037337 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 00:47:27.037818 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 00:47:27.037898 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 00:47:27.269159 kernel: hv_netvsc 7ced8d87-be92-7ced-8d87-be927ced8d87 eth0: Data path switched from VF: enP19637s1 Jan 28 00:47:27.051801 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 00:47:27.051883 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 00:47:27.068054 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 28 00:47:27.068248 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 00:47:27.068350 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 00:47:27.086485 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 28 00:47:27.088971 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 28 00:47:27.095506 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 00:47:27.095554 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:47:27.115041 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 00:47:27.123299 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 00:47:27.123371 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 00:47:27.133894 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 28 00:47:27.133960 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:47:27.148632 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 00:47:27.148671 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 00:47:27.154158 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 00:47:27.154208 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:47:27.164537 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:47:27.173055 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 28 00:47:27.173121 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 28 00:47:27.200221 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 00:47:27.201277 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:47:27.210455 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 00:47:27.210494 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 00:47:27.219570 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 00:47:27.219595 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:47:27.229249 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 00:47:27.229307 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 00:47:27.248139 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 00:47:27.248204 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 00:47:27.256550 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 00:47:27.256597 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 28 00:47:27.263121 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 00:47:27.273503 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 28 00:47:27.273590 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 00:47:27.287598 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 00:47:27.287653 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:47:27.302721 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:47:27.302794 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:47:27.318696 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 28 00:47:27.318747 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 28 00:47:27.318777 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 28 00:47:27.537758 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). Jan 28 00:47:27.319084 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 00:47:27.319185 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 00:47:27.327863 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 00:47:27.327992 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 00:47:27.339992 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 00:47:27.340114 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 00:47:27.400419 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 00:47:27.400555 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 00:47:27.410593 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Jan 28 00:47:27.421085 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 00:47:27.450682 systemd[1]: Switching root. Jan 28 00:47:27.583410 systemd-journald[225]: Journal stopped Jan 28 00:47:32.556644 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 00:47:32.556662 kernel: SELinux: policy capability open_perms=1 Jan 28 00:47:32.556670 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 00:47:32.556675 kernel: SELinux: policy capability always_check_network=0 Jan 28 00:47:32.556680 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 00:47:32.556687 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 00:47:32.556693 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 00:47:32.556699 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 00:47:32.556704 kernel: SELinux: policy capability userspace_initial_context=0 Jan 28 00:47:32.556709 kernel: audit: type=1403 audit(1769561248.876:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 28 00:47:32.556716 systemd[1]: Successfully loaded SELinux policy in 250.630ms. Jan 28 00:47:32.556724 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.499ms. Jan 28 00:47:32.556731 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 00:47:32.556737 systemd[1]: Detected virtualization microsoft. Jan 28 00:47:32.556743 systemd[1]: Detected architecture arm64. Jan 28 00:47:32.556749 systemd[1]: Detected first boot. Jan 28 00:47:32.556757 systemd[1]: Hostname set to . Jan 28 00:47:32.556763 systemd[1]: Initializing machine ID from random generator. Jan 28 00:47:32.556769 zram_generator::config[1301]: No configuration found. 
Jan 28 00:47:32.556776 kernel: NET: Registered PF_VSOCK protocol family Jan 28 00:47:32.556782 systemd[1]: Populated /etc with preset unit settings. Jan 28 00:47:32.556788 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 28 00:47:32.556794 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 28 00:47:32.556801 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 28 00:47:32.556807 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 28 00:47:32.556813 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 00:47:32.556819 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 00:47:32.556825 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 00:47:32.556831 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 00:47:32.556837 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 00:47:32.556844 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 00:47:32.556850 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 00:47:32.556856 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 00:47:32.556862 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:47:32.556868 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:47:32.556874 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 00:47:32.556880 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 00:47:32.556887 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 28 00:47:32.556894 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 00:47:32.556900 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 28 00:47:32.556923 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:47:32.556931 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:47:32.556937 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 28 00:47:32.556943 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 28 00:47:32.556949 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 28 00:47:32.556955 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 00:47:32.556962 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:47:32.556968 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 00:47:32.556974 systemd[1]: Reached target slices.target - Slice Units. Jan 28 00:47:32.556980 systemd[1]: Reached target swap.target - Swaps. Jan 28 00:47:32.556986 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 00:47:32.556993 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 00:47:32.557000 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 28 00:47:32.557006 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:47:32.557013 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 00:47:32.557019 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:47:32.557025 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 00:47:32.557031 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 28 00:47:32.557037 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 00:47:32.557044 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 00:47:32.557050 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 00:47:32.557057 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 00:47:32.557064 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 00:47:32.557070 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 00:47:32.557076 systemd[1]: Reached target machines.target - Containers. Jan 28 00:47:32.557083 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 00:47:32.557089 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:47:32.557096 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 00:47:32.557102 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 00:47:32.557108 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:47:32.557115 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 00:47:32.557121 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:47:32.557127 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 00:47:32.557133 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:47:32.557140 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 00:47:32.557146 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jan 28 00:47:32.557153 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 28 00:47:32.557160 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 28 00:47:32.557166 systemd[1]: Stopped systemd-fsck-usr.service. Jan 28 00:47:32.557172 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 00:47:32.557178 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 00:47:32.557184 kernel: loop: module loaded Jan 28 00:47:32.557190 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 00:47:32.557197 kernel: fuse: init (API version 7.41) Jan 28 00:47:32.557204 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 00:47:32.557222 systemd-journald[1391]: Collecting audit messages is disabled. Jan 28 00:47:32.557237 systemd-journald[1391]: Journal started Jan 28 00:47:32.557252 systemd-journald[1391]: Runtime Journal (/run/log/journal/329c78c0aff94a19bb37414b90c8e62d) is 8M, max 78.3M, 70.3M free. Jan 28 00:47:31.812116 systemd[1]: Queued start job for default target multi-user.target. Jan 28 00:47:31.826488 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 28 00:47:31.826938 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 28 00:47:31.827206 systemd[1]: systemd-journald.service: Consumed 2.583s CPU time. Jan 28 00:47:32.565929 kernel: ACPI: bus type drm_connector registered Jan 28 00:47:32.579743 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 00:47:32.593160 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Jan 28 00:47:32.606078 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 00:47:32.620411 systemd[1]: verity-setup.service: Deactivated successfully. Jan 28 00:47:32.620471 systemd[1]: Stopped verity-setup.service. Jan 28 00:47:32.632492 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 00:47:32.633273 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 00:47:32.638305 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 00:47:32.643614 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 00:47:32.648489 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 00:47:32.653981 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 00:47:32.660078 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 00:47:32.665258 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 00:47:32.670868 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:47:32.677013 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 00:47:32.677215 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 00:47:32.683031 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:47:32.683235 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:47:32.688388 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 00:47:32.688580 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 00:47:32.694023 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:47:32.694226 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:47:32.700138 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 28 00:47:32.700336 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 00:47:32.705649 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:47:32.705878 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:47:32.711791 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 00:47:32.716736 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 00:47:32.723617 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 00:47:32.729767 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 28 00:47:32.735652 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:47:32.751134 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 00:47:32.757876 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 00:47:32.773012 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 00:47:32.778139 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 00:47:32.778171 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 00:47:32.784104 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 28 00:47:32.790554 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 00:47:32.795376 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:47:32.798059 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 00:47:32.804061 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 28 00:47:32.809064 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 00:47:32.809955 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 00:47:32.815800 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 00:47:32.816718 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 00:47:32.824052 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 00:47:32.839389 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 00:47:32.846799 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 00:47:32.852491 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 00:47:32.861004 systemd-journald[1391]: Time spent on flushing to /var/log/journal/329c78c0aff94a19bb37414b90c8e62d is 11.569ms for 934 entries. Jan 28 00:47:32.861004 systemd-journald[1391]: System Journal (/var/log/journal/329c78c0aff94a19bb37414b90c8e62d) is 8M, max 2.6G, 2.6G free. Jan 28 00:47:32.899797 systemd-journald[1391]: Received client request to flush runtime journal. Jan 28 00:47:32.861110 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 00:47:32.874680 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 00:47:32.884075 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 28 00:47:32.901414 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 00:47:32.914928 kernel: loop0: detected capacity change from 0 to 27936 Jan 28 00:47:32.944422 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 28 00:47:32.956740 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 00:47:32.958072 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 28 00:47:33.015938 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 00:47:33.022670 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 00:47:33.076244 systemd-tmpfiles[1456]: ACLs are not supported, ignoring. Jan 28 00:47:33.076586 systemd-tmpfiles[1456]: ACLs are not supported, ignoring. Jan 28 00:47:33.079462 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:47:33.324954 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 00:47:33.401537 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 00:47:33.409112 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:47:33.448462 systemd-udevd[1462]: Using default interface naming scheme 'v255'. Jan 28 00:47:33.453928 kernel: loop1: detected capacity change from 0 to 100632 Jan 28 00:47:33.652267 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:47:33.663192 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 00:47:33.706370 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 00:47:33.742485 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 28 00:47:33.814938 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 00:47:33.844521 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 28 00:47:33.869013 kernel: hv_vmbus: registering driver hv_balloon Jan 28 00:47:33.869104 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#259 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 28 00:47:33.879304 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 28 00:47:33.879390 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 28 00:47:33.883926 kernel: hv_vmbus: registering driver hyperv_fb Jan 28 00:47:33.913824 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 28 00:47:33.919519 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 28 00:47:33.924434 kernel: Console: switching to colour dummy device 80x25 Jan 28 00:47:33.932414 kernel: Console: switching to colour frame buffer device 128x48 Jan 28 00:47:33.954144 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:47:33.964579 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:47:33.964745 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:47:33.973598 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:47:34.083752 systemd-networkd[1473]: lo: Link UP Jan 28 00:47:34.084052 systemd-networkd[1473]: lo: Gained carrier Jan 28 00:47:34.085825 systemd-networkd[1473]: Enumeration completed Jan 28 00:47:34.087050 systemd-networkd[1473]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:47:34.087136 systemd-networkd[1473]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 00:47:34.089202 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 00:47:34.092927 kernel: MACsec IEEE 802.1AE Jan 28 00:47:34.097459 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Jan 28 00:47:34.104029 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 00:47:34.111083 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 28 00:47:34.119040 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 00:47:34.129183 kernel: loop2: detected capacity change from 0 to 119840 Jan 28 00:47:34.172927 kernel: mlx5_core 4cb5:00:02.0 enP19637s1: Link up Jan 28 00:47:34.176657 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 00:47:34.200943 kernel: hv_netvsc 7ced8d87-be92-7ced-8d87-be927ced8d87 eth0: Data path switched to VF: enP19637s1 Jan 28 00:47:34.201753 systemd-networkd[1473]: enP19637s1: Link UP Jan 28 00:47:34.201930 systemd-networkd[1473]: eth0: Link UP Jan 28 00:47:34.201933 systemd-networkd[1473]: eth0: Gained carrier Jan 28 00:47:34.201952 systemd-networkd[1473]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:47:34.203648 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jan 28 00:47:34.209542 systemd-networkd[1473]: enP19637s1: Gained carrier Jan 28 00:47:34.217991 systemd-networkd[1473]: eth0: DHCPv4 address 10.200.20.30/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 28 00:47:34.540930 kernel: loop3: detected capacity change from 0 to 207008 Jan 28 00:47:34.588940 kernel: loop4: detected capacity change from 0 to 27936 Jan 28 00:47:34.608939 kernel: loop5: detected capacity change from 0 to 100632 Jan 28 00:47:34.623928 kernel: loop6: detected capacity change from 0 to 119840 Jan 28 00:47:34.639984 kernel: loop7: detected capacity change from 0 to 207008 Jan 28 00:47:34.641400 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:47:34.654897 (sd-merge)[1608]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 28 00:47:34.655983 (sd-merge)[1608]: Merged extensions into '/usr'. Jan 28 00:47:34.659143 systemd[1]: Reload requested from client PID 1440 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 00:47:34.659435 systemd[1]: Reloading... Jan 28 00:47:34.710941 zram_generator::config[1639]: No configuration found. Jan 28 00:47:34.893039 systemd[1]: Reloading finished in 232 ms. Jan 28 00:47:34.915095 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 00:47:34.929974 systemd[1]: Starting ensure-sysext.service... Jan 28 00:47:34.934126 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 00:47:34.961029 systemd[1]: Reload requested from client PID 1694 ('systemctl') (unit ensure-sysext.service)... Jan 28 00:47:34.961043 systemd[1]: Reloading... Jan 28 00:47:34.976476 systemd-tmpfiles[1695]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 28 00:47:34.977100 systemd-tmpfiles[1695]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Jan 28 00:47:34.977340 systemd-tmpfiles[1695]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 00:47:34.977474 systemd-tmpfiles[1695]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 28 00:47:34.977899 systemd-tmpfiles[1695]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 28 00:47:34.978720 systemd-tmpfiles[1695]: ACLs are not supported, ignoring. Jan 28 00:47:34.978758 systemd-tmpfiles[1695]: ACLs are not supported, ignoring. Jan 28 00:47:35.013041 systemd-tmpfiles[1695]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 00:47:35.013187 systemd-tmpfiles[1695]: Skipping /boot Jan 28 00:47:35.019611 systemd-tmpfiles[1695]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 00:47:35.019729 systemd-tmpfiles[1695]: Skipping /boot Jan 28 00:47:35.029955 zram_generator::config[1732]: No configuration found. Jan 28 00:47:35.184281 systemd[1]: Reloading finished in 223 ms. Jan 28 00:47:35.205529 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:47:35.226478 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 00:47:35.237147 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 00:47:35.258464 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 00:47:35.267108 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 00:47:35.275118 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 00:47:35.282185 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:47:35.284022 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 28 00:47:35.291803 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 28 00:47:35.304223 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 28 00:47:35.310367 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 28 00:47:35.310481 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 28 00:47:35.313595 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 28 00:47:35.313757 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 28 00:47:35.319859 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 28 00:47:35.325418 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 28 00:47:35.331709 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 28 00:47:35.331844 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 28 00:47:35.342015 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 28 00:47:35.343116 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 28 00:47:35.365172 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 28 00:47:35.379143 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 28 00:47:35.384488 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 28 00:47:35.384736 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 28 00:47:35.386598 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 28 00:47:35.386749 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 28 00:47:35.393085 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 28 00:47:35.393222 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 28 00:47:35.399446 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 28 00:47:35.405719 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 28 00:47:35.405850 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 28 00:47:35.415288 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 28 00:47:35.421715 systemd-resolved[1787]: Positive Trust Anchors:
Jan 28 00:47:35.421732 systemd-resolved[1787]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 00:47:35.421752 systemd-resolved[1787]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 00:47:35.430606 systemd[1]: Finished ensure-sysext.service.
Jan 28 00:47:35.435253 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 28 00:47:35.436344 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 28 00:47:35.442252 systemd-resolved[1787]: Using system hostname 'ci-4459.2.3-n-ee3b3e4916'.
Jan 28 00:47:35.446477 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 28 00:47:35.454055 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 28 00:47:35.464632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 28 00:47:35.471317 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 28 00:47:35.471356 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 28 00:47:35.471392 systemd[1]: Reached target time-set.target - System Time Set.
Jan 28 00:47:35.476233 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 00:47:35.481485 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 28 00:47:35.481679 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 28 00:47:35.487480 augenrules[1827]: No rules
Jan 28 00:47:35.488091 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 28 00:47:35.488249 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 28 00:47:35.493452 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 28 00:47:35.493617 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 28 00:47:35.498556 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 28 00:47:35.498685 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 28 00:47:35.505284 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 28 00:47:35.505444 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 28 00:47:35.513182 systemd[1]: Reached target network.target - Network.
Jan 28 00:47:35.517209 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 00:47:35.522742 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 28 00:47:35.522821 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 28 00:47:35.624039 systemd-networkd[1473]: eth0: Gained IPv6LL
Jan 28 00:47:35.626312 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 28 00:47:35.631876 systemd[1]: Reached target network-online.target - Network is Online.
Jan 28 00:47:36.339279 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 28 00:47:36.344808 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 28 00:47:43.001680 ldconfig[1435]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 28 00:47:43.021216 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 28 00:47:43.028254 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 28 00:47:43.046004 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 28 00:47:43.051375 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 28 00:47:43.056467 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 28 00:47:43.063122 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 28 00:47:43.070372 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 28 00:47:43.076073 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 28 00:47:43.082776 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 28 00:47:43.089142 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 28 00:47:43.089179 systemd[1]: Reached target paths.target - Path Units.
Jan 28 00:47:43.093978 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 00:47:43.316383 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 28 00:47:43.322631 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 28 00:47:43.328755 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 28 00:47:43.334825 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 28 00:47:43.341255 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 28 00:47:43.348033 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 28 00:47:43.353097 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 28 00:47:43.359836 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 28 00:47:43.364731 systemd[1]: Reached target sockets.target - Socket Units.
Jan 28 00:47:43.369597 systemd[1]: Reached target basic.target - Basic System.
Jan 28 00:47:43.374226 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 28 00:47:43.374255 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 28 00:47:43.605575 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 28 00:47:43.618020 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 28 00:47:43.626169 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 28 00:47:43.635111 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 28 00:47:43.649185 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 28 00:47:43.655662 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 28 00:47:43.667760 jq[1852]: false
Jan 28 00:47:43.662761 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 28 00:47:43.669340 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 28 00:47:43.671029 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 28 00:47:43.676296 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 28 00:47:43.681688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 28 00:47:43.681288 KVP[1854]: KVP starting; pid is:1854
Jan 28 00:47:43.687562 chronyd[1844]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Jan 28 00:47:43.689583 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 28 00:47:43.697194 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 28 00:47:43.713057 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 28 00:47:43.718388 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 28 00:47:43.726094 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 28 00:47:43.733237 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 28 00:47:43.739770 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 28 00:47:43.743174 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 28 00:47:43.743726 systemd[1]: Starting update-engine.service - Update Engine...
Jan 28 00:47:43.750220 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 28 00:47:43.758290 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 28 00:47:43.758547 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 28 00:47:43.763802 jq[1871]: true
Jan 28 00:47:43.768951 chronyd[1844]: Timezone right/UTC failed leap second check, ignoring
Jan 28 00:47:43.769696 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 28 00:47:43.769646 chronyd[1844]: Loaded seccomp filter (level 2)
Jan 28 00:47:43.775794 kernel: hv_utils: KVP IC version 4.0
Jan 28 00:47:43.775520 KVP[1854]: KVP LIC Version: 3.1
Jan 28 00:47:43.779350 systemd[1]: Started chronyd.service - NTP client/server.
Jan 28 00:47:43.783745 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 28 00:47:43.787388 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 28 00:47:43.793712 systemd[1]: motdgen.service: Deactivated successfully.
Jan 28 00:47:43.793880 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 28 00:47:43.812179 (ntainerd)[1882]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 28 00:47:43.814611 jq[1881]: true
Jan 28 00:47:44.130466 extend-filesystems[1853]: Found /dev/sda6
Jan 28 00:47:44.223982 extend-filesystems[1853]: Found /dev/sda9
Jan 28 00:47:44.227615 extend-filesystems[1853]: Checking size of /dev/sda9
Jan 28 00:47:44.227984 systemd-logind[1867]: New seat seat0.
Jan 28 00:47:44.233244 systemd-logind[1867]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 28 00:47:44.233402 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 28 00:47:44.247490 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 28 00:47:44.247846 tar[1874]: linux-arm64/LICENSE
Jan 28 00:47:44.248229 tar[1874]: linux-arm64/helm
Jan 28 00:47:44.414091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 28 00:47:44.423973 (kubelet)[1930]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 28 00:47:44.431322 update_engine[1870]: I20260128 00:47:44.431245 1870 main.cc:92] Flatcar Update Engine starting
Jan 28 00:47:44.660877 extend-filesystems[1853]: Old size kept for /dev/sda9
Jan 28 00:47:44.667220 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 28 00:47:44.681520 bash[1907]: Updated "/home/core/.ssh/authorized_keys"
Jan 28 00:47:44.667691 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 28 00:47:44.685596 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 28 00:47:44.696949 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 28 00:47:44.783662 tar[1874]: linux-arm64/README.md
Jan 28 00:47:44.811730 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 28 00:47:44.865844 kubelet[1930]: E0128 00:47:44.865792 1930 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 28 00:47:44.868456 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 28 00:47:44.868713 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 28 00:47:44.869106 systemd[1]: kubelet.service: Consumed 549ms CPU time, 256.8M memory peak.
Jan 28 00:47:45.354578 dbus-daemon[1847]: [system] SELinux support is enabled
Jan 28 00:47:45.359273 update_engine[1870]: I20260128 00:47:45.358636 1870 update_check_scheduler.cc:74] Next update check in 6m58s
Jan 28 00:47:45.354733 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 28 00:47:45.365233 sshd_keygen[1908]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 28 00:47:45.362701 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 28 00:47:45.362720 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 28 00:47:45.371070 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 28 00:47:45.371096 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 28 00:47:45.378589 systemd[1]: Started update-engine.service - Update Engine.
Jan 28 00:47:45.378988 dbus-daemon[1847]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 28 00:47:45.387111 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 28 00:47:45.395952 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 28 00:47:45.405144 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 28 00:47:45.412115 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 28 00:47:45.417894 systemd[1]: issuegen.service: Deactivated successfully.
Jan 28 00:47:45.419999 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 28 00:47:45.429730 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 28 00:47:45.440088 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 28 00:47:45.447024 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 28 00:47:45.454429 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 28 00:47:45.460158 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 28 00:47:45.465396 systemd[1]: Reached target getty.target - Login Prompts.
Jan 28 00:47:45.818025 coreos-metadata[1846]: Jan 28 00:47:45.817 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 28 00:47:45.821734 coreos-metadata[1846]: Jan 28 00:47:45.821 INFO Fetch successful
Jan 28 00:47:45.821734 coreos-metadata[1846]: Jan 28 00:47:45.821 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 28 00:47:45.826791 coreos-metadata[1846]: Jan 28 00:47:45.826 INFO Fetch successful
Jan 28 00:47:45.826791 coreos-metadata[1846]: Jan 28 00:47:45.826 INFO Fetching http://168.63.129.16/machine/a3ef2ef3-1981-4d00-b56c-254f1543a998/c5ad97d6%2Df1fe%2D4f5c%2Db0a4%2Dd94f0931ee8d.%5Fci%2D4459.2.3%2Dn%2Dee3b3e4916?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 28 00:47:45.828880 coreos-metadata[1846]: Jan 28 00:47:45.828 INFO Fetch successful
Jan 28 00:47:45.828880 coreos-metadata[1846]: Jan 28 00:47:45.828 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 28 00:47:45.840914 coreos-metadata[1846]: Jan 28 00:47:45.840 INFO Fetch successful
Jan 28 00:47:45.862885 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 28 00:47:45.868246 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 28 00:47:46.464408 locksmithd[2006]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 28 00:47:46.855078 containerd[1882]: time="2026-01-28T00:47:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 28 00:47:46.855677 containerd[1882]: time="2026-01-28T00:47:46.855640308Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Jan 28 00:47:46.861898 containerd[1882]: time="2026-01-28T00:47:46.860944668Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="40.408µs"
Jan 28 00:47:46.861898 containerd[1882]: time="2026-01-28T00:47:46.861019092Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 28 00:47:46.861898 containerd[1882]: time="2026-01-28T00:47:46.861034484Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 28 00:47:46.861898 containerd[1882]: time="2026-01-28T00:47:46.861185804Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 28 00:47:46.861898 containerd[1882]: time="2026-01-28T00:47:46.861196876Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 28 00:47:46.861898 containerd[1882]: time="2026-01-28T00:47:46.861214660Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 28 00:47:46.861898 containerd[1882]: time="2026-01-28T00:47:46.861251716Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 28 00:47:46.861898 containerd[1882]: time="2026-01-28T00:47:46.861258820Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 28 00:47:46.861898 containerd[1882]: time="2026-01-28T00:47:46.861452516Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 28 00:47:46.861898 containerd[1882]: time="2026-01-28T00:47:46.861462652Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 28 00:47:46.861898 containerd[1882]: time="2026-01-28T00:47:46.861469628Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 28 00:47:46.861898 containerd[1882]: time="2026-01-28T00:47:46.861474636Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 28 00:47:46.862163 containerd[1882]: time="2026-01-28T00:47:46.861524772Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 28 00:47:46.862163 containerd[1882]: time="2026-01-28T00:47:46.861683332Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 28 00:47:46.862163 containerd[1882]: time="2026-01-28T00:47:46.861702820Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 28 00:47:46.862163 containerd[1882]: time="2026-01-28T00:47:46.861709180Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 28 00:47:46.862163 containerd[1882]: time="2026-01-28T00:47:46.861744852Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 28 00:47:46.862163 containerd[1882]: time="2026-01-28T00:47:46.862006916Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 28 00:47:46.862163 containerd[1882]: time="2026-01-28T00:47:46.862103732Z" level=info msg="metadata content store policy set" policy=shared
Jan 28 00:47:46.878574 containerd[1882]: time="2026-01-28T00:47:46.878525044Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 28 00:47:46.878676 containerd[1882]: time="2026-01-28T00:47:46.878599292Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 28 00:47:46.878676 containerd[1882]: time="2026-01-28T00:47:46.878612452Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 28 00:47:46.878676 containerd[1882]: time="2026-01-28T00:47:46.878621124Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 28 00:47:46.878676 containerd[1882]: time="2026-01-28T00:47:46.878629508Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 28 00:47:46.878676 containerd[1882]: time="2026-01-28T00:47:46.878636476Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 28 00:47:46.878676 containerd[1882]: time="2026-01-28T00:47:46.878647060Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 28 00:47:46.878676 containerd[1882]: time="2026-01-28T00:47:46.878655804Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 28 00:47:46.878676 containerd[1882]: time="2026-01-28T00:47:46.878664084Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 28 00:47:46.878676 containerd[1882]: time="2026-01-28T00:47:46.878670548Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 28 00:47:46.878676 containerd[1882]: time="2026-01-28T00:47:46.878676316Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 28 00:47:46.878800 containerd[1882]: time="2026-01-28T00:47:46.878685796Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 28 00:47:46.878858 containerd[1882]: time="2026-01-28T00:47:46.878836284Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 28 00:47:46.878922 containerd[1882]: time="2026-01-28T00:47:46.878859820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 28 00:47:46.878922 containerd[1882]: time="2026-01-28T00:47:46.878870492Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 28 00:47:46.878922 containerd[1882]: time="2026-01-28T00:47:46.878878060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 28 00:47:46.878922 containerd[1882]: time="2026-01-28T00:47:46.878885060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 28 00:47:46.878922 containerd[1882]: time="2026-01-28T00:47:46.878891676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 28 00:47:46.878922 containerd[1882]: time="2026-01-28T00:47:46.878899276Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 28 00:47:46.879105 containerd[1882]: time="2026-01-28T00:47:46.878906588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 28 00:47:46.879105 containerd[1882]: time="2026-01-28T00:47:46.878965956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 28 00:47:46.879105 containerd[1882]: time="2026-01-28T00:47:46.878972924Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 28 00:47:46.879105 containerd[1882]: time="2026-01-28T00:47:46.878987852Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 28 00:47:46.879105 containerd[1882]: time="2026-01-28T00:47:46.879042972Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 28 00:47:46.879105 containerd[1882]: time="2026-01-28T00:47:46.879053812Z" level=info msg="Start snapshots syncer"
Jan 28 00:47:46.879105 containerd[1882]: time="2026-01-28T00:47:46.879073348Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 28 00:47:46.879504 containerd[1882]: time="2026-01-28T00:47:46.879465164Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 28 00:47:46.879587 containerd[1882]: time="2026-01-28T00:47:46.879521156Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 28 00:47:46.879587 containerd[1882]: time="2026-01-28T00:47:46.879577204Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 28 00:47:46.879928 containerd[1882]: time="2026-01-28T00:47:46.879879420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 28 00:47:46.880274 containerd[1882]: time="2026-01-28T00:47:46.880239884Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 28 00:47:46.880274 containerd[1882]: time="2026-01-28T00:47:46.880266220Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 28 00:47:46.880274 containerd[1882]: time="2026-01-28T00:47:46.880276852Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 28 00:47:46.880339 containerd[1882]: time="2026-01-28T00:47:46.880289684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 28 00:47:46.880339 containerd[1882]: time="2026-01-28T00:47:46.880304988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 28 00:47:46.880339 containerd[1882]: time="2026-01-28T00:47:46.880315604Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 28 00:47:46.880393 containerd[1882]: time="2026-01-28T00:47:46.880339964Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 28 00:47:46.880393 containerd[1882]: time="2026-01-28T00:47:46.880350756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 28 00:47:46.880393 containerd[1882]: time="2026-01-28T00:47:46.880361468Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 28 00:47:46.880425 containerd[1882]: time="2026-01-28T00:47:46.880392732Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 28 00:47:46.880425 containerd[1882]: time="2026-01-28T00:47:46.880404756Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 28 00:47:46.880425 containerd[1882]: time="2026-01-28T00:47:46.880414204Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 28 00:47:46.880425 containerd[1882]: time="2026-01-28T00:47:46.880422532Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 28 00:47:46.880471 containerd[1882]: time="2026-01-28T00:47:46.880428372Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 28 00:47:46.880471 containerd[1882]: time="2026-01-28T00:47:46.880437036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 28 00:47:46.880471 containerd[1882]: time="2026-01-28T00:47:46.880446508Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 28 00:47:46.880471 containerd[1882]: time="2026-01-28T00:47:46.880469748Z" level=info msg="runtime interface created"
Jan 28 00:47:46.880545 containerd[1882]: time="2026-01-28T00:47:46.880473644Z" level=info msg="created NRI interface"
Jan 28 00:47:46.880545 containerd[1882]: time="2026-01-28T00:47:46.880479900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 28 00:47:46.880545 containerd[1882]: time="2026-01-28T00:47:46.880492084Z" level=info msg="Connect containerd service"
Jan 28 00:47:46.880545 containerd[1882]: time="2026-01-28T00:47:46.880510508Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 28 00:47:46.881438 containerd[1882]: time="2026-01-28T00:47:46.881413804Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 28 00:47:48.879458 containerd[1882]: time="2026-01-28T00:47:48.878928812Z" level=info msg="Start subscribing containerd event"
Jan 28 00:47:48.879458 containerd[1882]: time="2026-01-28T00:47:48.878999756Z" level=info msg="Start recovering state"
Jan 28 00:47:48.879458 containerd[1882]: time="2026-01-28T00:47:48.879084588Z" level=info msg="Start event monitor"
Jan 28 00:47:48.879458 containerd[1882]: time="2026-01-28T00:47:48.879096588Z" level=info msg="Start cni network conf syncer for default"
Jan 28 00:47:48.879458 containerd[1882]: time="2026-01-28T00:47:48.879101812Z" level=info msg="Start streaming server"
Jan 28 00:47:48.879458 containerd[1882]: time="2026-01-28T00:47:48.879109852Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 28 00:47:48.879458 containerd[1882]: time="2026-01-28T00:47:48.879115092Z" level=info msg="runtime interface starting up..."
Jan 28 00:47:48.879458 containerd[1882]: time="2026-01-28T00:47:48.879118748Z" level=info msg="starting plugins..."
Jan 28 00:47:48.879458 containerd[1882]: time="2026-01-28T00:47:48.879129908Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 28 00:47:48.880090 containerd[1882]: time="2026-01-28T00:47:48.880059836Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 28 00:47:48.880198 containerd[1882]: time="2026-01-28T00:47:48.880185300Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 28 00:47:48.880340 containerd[1882]: time="2026-01-28T00:47:48.880316148Z" level=info msg="containerd successfully booted in 2.025672s"
Jan 28 00:47:48.880513 systemd[1]: Started containerd.service - containerd container runtime.
Jan 28 00:47:48.886246 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 00:47:48.897996 systemd[1]: Startup finished in 1.642s (kernel) + 12.960s (initrd) + 20.270s (userspace) = 34.873s. Jan 28 00:47:52.014806 login[2023]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 28 00:47:52.649104 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 00:47:52.643887 login[2022]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:47:52.649842 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 00:47:52.655790 systemd-logind[1867]: New session 2 of user core. Jan 28 00:47:52.723192 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 00:47:52.725303 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 00:47:52.737513 (systemd)[2058]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 28 00:47:52.739845 systemd-logind[1867]: New session c1 of user core. Jan 28 00:47:52.936884 systemd[2058]: Queued start job for default target default.target. Jan 28 00:47:52.952791 systemd[2058]: Created slice app.slice - User Application Slice. Jan 28 00:47:52.952818 systemd[2058]: Reached target paths.target - Paths. Jan 28 00:47:52.952850 systemd[2058]: Reached target timers.target - Timers. Jan 28 00:47:52.953869 systemd[2058]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 00:47:52.962000 systemd[2058]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 00:47:52.962157 systemd[2058]: Reached target sockets.target - Sockets. Jan 28 00:47:52.962278 systemd[2058]: Reached target basic.target - Basic System. Jan 28 00:47:52.962378 systemd[2058]: Reached target default.target - Main User Target. Jan 28 00:47:52.962395 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jan 28 00:47:52.962496 systemd[2058]: Startup finished in 217ms. Jan 28 00:47:52.964374 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 28 00:47:53.016316 login[2023]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:47:53.020978 systemd-logind[1867]: New session 1 of user core. Jan 28 00:47:53.027255 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 00:47:53.954462 waagent[2019]: 2026-01-28T00:47:53.949319Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 28 00:47:53.955149 waagent[2019]: 2026-01-28T00:47:53.955095Z INFO Daemon Daemon OS: flatcar 4459.2.3 Jan 28 00:47:53.958871 waagent[2019]: 2026-01-28T00:47:53.958822Z INFO Daemon Daemon Python: 3.11.13 Jan 28 00:47:53.962933 waagent[2019]: 2026-01-28T00:47:53.962877Z INFO Daemon Daemon Run daemon Jan 28 00:47:53.967061 waagent[2019]: 2026-01-28T00:47:53.967021Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.3' Jan 28 00:47:53.974937 waagent[2019]: 2026-01-28T00:47:53.974872Z INFO Daemon Daemon Using waagent for provisioning Jan 28 00:47:53.979418 waagent[2019]: 2026-01-28T00:47:53.979370Z INFO Daemon Daemon Activate resource disk Jan 28 00:47:53.983781 waagent[2019]: 2026-01-28T00:47:53.983733Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 28 00:47:53.992742 waagent[2019]: 2026-01-28T00:47:53.992687Z INFO Daemon Daemon Found device: None Jan 28 00:47:53.996619 waagent[2019]: 2026-01-28T00:47:53.996568Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 28 00:47:54.004339 waagent[2019]: 2026-01-28T00:47:54.004285Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 28 00:47:54.014255 waagent[2019]: 2026-01-28T00:47:54.014206Z INFO Daemon Daemon Clean protocol and 
wireserver endpoint Jan 28 00:47:54.019006 waagent[2019]: 2026-01-28T00:47:54.018960Z INFO Daemon Daemon Running default provisioning handler Jan 28 00:47:54.028252 waagent[2019]: 2026-01-28T00:47:54.028206Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 28 00:47:54.040139 waagent[2019]: 2026-01-28T00:47:54.040088Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 28 00:47:54.048635 waagent[2019]: 2026-01-28T00:47:54.048580Z INFO Daemon Daemon cloud-init is enabled: False Jan 28 00:47:54.053045 waagent[2019]: 2026-01-28T00:47:54.052998Z INFO Daemon Daemon Copying ovf-env.xml Jan 28 00:47:54.177090 waagent[2019]: 2026-01-28T00:47:54.177020Z INFO Daemon Daemon Successfully mounted dvd Jan 28 00:47:54.206248 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 28 00:47:54.212478 waagent[2019]: 2026-01-28T00:47:54.208610Z INFO Daemon Daemon Detect protocol endpoint Jan 28 00:47:54.212825 waagent[2019]: 2026-01-28T00:47:54.212777Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 28 00:47:54.217831 waagent[2019]: 2026-01-28T00:47:54.217784Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 28 00:47:54.223582 waagent[2019]: 2026-01-28T00:47:54.223537Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 28 00:47:54.228327 waagent[2019]: 2026-01-28T00:47:54.228281Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 28 00:47:54.233522 waagent[2019]: 2026-01-28T00:47:54.233479Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 28 00:47:54.279868 waagent[2019]: 2026-01-28T00:47:54.279823Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 28 00:47:54.285758 waagent[2019]: 2026-01-28T00:47:54.285723Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 28 00:47:54.290142 waagent[2019]: 2026-01-28T00:47:54.290101Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 28 00:47:54.441943 waagent[2019]: 2026-01-28T00:47:54.439892Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 28 00:47:54.445279 waagent[2019]: 2026-01-28T00:47:54.445219Z INFO Daemon Daemon Forcing an update of the goal state. Jan 28 00:47:54.453293 waagent[2019]: 2026-01-28T00:47:54.453246Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 28 00:47:54.471827 waagent[2019]: 2026-01-28T00:47:54.471737Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 28 00:47:54.476440 waagent[2019]: 2026-01-28T00:47:54.476403Z INFO Daemon Jan 28 00:47:54.478612 waagent[2019]: 2026-01-28T00:47:54.478582Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 6d08d80b-3ec1-41fa-9d21-e40bde0580dd eTag: 2339156929664151187 source: Fabric] Jan 28 00:47:54.487092 waagent[2019]: 2026-01-28T00:47:54.487058Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jan 28 00:47:54.493142 waagent[2019]: 2026-01-28T00:47:54.493111Z INFO Daemon Jan 28 00:47:54.495235 waagent[2019]: 2026-01-28T00:47:54.495205Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 28 00:47:54.504473 waagent[2019]: 2026-01-28T00:47:54.504440Z INFO Daemon Daemon Downloading artifacts profile blob Jan 28 00:47:54.659374 waagent[2019]: 2026-01-28T00:47:54.659297Z INFO Daemon Downloaded certificate {'thumbprint': '88B2947EA1F9A09F81CD09730EA4B61A2ED523C7', 'hasPrivateKey': True} Jan 28 00:47:54.667468 waagent[2019]: 2026-01-28T00:47:54.667426Z INFO Daemon Fetch goal state completed Jan 28 00:47:54.710965 waagent[2019]: 2026-01-28T00:47:54.710890Z INFO Daemon Daemon Starting provisioning Jan 28 00:47:54.714869 waagent[2019]: 2026-01-28T00:47:54.714827Z INFO Daemon Daemon Handle ovf-env.xml. Jan 28 00:47:54.718547 waagent[2019]: 2026-01-28T00:47:54.718514Z INFO Daemon Daemon Set hostname [ci-4459.2.3-n-ee3b3e4916] Jan 28 00:47:54.725296 waagent[2019]: 2026-01-28T00:47:54.725245Z INFO Daemon Daemon Publish hostname [ci-4459.2.3-n-ee3b3e4916] Jan 28 00:47:54.730516 waagent[2019]: 2026-01-28T00:47:54.730475Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 28 00:47:54.735486 waagent[2019]: 2026-01-28T00:47:54.735451Z INFO Daemon Daemon Primary interface is [eth0] Jan 28 00:47:54.745491 systemd-networkd[1473]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:47:54.745500 systemd-networkd[1473]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 28 00:47:54.745532 systemd-networkd[1473]: eth0: DHCP lease lost Jan 28 00:47:54.746591 waagent[2019]: 2026-01-28T00:47:54.746540Z INFO Daemon Daemon Create user account if not exists Jan 28 00:47:54.751295 waagent[2019]: 2026-01-28T00:47:54.751256Z INFO Daemon Daemon User core already exists, skip useradd Jan 28 00:47:54.755899 waagent[2019]: 2026-01-28T00:47:54.755864Z INFO Daemon Daemon Configure sudoer Jan 28 00:47:54.772960 systemd-networkd[1473]: eth0: DHCPv4 address 10.200.20.30/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 28 00:47:54.909654 waagent[2019]: 2026-01-28T00:47:54.909573Z INFO Daemon Daemon Configure sshd Jan 28 00:47:54.916538 waagent[2019]: 2026-01-28T00:47:54.916480Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 28 00:47:54.926665 waagent[2019]: 2026-01-28T00:47:54.926621Z INFO Daemon Daemon Deploy ssh public key. Jan 28 00:47:55.013940 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 00:47:55.017100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:47:55.113188 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 00:47:55.116269 (kubelet)[2117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:47:55.215180 kubelet[2117]: E0128 00:47:55.215126 2117 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:47:55.217948 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:47:55.218063 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:47:55.218512 systemd[1]: kubelet.service: Consumed 114ms CPU time, 107.2M memory peak. Jan 28 00:47:56.060427 waagent[2019]: 2026-01-28T00:47:56.060382Z INFO Daemon Daemon Provisioning complete Jan 28 00:47:56.075332 waagent[2019]: 2026-01-28T00:47:56.075289Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 28 00:47:56.080506 waagent[2019]: 2026-01-28T00:47:56.080470Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 28 00:47:56.088327 waagent[2019]: 2026-01-28T00:47:56.088294Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 28 00:47:56.187951 waagent[2124]: 2026-01-28T00:47:56.186973Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 28 00:47:56.187951 waagent[2124]: 2026-01-28T00:47:56.187102Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.3 Jan 28 00:47:56.187951 waagent[2124]: 2026-01-28T00:47:56.187138Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 28 00:47:56.187951 waagent[2124]: 2026-01-28T00:47:56.187170Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Jan 28 00:47:56.306097 waagent[2124]: 2026-01-28T00:47:56.306029Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 28 00:47:56.306422 waagent[2124]: 2026-01-28T00:47:56.306391Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 00:47:56.306554 waagent[2124]: 2026-01-28T00:47:56.306530Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 28 00:47:56.312799 waagent[2124]: 2026-01-28T00:47:56.312702Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 28 00:47:56.318089 waagent[2124]: 2026-01-28T00:47:56.318057Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 28 00:47:56.318561 waagent[2124]: 2026-01-28T00:47:56.318530Z INFO ExtHandler Jan 28 00:47:56.318693 waagent[2124]: 2026-01-28T00:47:56.318670Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: def15370-0e1b-4a05-97dc-60c0d4139d78 eTag: 2339156929664151187 source: Fabric] Jan 28 00:47:56.319024 waagent[2124]: 2026-01-28T00:47:56.318994Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 28 00:47:56.319530 waagent[2124]: 2026-01-28T00:47:56.319499Z INFO ExtHandler Jan 28 00:47:56.319647 waagent[2124]: 2026-01-28T00:47:56.319624Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 28 00:47:56.323373 waagent[2124]: 2026-01-28T00:47:56.323348Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 28 00:47:56.381987 waagent[2124]: 2026-01-28T00:47:56.381903Z INFO ExtHandler Downloaded certificate {'thumbprint': '88B2947EA1F9A09F81CD09730EA4B61A2ED523C7', 'hasPrivateKey': True} Jan 28 00:47:56.382546 waagent[2124]: 2026-01-28T00:47:56.382511Z INFO ExtHandler Fetch goal state completed Jan 28 00:47:56.396141 waagent[2124]: 2026-01-28T00:47:56.396081Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Jan 28 00:47:56.399687 waagent[2124]: 2026-01-28T00:47:56.399634Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2124 Jan 28 00:47:56.399795 waagent[2124]: 2026-01-28T00:47:56.399766Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 28 00:47:56.400095 waagent[2124]: 2026-01-28T00:47:56.400063Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 28 00:47:56.401234 waagent[2124]: 2026-01-28T00:47:56.401196Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.3', '', 'Flatcar Container Linux by Kinvolk'] Jan 28 00:47:56.401554 waagent[2124]: 2026-01-28T00:47:56.401523Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.3', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 28 00:47:56.401673 waagent[2124]: 2026-01-28T00:47:56.401649Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 28 00:47:56.402137 waagent[2124]: 2026-01-28T00:47:56.402104Z INFO ExtHandler ExtHandler 
Starting setup for Persistent firewall rules Jan 28 00:47:56.638648 waagent[2124]: 2026-01-28T00:47:56.638554Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 28 00:47:56.638775 waagent[2124]: 2026-01-28T00:47:56.638744Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 28 00:47:56.643378 waagent[2124]: 2026-01-28T00:47:56.643340Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 28 00:47:56.647852 systemd[1]: Reload requested from client PID 2141 ('systemctl') (unit waagent.service)... Jan 28 00:47:56.647868 systemd[1]: Reloading... Jan 28 00:47:56.714979 zram_generator::config[2186]: No configuration found. Jan 28 00:47:56.859293 systemd[1]: Reloading finished in 211 ms. Jan 28 00:47:56.871719 waagent[2124]: 2026-01-28T00:47:56.871593Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 28 00:47:56.871815 waagent[2124]: 2026-01-28T00:47:56.871740Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 28 00:47:58.118762 waagent[2124]: 2026-01-28T00:47:58.118685Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 28 00:47:58.119118 waagent[2124]: 2026-01-28T00:47:58.119028Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 28 00:47:58.119775 waagent[2124]: 2026-01-28T00:47:58.119685Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 28 00:47:58.120046 waagent[2124]: 2026-01-28T00:47:58.120010Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Jan 28 00:47:58.120803 waagent[2124]: 2026-01-28T00:47:58.120220Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 00:47:58.120803 waagent[2124]: 2026-01-28T00:47:58.120290Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 28 00:47:58.120803 waagent[2124]: 2026-01-28T00:47:58.120450Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 28 00:47:58.120803 waagent[2124]: 2026-01-28T00:47:58.120579Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 28 00:47:58.120803 waagent[2124]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 28 00:47:58.120803 waagent[2124]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 28 00:47:58.120803 waagent[2124]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 28 00:47:58.120803 waagent[2124]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 28 00:47:58.120803 waagent[2124]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 28 00:47:58.120803 waagent[2124]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 28 00:47:58.121169 waagent[2124]: 2026-01-28T00:47:58.121129Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 28 00:47:58.121227 waagent[2124]: 2026-01-28T00:47:58.121187Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 28 00:47:58.121396 waagent[2124]: 2026-01-28T00:47:58.121366Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 28 00:47:58.121613 waagent[2124]: 2026-01-28T00:47:58.121583Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 28 00:47:58.121831 waagent[2124]: 2026-01-28T00:47:58.121799Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 28 00:47:58.121888 waagent[2124]: 2026-01-28T00:47:58.121839Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Jan 28 00:47:58.122439 waagent[2124]: 2026-01-28T00:47:58.122399Z INFO EnvHandler ExtHandler Configure routes Jan 28 00:47:58.122548 waagent[2124]: 2026-01-28T00:47:58.122525Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 28 00:47:58.123109 waagent[2124]: 2026-01-28T00:47:58.123090Z INFO EnvHandler ExtHandler Gateway:None Jan 28 00:47:58.123233 waagent[2124]: 2026-01-28T00:47:58.123210Z INFO EnvHandler ExtHandler Routes:None Jan 28 00:47:58.129238 waagent[2124]: 2026-01-28T00:47:58.129185Z INFO ExtHandler ExtHandler Jan 28 00:47:58.129312 waagent[2124]: 2026-01-28T00:47:58.129272Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 955cd500-47e6-455b-8e8e-f119825b3c65 correlation d62ae9db-54bb-4edc-80e6-bdc54898871d created: 2026-01-28T00:46:40.197144Z] Jan 28 00:47:58.129610 waagent[2124]: 2026-01-28T00:47:58.129567Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 28 00:47:58.130061 waagent[2124]: 2026-01-28T00:47:58.130028Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jan 28 00:47:58.377979 waagent[2124]: 2026-01-28T00:47:58.377669Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 28 00:47:58.377979 waagent[2124]: Try `iptables -h' or 'iptables --help' for more information.) 
Jan 28 00:47:58.378336 waagent[2124]: 2026-01-28T00:47:58.378304Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 99767DA4-DA11-4299-9064-7C689690D240;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 28 00:47:58.430837 waagent[2124]: 2026-01-28T00:47:58.430771Z INFO MonitorHandler ExtHandler Network interfaces: Jan 28 00:47:58.430837 waagent[2124]: Executing ['ip', '-a', '-o', 'link']: Jan 28 00:47:58.430837 waagent[2124]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 28 00:47:58.430837 waagent[2124]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:87:be:92 brd ff:ff:ff:ff:ff:ff Jan 28 00:47:58.430837 waagent[2124]: 3: enP19637s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:87:be:92 brd ff:ff:ff:ff:ff:ff\ altname enP19637p0s2 Jan 28 00:47:58.430837 waagent[2124]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 28 00:47:58.430837 waagent[2124]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 28 00:47:58.430837 waagent[2124]: 2: eth0 inet 10.200.20.30/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 28 00:47:58.430837 waagent[2124]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 28 00:47:58.430837 waagent[2124]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 28 00:47:58.430837 waagent[2124]: 2: eth0 inet6 fe80::7eed:8dff:fe87:be92/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 28 00:47:58.722793 waagent[2124]: 2026-01-28T00:47:58.722090Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 28 00:47:58.722793 waagent[2124]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 
28 00:47:58.722793 waagent[2124]: pkts bytes target prot opt in out source destination Jan 28 00:47:58.722793 waagent[2124]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 28 00:47:58.722793 waagent[2124]: pkts bytes target prot opt in out source destination Jan 28 00:47:58.722793 waagent[2124]: Chain OUTPUT (policy ACCEPT 2 packets, 112 bytes) Jan 28 00:47:58.722793 waagent[2124]: pkts bytes target prot opt in out source destination Jan 28 00:47:58.722793 waagent[2124]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 28 00:47:58.722793 waagent[2124]: 4 416 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 28 00:47:58.722793 waagent[2124]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 28 00:47:58.724392 waagent[2124]: 2026-01-28T00:47:58.724360Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 28 00:47:58.724392 waagent[2124]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 28 00:47:58.724392 waagent[2124]: pkts bytes target prot opt in out source destination Jan 28 00:47:58.724392 waagent[2124]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 28 00:47:58.724392 waagent[2124]: pkts bytes target prot opt in out source destination Jan 28 00:47:58.724392 waagent[2124]: Chain OUTPUT (policy ACCEPT 2 packets, 112 bytes) Jan 28 00:47:58.724392 waagent[2124]: pkts bytes target prot opt in out source destination Jan 28 00:47:58.724392 waagent[2124]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 28 00:47:58.724392 waagent[2124]: 4 416 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 28 00:47:58.724392 waagent[2124]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 28 00:47:58.724949 waagent[2124]: 2026-01-28T00:47:58.724924Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 28 00:48:05.264096 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 28 00:48:05.265378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:48:05.375486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:48:05.384411 (kubelet)[2275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:48:05.515519 kubelet[2275]: E0128 00:48:05.515395 2275 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:48:05.517614 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:48:05.517730 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:48:05.518044 systemd[1]: kubelet.service: Consumed 112ms CPU time, 107.2M memory peak. Jan 28 00:48:07.564248 chronyd[1844]: Selected source PHC0 Jan 28 00:48:15.764381 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 00:48:15.766063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:48:15.865569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 00:48:15.874381 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:48:15.899495 kubelet[2290]: E0128 00:48:15.899442 2290 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:48:15.901607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:48:15.901721 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:48:15.903999 systemd[1]: kubelet.service: Consumed 107ms CPU time, 105M memory peak. Jan 28 00:48:22.013604 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 28 00:48:26.014329 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 28 00:48:26.015711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:48:26.375890 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 00:48:26.378130 systemd[1]: Started sshd@0-10.200.20.30:22-10.200.16.10:55670.service - OpenSSH per-connection server daemon (10.200.16.10:55670). Jan 28 00:48:26.383163 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 00:48:26.387196 (kubelet)[2307]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:48:26.414528 kubelet[2307]: E0128 00:48:26.414475 2307 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:48:26.416732 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:48:26.416854 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:48:26.417424 systemd[1]: kubelet.service: Consumed 112ms CPU time, 105M memory peak. Jan 28 00:48:27.051056 sshd[2305]: Accepted publickey for core from 10.200.16.10 port 55670 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:48:27.052260 sshd-session[2305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:48:27.057421 systemd-logind[1867]: New session 3 of user core. Jan 28 00:48:27.064118 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 00:48:27.459019 systemd[1]: Started sshd@1-10.200.20.30:22-10.200.16.10:55680.service - OpenSSH per-connection server daemon (10.200.16.10:55680). Jan 28 00:48:27.909847 sshd[2319]: Accepted publickey for core from 10.200.16.10 port 55680 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:48:27.911012 sshd-session[2319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:48:27.914646 systemd-logind[1867]: New session 4 of user core. Jan 28 00:48:27.924223 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 28 00:48:28.241070 sshd[2322]: Connection closed by 10.200.16.10 port 55680 Jan 28 00:48:28.241812 sshd-session[2319]: pam_unix(sshd:session): session closed for user core Jan 28 00:48:28.244319 systemd[1]: sshd@1-10.200.20.30:22-10.200.16.10:55680.service: Deactivated successfully. Jan 28 00:48:28.245744 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 00:48:28.247538 systemd-logind[1867]: Session 4 logged out. Waiting for processes to exit. Jan 28 00:48:28.248419 systemd-logind[1867]: Removed session 4. Jan 28 00:48:28.325740 systemd[1]: Started sshd@2-10.200.20.30:22-10.200.16.10:55692.service - OpenSSH per-connection server daemon (10.200.16.10:55692). Jan 28 00:48:28.779193 sshd[2328]: Accepted publickey for core from 10.200.16.10 port 55692 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:48:28.780329 sshd-session[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:48:28.783879 systemd-logind[1867]: New session 5 of user core. Jan 28 00:48:28.795079 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 00:48:29.108306 sshd[2331]: Connection closed by 10.200.16.10 port 55692 Jan 28 00:48:29.108010 sshd-session[2328]: pam_unix(sshd:session): session closed for user core Jan 28 00:48:29.112631 systemd[1]: sshd@2-10.200.20.30:22-10.200.16.10:55692.service: Deactivated successfully. Jan 28 00:48:29.114550 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 00:48:29.115655 systemd-logind[1867]: Session 5 logged out. Waiting for processes to exit. Jan 28 00:48:29.117023 systemd-logind[1867]: Removed session 5. Jan 28 00:48:29.197163 systemd[1]: Started sshd@3-10.200.20.30:22-10.200.16.10:55696.service - OpenSSH per-connection server daemon (10.200.16.10:55696). 
Jan 28 00:48:29.688158 sshd[2337]: Accepted publickey for core from 10.200.16.10 port 55696 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:48:29.689299 sshd-session[2337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:48:29.693139 systemd-logind[1867]: New session 6 of user core. Jan 28 00:48:29.703273 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 00:48:30.038713 sshd[2340]: Connection closed by 10.200.16.10 port 55696 Jan 28 00:48:30.039357 sshd-session[2337]: pam_unix(sshd:session): session closed for user core Jan 28 00:48:30.042897 systemd[1]: sshd@3-10.200.20.30:22-10.200.16.10:55696.service: Deactivated successfully. Jan 28 00:48:30.044650 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 00:48:30.045478 systemd-logind[1867]: Session 6 logged out. Waiting for processes to exit. Jan 28 00:48:30.046804 systemd-logind[1867]: Removed session 6. Jan 28 00:48:30.121152 systemd[1]: Started sshd@4-10.200.20.30:22-10.200.16.10:39002.service - OpenSSH per-connection server daemon (10.200.16.10:39002). Jan 28 00:48:30.589674 sshd[2346]: Accepted publickey for core from 10.200.16.10 port 39002 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:48:30.590442 sshd-session[2346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:48:30.593986 systemd-logind[1867]: New session 7 of user core. Jan 28 00:48:30.601052 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 00:48:30.965565 sudo[2350]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 00:48:30.965792 sudo[2350]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:48:31.025496 update_engine[1870]: I20260128 00:48:31.024989 1870 update_attempter.cc:509] Updating boot flags... 
Jan 28 00:48:31.083393 sudo[2350]: pam_unix(sudo:session): session closed for user root Jan 28 00:48:31.155140 sshd[2349]: Connection closed by 10.200.16.10 port 39002 Jan 28 00:48:31.155611 sshd-session[2346]: pam_unix(sshd:session): session closed for user core Jan 28 00:48:31.159546 systemd-logind[1867]: Session 7 logged out. Waiting for processes to exit. Jan 28 00:48:31.160079 systemd[1]: sshd@4-10.200.20.30:22-10.200.16.10:39002.service: Deactivated successfully. Jan 28 00:48:31.162240 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 00:48:31.163703 systemd-logind[1867]: Removed session 7. Jan 28 00:48:31.248070 systemd[1]: Started sshd@5-10.200.20.30:22-10.200.16.10:39010.service - OpenSSH per-connection server daemon (10.200.16.10:39010). Jan 28 00:48:31.742138 sshd[2420]: Accepted publickey for core from 10.200.16.10 port 39010 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:48:31.743254 sshd-session[2420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:48:31.747399 systemd-logind[1867]: New session 8 of user core. Jan 28 00:48:31.754084 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 00:48:32.015650 sudo[2425]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 00:48:32.016242 sudo[2425]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:48:32.024346 sudo[2425]: pam_unix(sudo:session): session closed for user root Jan 28 00:48:32.028875 sudo[2424]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 28 00:48:32.029191 sudo[2424]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:48:32.036337 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 28 00:48:32.064832 augenrules[2447]: No rules Jan 28 00:48:32.066004 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 00:48:32.067952 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 00:48:32.069560 sudo[2424]: pam_unix(sudo:session): session closed for user root Jan 28 00:48:32.150745 sshd[2423]: Connection closed by 10.200.16.10 port 39010 Jan 28 00:48:32.150646 sshd-session[2420]: pam_unix(sshd:session): session closed for user core Jan 28 00:48:32.154731 systemd[1]: sshd@5-10.200.20.30:22-10.200.16.10:39010.service: Deactivated successfully. Jan 28 00:48:32.157413 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 00:48:32.159371 systemd-logind[1867]: Session 8 logged out. Waiting for processes to exit. Jan 28 00:48:32.160672 systemd-logind[1867]: Removed session 8. Jan 28 00:48:32.248895 systemd[1]: Started sshd@6-10.200.20.30:22-10.200.16.10:39018.service - OpenSSH per-connection server daemon (10.200.16.10:39018). Jan 28 00:48:32.743940 sshd[2456]: Accepted publickey for core from 10.200.16.10 port 39018 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:48:32.744824 sshd-session[2456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:48:32.748489 systemd-logind[1867]: New session 9 of user core. Jan 28 00:48:32.760305 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 28 00:48:33.019107 sudo[2460]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 00:48:33.019320 sudo[2460]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:48:34.631058 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 28 00:48:34.652247 (dockerd)[2478]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 00:48:35.759556 dockerd[2478]: time="2026-01-28T00:48:35.759497268Z" level=info msg="Starting up" Jan 28 00:48:35.760215 dockerd[2478]: time="2026-01-28T00:48:35.760188278Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 28 00:48:35.768880 dockerd[2478]: time="2026-01-28T00:48:35.768800947Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 28 00:48:35.917747 dockerd[2478]: time="2026-01-28T00:48:35.917542094Z" level=info msg="Loading containers: start." Jan 28 00:48:35.931141 kernel: Initializing XFRM netlink socket Jan 28 00:48:36.305108 systemd-networkd[1473]: docker0: Link UP Jan 28 00:48:36.323362 dockerd[2478]: time="2026-01-28T00:48:36.323256202Z" level=info msg="Loading containers: done." Jan 28 00:48:36.334857 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck125516402-merged.mount: Deactivated successfully. 
Jan 28 00:48:36.348665 dockerd[2478]: time="2026-01-28T00:48:36.348611536Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 00:48:36.348826 dockerd[2478]: time="2026-01-28T00:48:36.348709451Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 28 00:48:36.348826 dockerd[2478]: time="2026-01-28T00:48:36.348806918Z" level=info msg="Initializing buildkit" Jan 28 00:48:36.399864 dockerd[2478]: time="2026-01-28T00:48:36.399818436Z" level=info msg="Completed buildkit initialization" Jan 28 00:48:36.404564 dockerd[2478]: time="2026-01-28T00:48:36.404457782Z" level=info msg="Daemon has completed initialization" Jan 28 00:48:36.404564 dockerd[2478]: time="2026-01-28T00:48:36.404507512Z" level=info msg="API listen on /run/docker.sock" Jan 28 00:48:36.404853 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 00:48:36.513934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 28 00:48:36.517107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:48:36.667843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 00:48:36.674161 (kubelet)[2692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:48:36.701898 kubelet[2692]: E0128 00:48:36.701830 2692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:48:36.704487 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:48:36.704597 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:48:36.705117 systemd[1]: kubelet.service: Consumed 110ms CPU time, 106.8M memory peak. Jan 28 00:48:37.215706 containerd[1882]: time="2026-01-28T00:48:37.215663316Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 28 00:48:38.001745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount704676079.mount: Deactivated successfully. 
Jan 28 00:48:39.471959 containerd[1882]: time="2026-01-28T00:48:39.471584208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:48:39.475211 containerd[1882]: time="2026-01-28T00:48:39.475016903Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982"
Jan 28 00:48:39.480986 containerd[1882]: time="2026-01-28T00:48:39.480963009Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:48:39.485977 containerd[1882]: time="2026-01-28T00:48:39.485942821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:48:39.486662 containerd[1882]: time="2026-01-28T00:48:39.486518943Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.270819722s"
Jan 28 00:48:39.486662 containerd[1882]: time="2026-01-28T00:48:39.486550432Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\""
Jan 28 00:48:39.487356 containerd[1882]: time="2026-01-28T00:48:39.487329487Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\""
Jan 28 00:48:41.025863 containerd[1882]: time="2026-01-28T00:48:41.025811139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:48:41.029324 containerd[1882]: time="2026-01-28T00:48:41.029290002Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086"
Jan 28 00:48:41.034500 containerd[1882]: time="2026-01-28T00:48:41.034451367Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:48:41.039804 containerd[1882]: time="2026-01-28T00:48:41.039163082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:48:41.039804 containerd[1882]: time="2026-01-28T00:48:41.039689462Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.552331773s"
Jan 28 00:48:41.039804 containerd[1882]: time="2026-01-28T00:48:41.039718695Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\""
Jan 28 00:48:41.040542 containerd[1882]: time="2026-01-28T00:48:41.040509804Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\""
Jan 28 00:48:42.492268 containerd[1882]: time="2026-01-28T00:48:42.492189605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:48:42.495735 containerd[1882]: time="2026-01-28T00:48:42.495701485Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747"
Jan 28 00:48:42.499013 containerd[1882]: time="2026-01-28T00:48:42.498970116Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:48:42.505936 containerd[1882]: time="2026-01-28T00:48:42.505857928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:48:42.506393 containerd[1882]: time="2026-01-28T00:48:42.506369098Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.465824749s"
Jan 28 00:48:42.506474 containerd[1882]: time="2026-01-28T00:48:42.506461894Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\""
Jan 28 00:48:42.506932 containerd[1882]: time="2026-01-28T00:48:42.506896685Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\""
Jan 28 00:48:43.936888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4045700960.mount: Deactivated successfully.
Jan 28 00:48:44.413416 containerd[1882]: time="2026-01-28T00:48:44.413368179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:44.416414 containerd[1882]: time="2026-01-28T00:48:44.416378537Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 28 00:48:44.419685 containerd[1882]: time="2026-01-28T00:48:44.419647640Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:44.424948 containerd[1882]: time="2026-01-28T00:48:44.424383653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:44.424948 containerd[1882]: time="2026-01-28T00:48:44.424763603Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.917760041s" Jan 28 00:48:44.424948 containerd[1882]: time="2026-01-28T00:48:44.424785355Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 28 00:48:44.425427 containerd[1882]: time="2026-01-28T00:48:44.425294710Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 00:48:45.061208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount869727511.mount: Deactivated successfully. 
Jan 28 00:48:45.973259 containerd[1882]: time="2026-01-28T00:48:45.972621666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:45.976149 containerd[1882]: time="2026-01-28T00:48:45.976114417Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 28 00:48:45.982820 containerd[1882]: time="2026-01-28T00:48:45.982139381Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:45.986607 containerd[1882]: time="2026-01-28T00:48:45.986558862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:45.987369 containerd[1882]: time="2026-01-28T00:48:45.987069449Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.561746977s" Jan 28 00:48:45.987369 containerd[1882]: time="2026-01-28T00:48:45.987101482Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 28 00:48:45.987798 containerd[1882]: time="2026-01-28T00:48:45.987773490Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 00:48:46.575809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount226926688.mount: Deactivated successfully. 
Jan 28 00:48:46.597479 containerd[1882]: time="2026-01-28T00:48:46.597424646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:48:46.600921 containerd[1882]: time="2026-01-28T00:48:46.600873138Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 28 00:48:46.604927 containerd[1882]: time="2026-01-28T00:48:46.604750420Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:48:46.609146 containerd[1882]: time="2026-01-28T00:48:46.609115678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:48:46.609622 containerd[1882]: time="2026-01-28T00:48:46.609470385Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 621.666326ms" Jan 28 00:48:46.609622 containerd[1882]: time="2026-01-28T00:48:46.609498202Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 28 00:48:46.609966 containerd[1882]: time="2026-01-28T00:48:46.609944208Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 28 00:48:46.763972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Jan 28 00:48:46.765358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:48:46.871898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:48:46.883200 (kubelet)[2838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:48:46.908285 kubelet[2838]: E0128 00:48:46.908204 2838 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:48:46.910427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:48:46.910678 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:48:46.911234 systemd[1]: kubelet.service: Consumed 107ms CPU time, 106.9M memory peak. Jan 28 00:48:48.207129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3048926173.mount: Deactivated successfully. 
Jan 28 00:48:51.659955 containerd[1882]: time="2026-01-28T00:48:51.659647344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:51.663926 containerd[1882]: time="2026-01-28T00:48:51.663769291Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 28 00:48:51.667481 containerd[1882]: time="2026-01-28T00:48:51.667423703Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:51.672165 containerd[1882]: time="2026-01-28T00:48:51.672091684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:48:51.672946 containerd[1882]: time="2026-01-28T00:48:51.672727440Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 5.062758343s" Jan 28 00:48:51.672946 containerd[1882]: time="2026-01-28T00:48:51.672757089Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 28 00:48:53.723182 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:48:53.723815 systemd[1]: kubelet.service: Consumed 107ms CPU time, 106.9M memory peak. Jan 28 00:48:53.734543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:48:53.744429 systemd[1]: Reload requested from client PID 2924 ('systemctl') (unit session-9.scope)... 
Jan 28 00:48:53.744437 systemd[1]: Reloading... Jan 28 00:48:53.822935 zram_generator::config[2970]: No configuration found. Jan 28 00:48:53.972801 systemd[1]: Reloading finished in 228 ms. Jan 28 00:48:54.019272 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 00:48:54.019499 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 00:48:54.019851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:48:54.021459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:48:54.214264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:48:54.223219 (kubelet)[3036]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 00:48:54.272834 kubelet[3036]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:48:54.272834 kubelet[3036]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 00:48:54.272834 kubelet[3036]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 28 00:48:54.272834 kubelet[3036]: I0128 00:48:54.272811 3036 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 28 00:48:54.493944 kubelet[3036]: I0128 00:48:54.492838 3036 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 28 00:48:54.493944 kubelet[3036]: I0128 00:48:54.492981 3036 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 28 00:48:54.493944 kubelet[3036]: I0128 00:48:54.493291 3036 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 28 00:48:54.510968 kubelet[3036]: E0128 00:48:54.510931 3036 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.30:6443: connect: connection refused" logger="UnhandledError"
Jan 28 00:48:54.511688 kubelet[3036]: I0128 00:48:54.511658 3036 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 28 00:48:54.516760 kubelet[3036]: I0128 00:48:54.516737 3036 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 28 00:48:54.519947 kubelet[3036]: I0128 00:48:54.519906 3036 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 28 00:48:54.520720 kubelet[3036]: I0128 00:48:54.520679 3036 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 28 00:48:54.520858 kubelet[3036]: I0128 00:48:54.520721 3036 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.3-n-ee3b3e4916","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 28 00:48:54.520958 kubelet[3036]: I0128 00:48:54.520868 3036 topology_manager.go:138] "Creating topology manager with none policy"
Jan 28 00:48:54.520958 kubelet[3036]: I0128 00:48:54.520875 3036 container_manager_linux.go:304] "Creating device plugin manager"
Jan 28 00:48:54.521041 kubelet[3036]: I0128 00:48:54.521022 3036 state_mem.go:36] "Initialized new in-memory state store"
Jan 28 00:48:54.524264 kubelet[3036]: I0128 00:48:54.523704 3036 kubelet.go:446] "Attempting to sync node with API server"
Jan 28 00:48:54.524264 kubelet[3036]: I0128 00:48:54.523734 3036 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 28 00:48:54.524264 kubelet[3036]: I0128 00:48:54.523757 3036 kubelet.go:352] "Adding apiserver pod source"
Jan 28 00:48:54.524264 kubelet[3036]: I0128 00:48:54.523766 3036 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 28 00:48:54.529936 kubelet[3036]: I0128 00:48:54.528988 3036 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 28 00:48:54.529936 kubelet[3036]: I0128 00:48:54.529351 3036 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 28 00:48:54.529936 kubelet[3036]: W0128 00:48:54.529400 3036 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 28 00:48:54.529936 kubelet[3036]: I0128 00:48:54.529847 3036 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 28 00:48:54.529936 kubelet[3036]: I0128 00:48:54.529871 3036 server.go:1287] "Started kubelet"
Jan 28 00:48:54.530095 kubelet[3036]: W0128 00:48:54.530057 3036 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.30:6443: connect: connection refused
Jan 28 00:48:54.530126 kubelet[3036]: E0128 00:48:54.530110 3036 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.30:6443: connect: connection refused" logger="UnhandledError"
Jan 28 00:48:54.530186 kubelet[3036]: W0128 00:48:54.530163 3036 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.3-n-ee3b3e4916&limit=500&resourceVersion=0": dial tcp 10.200.20.30:6443: connect: connection refused
Jan 28 00:48:54.530208 kubelet[3036]: E0128 00:48:54.530189 3036 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.3-n-ee3b3e4916&limit=500&resourceVersion=0\": dial tcp 10.200.20.30:6443: connect: connection refused" logger="UnhandledError"
Jan 28 00:48:54.534307 kubelet[3036]: I0128 00:48:54.534284 3036 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 28 00:48:54.534818 kubelet[3036]: I0128 00:48:54.534802 3036 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 28 00:48:54.534981 kubelet[3036]: I0128 00:48:54.534961 3036 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 28 00:48:54.535617 kubelet[3036]: I0128 00:48:54.535601 3036 server.go:479] "Adding debug handlers to kubelet server"
Jan 28 00:48:54.537252 kubelet[3036]: I0128 00:48:54.537198 3036 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 28 00:48:54.537447 kubelet[3036]: I0128 00:48:54.537422 3036 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 28 00:48:54.537590 kubelet[3036]: I0128 00:48:54.537577 3036 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 28 00:48:54.537903 kubelet[3036]: E0128 00:48:54.537618 3036 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-ee3b3e4916\" not found"
Jan 28 00:48:54.538868 kubelet[3036]: I0128 00:48:54.538840 3036 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 28 00:48:54.538983 kubelet[3036]: I0128 00:48:54.538905 3036 reconciler.go:26] "Reconciler: start to sync state"
Jan 28 00:48:54.540469 kubelet[3036]: W0128 00:48:54.540435 3036 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.30:6443: connect: connection refused
Jan 28 00:48:54.540585 kubelet[3036]: E0128 00:48:54.540570 3036 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.30:6443: connect: connection refused" logger="UnhandledError"
Jan 28 00:48:54.540710 kubelet[3036]: E0128 00:48:54.540692 3036 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-ee3b3e4916?timeout=10s\": dial tcp 10.200.20.30:6443: connect: connection refused" interval="200ms"
Jan 28 00:48:54.540886 kubelet[3036]: E0128 00:48:54.540802 3036 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.30:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.30:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.3-n-ee3b3e4916.188ebea880eb149f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.3-n-ee3b3e4916,UID:ci-4459.2.3-n-ee3b3e4916,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.3-n-ee3b3e4916,},FirstTimestamp:2026-01-28 00:48:54.529856671 +0000 UTC m=+0.304139077,LastTimestamp:2026-01-28 00:48:54.529856671 +0000 UTC m=+0.304139077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.3-n-ee3b3e4916,}"
Jan 28 00:48:54.542863 kubelet[3036]: I0128 00:48:54.542842 3036 factory.go:221] Registration of the systemd container factory successfully
Jan 28 00:48:54.542965 kubelet[3036]: I0128 00:48:54.542947 3036 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 28 00:48:54.544080 kubelet[3036]: I0128 00:48:54.544057 3036 factory.go:221] Registration of the containerd container factory successfully
Jan 28 00:48:54.566040 kubelet[3036]: E0128 00:48:54.566015 3036 kubelet.go:1555] "Image garbage collection failed once.
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 00:48:54.569413 kubelet[3036]: I0128 00:48:54.569392 3036 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 00:48:54.569698 kubelet[3036]: I0128 00:48:54.569544 3036 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 00:48:54.569698 kubelet[3036]: I0128 00:48:54.569564 3036 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:48:54.576673 kubelet[3036]: I0128 00:48:54.576649 3036 policy_none.go:49] "None policy: Start" Jan 28 00:48:54.577054 kubelet[3036]: I0128 00:48:54.576776 3036 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 00:48:54.577054 kubelet[3036]: I0128 00:48:54.576794 3036 state_mem.go:35] "Initializing new in-memory state store" Jan 28 00:48:54.583782 kubelet[3036]: I0128 00:48:54.583749 3036 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 00:48:54.584882 kubelet[3036]: I0128 00:48:54.584852 3036 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 00:48:54.584882 kubelet[3036]: I0128 00:48:54.584879 3036 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 00:48:54.585024 kubelet[3036]: I0128 00:48:54.584898 3036 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 28 00:48:54.585024 kubelet[3036]: I0128 00:48:54.584903 3036 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 00:48:54.585024 kubelet[3036]: E0128 00:48:54.584959 3036 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 00:48:54.588629 kubelet[3036]: W0128 00:48:54.588441 3036 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.30:6443: connect: connection refused Jan 28 00:48:54.588629 kubelet[3036]: E0128 00:48:54.588475 3036 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.30:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:48:54.591976 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 00:48:54.602429 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 00:48:54.605876 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 28 00:48:54.615783 kubelet[3036]: I0128 00:48:54.615744 3036 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 00:48:54.616271 kubelet[3036]: I0128 00:48:54.616255 3036 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 00:48:54.616330 kubelet[3036]: I0128 00:48:54.616267 3036 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 00:48:54.616610 kubelet[3036]: I0128 00:48:54.616589 3036 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 00:48:54.617850 kubelet[3036]: E0128 00:48:54.617333 3036 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 00:48:54.617850 kubelet[3036]: E0128 00:48:54.617383 3036 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.3-n-ee3b3e4916\" not found" Jan 28 00:48:54.694187 systemd[1]: Created slice kubepods-burstable-pod89bfd097b0968965794eb09d82021c70.slice - libcontainer container kubepods-burstable-pod89bfd097b0968965794eb09d82021c70.slice. Jan 28 00:48:54.705815 kubelet[3036]: E0128 00:48:54.705582 3036 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-ee3b3e4916\" not found" node="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:54.708445 systemd[1]: Created slice kubepods-burstable-podd3c4d8d19fdc9a4a3e4d2fc228962253.slice - libcontainer container kubepods-burstable-podd3c4d8d19fdc9a4a3e4d2fc228962253.slice. 
Jan 28 00:48:54.715863 kubelet[3036]: E0128 00:48:54.715833 3036 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-ee3b3e4916\" not found" node="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:54.717400 systemd[1]: Created slice kubepods-burstable-podce7db24260ca8510dc6e78698800d7cd.slice - libcontainer container kubepods-burstable-podce7db24260ca8510dc6e78698800d7cd.slice. Jan 28 00:48:54.719360 kubelet[3036]: E0128 00:48:54.719295 3036 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-ee3b3e4916\" not found" node="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:54.719360 kubelet[3036]: I0128 00:48:54.719307 3036 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:54.719886 kubelet[3036]: E0128 00:48:54.719859 3036 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.30:6443/api/v1/nodes\": dial tcp 10.200.20.30:6443: connect: connection refused" node="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:54.740192 kubelet[3036]: I0128 00:48:54.740163 3036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce7db24260ca8510dc6e78698800d7cd-kubeconfig\") pod \"kube-scheduler-ci-4459.2.3-n-ee3b3e4916\" (UID: \"ce7db24260ca8510dc6e78698800d7cd\") " pod="kube-system/kube-scheduler-ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:54.740192 kubelet[3036]: I0128 00:48:54.740195 3036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89bfd097b0968965794eb09d82021c70-ca-certs\") pod \"kube-apiserver-ci-4459.2.3-n-ee3b3e4916\" (UID: \"89bfd097b0968965794eb09d82021c70\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:54.740294 kubelet[3036]: I0128 
00:48:54.740209 3036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3c4d8d19fdc9a4a3e4d2fc228962253-ca-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-ee3b3e4916\" (UID: \"d3c4d8d19fdc9a4a3e4d2fc228962253\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:54.740294 kubelet[3036]: I0128 00:48:54.740225 3036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d3c4d8d19fdc9a4a3e4d2fc228962253-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-ee3b3e4916\" (UID: \"d3c4d8d19fdc9a4a3e4d2fc228962253\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:54.740294 kubelet[3036]: I0128 00:48:54.740236 3036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3c4d8d19fdc9a4a3e4d2fc228962253-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.3-n-ee3b3e4916\" (UID: \"d3c4d8d19fdc9a4a3e4d2fc228962253\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:54.740294 kubelet[3036]: I0128 00:48:54.740247 3036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3c4d8d19fdc9a4a3e4d2fc228962253-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.3-n-ee3b3e4916\" (UID: \"d3c4d8d19fdc9a4a3e4d2fc228962253\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:54.740294 kubelet[3036]: I0128 00:48:54.740257 3036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89bfd097b0968965794eb09d82021c70-k8s-certs\") pod 
\"kube-apiserver-ci-4459.2.3-n-ee3b3e4916\" (UID: \"89bfd097b0968965794eb09d82021c70\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:54.740401 kubelet[3036]: I0128 00:48:54.740266 3036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89bfd097b0968965794eb09d82021c70-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.3-n-ee3b3e4916\" (UID: \"89bfd097b0968965794eb09d82021c70\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:54.740401 kubelet[3036]: I0128 00:48:54.740276 3036 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d3c4d8d19fdc9a4a3e4d2fc228962253-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.3-n-ee3b3e4916\" (UID: \"d3c4d8d19fdc9a4a3e4d2fc228962253\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:54.741582 kubelet[3036]: E0128 00:48:54.741549 3036 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-ee3b3e4916?timeout=10s\": dial tcp 10.200.20.30:6443: connect: connection refused" interval="400ms" Jan 28 00:48:54.922375 kubelet[3036]: I0128 00:48:54.922270 3036 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:54.923012 kubelet[3036]: E0128 00:48:54.922571 3036 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.30:6443/api/v1/nodes\": dial tcp 10.200.20.30:6443: connect: connection refused" node="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:55.008039 containerd[1882]: time="2026-01-28T00:48:55.007994424Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.3-n-ee3b3e4916,Uid:89bfd097b0968965794eb09d82021c70,Namespace:kube-system,Attempt:0,}" Jan 28 00:48:55.017940 containerd[1882]: time="2026-01-28T00:48:55.017740214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.3-n-ee3b3e4916,Uid:d3c4d8d19fdc9a4a3e4d2fc228962253,Namespace:kube-system,Attempt:0,}" Jan 28 00:48:55.020938 containerd[1882]: time="2026-01-28T00:48:55.020691868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.3-n-ee3b3e4916,Uid:ce7db24260ca8510dc6e78698800d7cd,Namespace:kube-system,Attempt:0,}" Jan 28 00:48:55.089275 containerd[1882]: time="2026-01-28T00:48:55.089229037Z" level=info msg="connecting to shim 64e1a03415578e46ad95f7873c63c5155505403d3c19761241b2adf610b18389" address="unix:///run/containerd/s/6cd4ddf27eb30681ffebdfc614ed296ee6ede5e45c24918a0e29e24e0c7cf879" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:48:55.108306 systemd[1]: Started cri-containerd-64e1a03415578e46ad95f7873c63c5155505403d3c19761241b2adf610b18389.scope - libcontainer container 64e1a03415578e46ad95f7873c63c5155505403d3c19761241b2adf610b18389. 
Jan 28 00:48:55.120488 containerd[1882]: time="2026-01-28T00:48:55.120379178Z" level=info msg="connecting to shim dadf4576c5a67ac4ca1628f613a2688ef747275be1f9ac33ef0df0796ffd9120" address="unix:///run/containerd/s/18e4e18fb6117b97ab727dbedfdc4e6b470e814070f3f574ad58512bbbbbd695" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:48:55.122395 containerd[1882]: time="2026-01-28T00:48:55.122359273Z" level=info msg="connecting to shim 9a8d607ec248bc774baf400792993d52f0d9dfa63aa1a848d1f508387f3381d0" address="unix:///run/containerd/s/e42aa4eaf0b94739eb7948055eda43b80ce48a36ae74c2cd83c431a658ce81e2" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:48:55.142588 kubelet[3036]: E0128 00:48:55.142554 3036 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-n-ee3b3e4916?timeout=10s\": dial tcp 10.200.20.30:6443: connect: connection refused" interval="800ms" Jan 28 00:48:55.149132 systemd[1]: Started cri-containerd-9a8d607ec248bc774baf400792993d52f0d9dfa63aa1a848d1f508387f3381d0.scope - libcontainer container 9a8d607ec248bc774baf400792993d52f0d9dfa63aa1a848d1f508387f3381d0. Jan 28 00:48:55.160106 systemd[1]: Started cri-containerd-dadf4576c5a67ac4ca1628f613a2688ef747275be1f9ac33ef0df0796ffd9120.scope - libcontainer container dadf4576c5a67ac4ca1628f613a2688ef747275be1f9ac33ef0df0796ffd9120. 
Jan 28 00:48:55.170083 containerd[1882]: time="2026-01-28T00:48:55.170046236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.3-n-ee3b3e4916,Uid:89bfd097b0968965794eb09d82021c70,Namespace:kube-system,Attempt:0,} returns sandbox id \"64e1a03415578e46ad95f7873c63c5155505403d3c19761241b2adf610b18389\"" Jan 28 00:48:55.175520 containerd[1882]: time="2026-01-28T00:48:55.174508826Z" level=info msg="CreateContainer within sandbox \"64e1a03415578e46ad95f7873c63c5155505403d3c19761241b2adf610b18389\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 00:48:55.204319 containerd[1882]: time="2026-01-28T00:48:55.204216313Z" level=info msg="Container 776747f1b2b52033cd41bf523acdc293b5c910c0ce38342f8afaf999217e217a: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:48:55.207443 containerd[1882]: time="2026-01-28T00:48:55.207412175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.3-n-ee3b3e4916,Uid:ce7db24260ca8510dc6e78698800d7cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"dadf4576c5a67ac4ca1628f613a2688ef747275be1f9ac33ef0df0796ffd9120\"" Jan 28 00:48:55.211369 containerd[1882]: time="2026-01-28T00:48:55.211340931Z" level=info msg="CreateContainer within sandbox \"dadf4576c5a67ac4ca1628f613a2688ef747275be1f9ac33ef0df0796ffd9120\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 00:48:55.223996 containerd[1882]: time="2026-01-28T00:48:55.223957460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.3-n-ee3b3e4916,Uid:d3c4d8d19fdc9a4a3e4d2fc228962253,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a8d607ec248bc774baf400792993d52f0d9dfa63aa1a848d1f508387f3381d0\"" Jan 28 00:48:55.224661 containerd[1882]: time="2026-01-28T00:48:55.224630849Z" level=info msg="CreateContainer within sandbox \"64e1a03415578e46ad95f7873c63c5155505403d3c19761241b2adf610b18389\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"776747f1b2b52033cd41bf523acdc293b5c910c0ce38342f8afaf999217e217a\"" Jan 28 00:48:55.225615 containerd[1882]: time="2026-01-28T00:48:55.225509437Z" level=info msg="StartContainer for \"776747f1b2b52033cd41bf523acdc293b5c910c0ce38342f8afaf999217e217a\"" Jan 28 00:48:55.227928 containerd[1882]: time="2026-01-28T00:48:55.227893665Z" level=info msg="CreateContainer within sandbox \"9a8d607ec248bc774baf400792993d52f0d9dfa63aa1a848d1f508387f3381d0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 00:48:55.228372 containerd[1882]: time="2026-01-28T00:48:55.228338871Z" level=info msg="connecting to shim 776747f1b2b52033cd41bf523acdc293b5c910c0ce38342f8afaf999217e217a" address="unix:///run/containerd/s/6cd4ddf27eb30681ffebdfc614ed296ee6ede5e45c24918a0e29e24e0c7cf879" protocol=ttrpc version=3 Jan 28 00:48:55.246903 containerd[1882]: time="2026-01-28T00:48:55.246867708Z" level=info msg="Container e4fa30d31fd936e88b9fe58a38355d8fabd75ee5a1f3e0b9f2edd7edd6a9c8c6: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:48:55.247078 systemd[1]: Started cri-containerd-776747f1b2b52033cd41bf523acdc293b5c910c0ce38342f8afaf999217e217a.scope - libcontainer container 776747f1b2b52033cd41bf523acdc293b5c910c0ce38342f8afaf999217e217a. 
Jan 28 00:48:55.269009 containerd[1882]: time="2026-01-28T00:48:55.268974346Z" level=info msg="Container 5c8c77f10916a5eff0e192b69ededee1f1c1eb9d37104f11fe4948d1891ea5a5: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:48:55.278439 containerd[1882]: time="2026-01-28T00:48:55.278332523Z" level=info msg="CreateContainer within sandbox \"dadf4576c5a67ac4ca1628f613a2688ef747275be1f9ac33ef0df0796ffd9120\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e4fa30d31fd936e88b9fe58a38355d8fabd75ee5a1f3e0b9f2edd7edd6a9c8c6\"" Jan 28 00:48:55.279624 containerd[1882]: time="2026-01-28T00:48:55.279573250Z" level=info msg="StartContainer for \"e4fa30d31fd936e88b9fe58a38355d8fabd75ee5a1f3e0b9f2edd7edd6a9c8c6\"" Jan 28 00:48:55.281002 containerd[1882]: time="2026-01-28T00:48:55.280941142Z" level=info msg="connecting to shim e4fa30d31fd936e88b9fe58a38355d8fabd75ee5a1f3e0b9f2edd7edd6a9c8c6" address="unix:///run/containerd/s/18e4e18fb6117b97ab727dbedfdc4e6b470e814070f3f574ad58512bbbbbd695" protocol=ttrpc version=3 Jan 28 00:48:55.295159 containerd[1882]: time="2026-01-28T00:48:55.295129320Z" level=info msg="StartContainer for \"776747f1b2b52033cd41bf523acdc293b5c910c0ce38342f8afaf999217e217a\" returns successfully" Jan 28 00:48:55.297266 containerd[1882]: time="2026-01-28T00:48:55.297237227Z" level=info msg="CreateContainer within sandbox \"9a8d607ec248bc774baf400792993d52f0d9dfa63aa1a848d1f508387f3381d0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5c8c77f10916a5eff0e192b69ededee1f1c1eb9d37104f11fe4948d1891ea5a5\"" Jan 28 00:48:55.297984 containerd[1882]: time="2026-01-28T00:48:55.297955922Z" level=info msg="StartContainer for \"5c8c77f10916a5eff0e192b69ededee1f1c1eb9d37104f11fe4948d1891ea5a5\"" Jan 28 00:48:55.299007 containerd[1882]: time="2026-01-28T00:48:55.298842726Z" level=info msg="connecting to shim 5c8c77f10916a5eff0e192b69ededee1f1c1eb9d37104f11fe4948d1891ea5a5" 
address="unix:///run/containerd/s/e42aa4eaf0b94739eb7948055eda43b80ce48a36ae74c2cd83c431a658ce81e2" protocol=ttrpc version=3 Jan 28 00:48:55.301181 systemd[1]: Started cri-containerd-e4fa30d31fd936e88b9fe58a38355d8fabd75ee5a1f3e0b9f2edd7edd6a9c8c6.scope - libcontainer container e4fa30d31fd936e88b9fe58a38355d8fabd75ee5a1f3e0b9f2edd7edd6a9c8c6. Jan 28 00:48:55.324889 systemd[1]: Started cri-containerd-5c8c77f10916a5eff0e192b69ededee1f1c1eb9d37104f11fe4948d1891ea5a5.scope - libcontainer container 5c8c77f10916a5eff0e192b69ededee1f1c1eb9d37104f11fe4948d1891ea5a5. Jan 28 00:48:55.326995 kubelet[3036]: I0128 00:48:55.326969 3036 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:55.327485 kubelet[3036]: E0128 00:48:55.327454 3036 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.30:6443/api/v1/nodes\": dial tcp 10.200.20.30:6443: connect: connection refused" node="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:55.346482 kubelet[3036]: W0128 00:48:55.346428 3036 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.3-n-ee3b3e4916&limit=500&resourceVersion=0": dial tcp 10.200.20.30:6443: connect: connection refused Jan 28 00:48:55.346633 kubelet[3036]: E0128 00:48:55.346488 3036 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.3-n-ee3b3e4916&limit=500&resourceVersion=0\": dial tcp 10.200.20.30:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:48:55.358678 containerd[1882]: time="2026-01-28T00:48:55.358640930Z" level=info msg="StartContainer for \"e4fa30d31fd936e88b9fe58a38355d8fabd75ee5a1f3e0b9f2edd7edd6a9c8c6\" returns successfully" Jan 28 00:48:55.384052 containerd[1882]: 
time="2026-01-28T00:48:55.384016319Z" level=info msg="StartContainer for \"5c8c77f10916a5eff0e192b69ededee1f1c1eb9d37104f11fe4948d1891ea5a5\" returns successfully" Jan 28 00:48:55.594312 kubelet[3036]: E0128 00:48:55.594285 3036 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-ee3b3e4916\" not found" node="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:55.599485 kubelet[3036]: E0128 00:48:55.599454 3036 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-ee3b3e4916\" not found" node="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:55.599796 kubelet[3036]: E0128 00:48:55.599770 3036 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-ee3b3e4916\" not found" node="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:56.129754 kubelet[3036]: I0128 00:48:56.129720 3036 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:56.483761 kubelet[3036]: E0128 00:48:56.483647 3036 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.3-n-ee3b3e4916\" not found" node="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:56.750918 kubelet[3036]: I0128 00:48:56.750728 3036 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:56.750918 kubelet[3036]: E0128 00:48:56.750753 3036 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.2.3-n-ee3b3e4916\": node \"ci-4459.2.3-n-ee3b3e4916\" not found" Jan 28 00:48:56.755159 kubelet[3036]: E0128 00:48:56.755105 3036 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-ee3b3e4916\" not found" node="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:56.756004 kubelet[3036]: E0128 00:48:56.755390 3036 
kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-n-ee3b3e4916\" not found" node="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:56.863422 kubelet[3036]: E0128 00:48:56.863376 3036 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-ee3b3e4916\" not found" Jan 28 00:48:56.964103 kubelet[3036]: E0128 00:48:56.964053 3036 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-ee3b3e4916\" not found" Jan 28 00:48:57.064782 kubelet[3036]: E0128 00:48:57.064731 3036 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-ee3b3e4916\" not found" Jan 28 00:48:57.165689 kubelet[3036]: E0128 00:48:57.165646 3036 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-ee3b3e4916\" not found" Jan 28 00:48:57.266274 kubelet[3036]: E0128 00:48:57.266233 3036 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-ee3b3e4916\" not found" Jan 28 00:48:57.367336 kubelet[3036]: E0128 00:48:57.367209 3036 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-ee3b3e4916\" not found" Jan 28 00:48:57.468046 kubelet[3036]: E0128 00:48:57.468003 3036 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-ee3b3e4916\" not found" Jan 28 00:48:57.568292 kubelet[3036]: E0128 00:48:57.568243 3036 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-ee3b3e4916\" not found" Jan 28 00:48:57.668662 kubelet[3036]: E0128 00:48:57.668532 3036 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.3-n-ee3b3e4916\" not found" Jan 28 00:48:57.769655 kubelet[3036]: E0128 00:48:57.769609 3036 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"ci-4459.2.3-n-ee3b3e4916\" not found" Jan 28 00:48:57.939882 kubelet[3036]: I0128 00:48:57.939554 3036 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:57.953099 kubelet[3036]: W0128 00:48:57.952583 3036 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 00:48:57.953099 kubelet[3036]: I0128 00:48:57.952715 3036 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:57.960489 kubelet[3036]: W0128 00:48:57.960429 3036 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 00:48:57.960902 kubelet[3036]: I0128 00:48:57.960801 3036 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-n-ee3b3e4916" Jan 28 00:48:57.970166 kubelet[3036]: W0128 00:48:57.970033 3036 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 00:48:58.528635 kubelet[3036]: I0128 00:48:58.528354 3036 apiserver.go:52] "Watching apiserver" Jan 28 00:48:58.539402 kubelet[3036]: I0128 00:48:58.539350 3036 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 00:48:58.686783 systemd[1]: Reload requested from client PID 3305 ('systemctl') (unit session-9.scope)... Jan 28 00:48:58.686818 systemd[1]: Reloading... Jan 28 00:48:58.785199 zram_generator::config[3358]: No configuration found. Jan 28 00:48:58.932688 systemd[1]: Reloading finished in 245 ms. 
Jan 28 00:48:58.957415 kubelet[3036]: I0128 00:48:58.957373 3036 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 00:48:58.959096 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:48:58.974873 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 00:48:58.976966 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:48:58.977032 systemd[1]: kubelet.service: Consumed 530ms CPU time, 127.6M memory peak. Jan 28 00:48:58.980172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:48:59.082723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:48:59.089264 (kubelet)[3415]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 00:48:59.234657 kubelet[3415]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:48:59.234657 kubelet[3415]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 00:48:59.234657 kubelet[3415]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 28 00:48:59.236294 kubelet[3415]: I0128 00:48:59.235247 3415 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 28 00:48:59.240906 kubelet[3415]: I0128 00:48:59.240875 3415 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 28 00:48:59.240906 kubelet[3415]: I0128 00:48:59.240901 3415 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 28 00:48:59.242683 kubelet[3415]: I0128 00:48:59.241438 3415 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 28 00:48:59.243060 kubelet[3415]: I0128 00:48:59.243043 3415 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 28 00:48:59.245166 kubelet[3415]: I0128 00:48:59.245149 3415 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 28 00:48:59.248132 kubelet[3415]: I0128 00:48:59.248116 3415 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 28 00:48:59.250765 kubelet[3415]: I0128 00:48:59.250747 3415 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 28 00:48:59.251068 kubelet[3415]: I0128 00:48:59.251041 3415 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 28 00:48:59.251274 kubelet[3415]: I0128 00:48:59.251132 3415 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.3-n-ee3b3e4916","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 28 00:48:59.251415 kubelet[3415]: I0128 00:48:59.251401 3415 topology_manager.go:138] "Creating topology manager with none policy"
Jan 28 00:48:59.251463 kubelet[3415]: I0128 00:48:59.251456 3415 container_manager_linux.go:304] "Creating device plugin manager"
Jan 28 00:48:59.251538 kubelet[3415]: I0128 00:48:59.251530 3415 state_mem.go:36] "Initialized new in-memory state store"
Jan 28 00:48:59.251730 kubelet[3415]: I0128 00:48:59.251717 3415 kubelet.go:446] "Attempting to sync node with API server"
Jan 28 00:48:59.251794 kubelet[3415]: I0128 00:48:59.251785 3415 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 28 00:48:59.251860 kubelet[3415]: I0128 00:48:59.251852 3415 kubelet.go:352] "Adding apiserver pod source"
Jan 28 00:48:59.251930 kubelet[3415]: I0128 00:48:59.251902 3415 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 28 00:48:59.254964 kubelet[3415]: I0128 00:48:59.254934 3415 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 28 00:48:59.255321 kubelet[3415]: I0128 00:48:59.255305 3415 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 28 00:48:59.255709 kubelet[3415]: I0128 00:48:59.255689 3415 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 28 00:48:59.255756 kubelet[3415]: I0128 00:48:59.255714 3415 server.go:1287] "Started kubelet"
Jan 28 00:48:59.258342 kubelet[3415]: I0128 00:48:59.258321 3415 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 28 00:48:59.263683 kubelet[3415]: E0128 00:48:59.263665 3415 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 28 00:48:59.265142 kubelet[3415]: I0128 00:48:59.265107 3415 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 28 00:48:59.267387 kubelet[3415]: I0128 00:48:59.267368 3415 server.go:479] "Adding debug handlers to kubelet server"
Jan 28 00:48:59.268140 kubelet[3415]: I0128 00:48:59.268098 3415 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 28 00:48:59.268371 kubelet[3415]: I0128 00:48:59.268357 3415 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 28 00:48:59.271821 kubelet[3415]: I0128 00:48:59.271798 3415 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 28 00:48:59.272622 kubelet[3415]: I0128 00:48:59.272605 3415 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 28 00:48:59.272705 kubelet[3415]: I0128 00:48:59.272695 3415 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 28 00:48:59.272769 kubelet[3415]: I0128 00:48:59.272760 3415 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 28 00:48:59.272812 kubelet[3415]: I0128 00:48:59.272805 3415 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 28 00:48:59.272885 kubelet[3415]: E0128 00:48:59.272871 3415 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 28 00:48:59.277725 kubelet[3415]: I0128 00:48:59.277697 3415 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 28 00:48:59.278612 kubelet[3415]: I0128 00:48:59.278589 3415 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 28 00:48:59.280098 kubelet[3415]: I0128 00:48:59.279755 3415 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 28 00:48:59.280098 kubelet[3415]: I0128 00:48:59.279865 3415 reconciler.go:26] "Reconciler: start to sync state"
Jan 28 00:48:59.283768 kubelet[3415]: I0128 00:48:59.283750 3415 factory.go:221] Registration of the containerd container factory successfully
Jan 28 00:48:59.283865 kubelet[3415]: I0128 00:48:59.283854 3415 factory.go:221] Registration of the systemd container factory successfully
Jan 28 00:48:59.284013 kubelet[3415]: I0128 00:48:59.283995 3415 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 28 00:48:59.320017 kubelet[3415]: I0128 00:48:59.319986 3415 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 28 00:48:59.320017 kubelet[3415]: I0128 00:48:59.320007 3415 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 28 00:48:59.320017 kubelet[3415]: I0128 00:48:59.320027 3415 state_mem.go:36] "Initialized new in-memory state store"
Jan 28 00:48:59.320180 kubelet[3415]: I0128 00:48:59.320157 3415 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 28 00:48:59.320180 kubelet[3415]: I0128 00:48:59.320165 3415 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 28 00:48:59.320180 kubelet[3415]: I0128 00:48:59.320179 3415 policy_none.go:49] "None policy: Start"
Jan 28 00:48:59.320233 kubelet[3415]: I0128 00:48:59.320186 3415 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 28 00:48:59.320233 kubelet[3415]: I0128 00:48:59.320194 3415 state_mem.go:35] "Initializing new in-memory state store"
Jan 28 00:48:59.320274 kubelet[3415]: I0128 00:48:59.320261 3415 state_mem.go:75] "Updated machine memory state"
Jan 28 00:48:59.323618 kubelet[3415]: I0128 00:48:59.323556 3415 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 28 00:48:59.324893 kubelet[3415]: I0128 00:48:59.324640 3415 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 28 00:48:59.325042 kubelet[3415]: I0128 00:48:59.325000 3415 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 28 00:48:59.325596 kubelet[3415]: I0128 00:48:59.325395 3415 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 28 00:48:59.328605 kubelet[3415]: E0128 00:48:59.327528 3415 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 28 00:48:59.374378 kubelet[3415]: I0128 00:48:59.374338 3415 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-n-ee3b3e4916"
Jan 28 00:48:59.374750 kubelet[3415]: I0128 00:48:59.374722 3415 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ee3b3e4916"
Jan 28 00:48:59.374921 kubelet[3415]: I0128 00:48:59.374693 3415 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-n-ee3b3e4916"
Jan 28 00:48:59.392093 kubelet[3415]: W0128 00:48:59.392065 3415 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 28 00:48:59.392330 kubelet[3415]: E0128 00:48:59.392305 3415 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.3-n-ee3b3e4916\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.3-n-ee3b3e4916"
Jan 28 00:48:59.392977 kubelet[3415]: W0128 00:48:59.392963 3415 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 28 00:48:59.393280 kubelet[3415]: E0128 00:48:59.393230 3415 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.3-n-ee3b3e4916\" already exists" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ee3b3e4916"
Jan 28 00:48:59.393441 kubelet[3415]: W0128 00:48:59.393380 3415 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 28 00:48:59.393441 kubelet[3415]: E0128 00:48:59.393402 3415 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.3-n-ee3b3e4916\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.3-n-ee3b3e4916"
Jan 28 00:48:59.433885 kubelet[3415]: I0128 00:48:59.433845 3415 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-n-ee3b3e4916"
Jan 28 00:48:59.455095 kubelet[3415]: I0128 00:48:59.454945 3415 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.3-n-ee3b3e4916"
Jan 28 00:48:59.455095 kubelet[3415]: I0128 00:48:59.455031 3415 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.3-n-ee3b3e4916"
Jan 28 00:48:59.480349 kubelet[3415]: I0128 00:48:59.480303 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/89bfd097b0968965794eb09d82021c70-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.3-n-ee3b3e4916\" (UID: \"89bfd097b0968965794eb09d82021c70\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-ee3b3e4916"
Jan 28 00:48:59.480349 kubelet[3415]: I0128 00:48:59.480344 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3c4d8d19fdc9a4a3e4d2fc228962253-ca-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-ee3b3e4916\" (UID: \"d3c4d8d19fdc9a4a3e4d2fc228962253\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ee3b3e4916"
Jan 28 00:48:59.480349 kubelet[3415]: I0128 00:48:59.480356 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d3c4d8d19fdc9a4a3e4d2fc228962253-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.3-n-ee3b3e4916\" (UID: \"d3c4d8d19fdc9a4a3e4d2fc228962253\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ee3b3e4916"
Jan 28 00:48:59.480528 kubelet[3415]: I0128 00:48:59.480368 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3c4d8d19fdc9a4a3e4d2fc228962253-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.3-n-ee3b3e4916\" (UID: \"d3c4d8d19fdc9a4a3e4d2fc228962253\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ee3b3e4916"
Jan 28 00:48:59.480528 kubelet[3415]: I0128 00:48:59.480380 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce7db24260ca8510dc6e78698800d7cd-kubeconfig\") pod \"kube-scheduler-ci-4459.2.3-n-ee3b3e4916\" (UID: \"ce7db24260ca8510dc6e78698800d7cd\") " pod="kube-system/kube-scheduler-ci-4459.2.3-n-ee3b3e4916"
Jan 28 00:48:59.480528 kubelet[3415]: I0128 00:48:59.480390 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/89bfd097b0968965794eb09d82021c70-ca-certs\") pod \"kube-apiserver-ci-4459.2.3-n-ee3b3e4916\" (UID: \"89bfd097b0968965794eb09d82021c70\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-ee3b3e4916"
Jan 28 00:48:59.480528 kubelet[3415]: I0128 00:48:59.480403 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d3c4d8d19fdc9a4a3e4d2fc228962253-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.3-n-ee3b3e4916\" (UID: \"d3c4d8d19fdc9a4a3e4d2fc228962253\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ee3b3e4916"
Jan 28 00:48:59.480528 kubelet[3415]: I0128 00:48:59.480412 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3c4d8d19fdc9a4a3e4d2fc228962253-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.3-n-ee3b3e4916\" (UID: \"d3c4d8d19fdc9a4a3e4d2fc228962253\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ee3b3e4916"
Jan 28 00:48:59.480643 kubelet[3415]: I0128 00:48:59.480425 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/89bfd097b0968965794eb09d82021c70-k8s-certs\") pod \"kube-apiserver-ci-4459.2.3-n-ee3b3e4916\" (UID: \"89bfd097b0968965794eb09d82021c70\") " pod="kube-system/kube-apiserver-ci-4459.2.3-n-ee3b3e4916"
Jan 28 00:49:00.253563 kubelet[3415]: I0128 00:49:00.253514 3415 apiserver.go:52] "Watching apiserver"
Jan 28 00:49:00.280460 kubelet[3415]: I0128 00:49:00.280415 3415 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 28 00:49:00.307993 kubelet[3415]: I0128 00:49:00.307263 3415 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-n-ee3b3e4916"
Jan 28 00:49:00.318943 kubelet[3415]: W0128 00:49:00.317993 3415 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 28 00:49:00.319327 kubelet[3415]: E0128 00:49:00.319189 3415 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.3-n-ee3b3e4916\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.3-n-ee3b3e4916"
Jan 28 00:49:00.340166 kubelet[3415]: I0128 00:49:00.340110 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.3-n-ee3b3e4916" podStartSLOduration=3.34009502 podStartE2EDuration="3.34009502s" podCreationTimestamp="2026-01-28 00:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:49:00.339850076 +0000 UTC m=+1.248162451" watchObservedRunningTime="2026-01-28 00:49:00.34009502 +0000 UTC m=+1.248407387"
Jan 28 00:49:00.341052 kubelet[3415]: I0128 00:49:00.340927 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.3-n-ee3b3e4916" podStartSLOduration=3.340906078 podStartE2EDuration="3.340906078s" podCreationTimestamp="2026-01-28 00:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:49:00.328066378 +0000 UTC m=+1.236378761" watchObservedRunningTime="2026-01-28 00:49:00.340906078 +0000 UTC m=+1.249218453"
Jan 28 00:49:05.018621 kubelet[3415]: I0128 00:49:05.018526 3415 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 28 00:49:05.019200 containerd[1882]: time="2026-01-28T00:49:05.019143839Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 28 00:49:05.019524 kubelet[3415]: I0128 00:49:05.019453 3415 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 28 00:49:05.772698 kubelet[3415]: I0128 00:49:05.772588 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.3-n-ee3b3e4916" podStartSLOduration=8.772570774 podStartE2EDuration="8.772570774s" podCreationTimestamp="2026-01-28 00:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:49:00.352090373 +0000 UTC m=+1.260402748" watchObservedRunningTime="2026-01-28 00:49:05.772570774 +0000 UTC m=+6.680883141"
Jan 28 00:49:05.779948 systemd[1]: Created slice kubepods-besteffort-pod1a0a00c6_326e_4972_b7c2_25be689be4f0.slice - libcontainer container kubepods-besteffort-pod1a0a00c6_326e_4972_b7c2_25be689be4f0.slice.
Jan 28 00:49:05.819653 kubelet[3415]: I0128 00:49:05.819327 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1a0a00c6-326e-4972-b7c2-25be689be4f0-kube-proxy\") pod \"kube-proxy-b9kqg\" (UID: \"1a0a00c6-326e-4972-b7c2-25be689be4f0\") " pod="kube-system/kube-proxy-b9kqg"
Jan 28 00:49:05.819653 kubelet[3415]: I0128 00:49:05.819473 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a0a00c6-326e-4972-b7c2-25be689be4f0-lib-modules\") pod \"kube-proxy-b9kqg\" (UID: \"1a0a00c6-326e-4972-b7c2-25be689be4f0\") " pod="kube-system/kube-proxy-b9kqg"
Jan 28 00:49:05.819653 kubelet[3415]: I0128 00:49:05.819544 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a0a00c6-326e-4972-b7c2-25be689be4f0-xtables-lock\") pod \"kube-proxy-b9kqg\" (UID: \"1a0a00c6-326e-4972-b7c2-25be689be4f0\") " pod="kube-system/kube-proxy-b9kqg"
Jan 28 00:49:05.819653 kubelet[3415]: I0128 00:49:05.819574 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxpgs\" (UniqueName: \"kubernetes.io/projected/1a0a00c6-326e-4972-b7c2-25be689be4f0-kube-api-access-mxpgs\") pod \"kube-proxy-b9kqg\" (UID: \"1a0a00c6-326e-4972-b7c2-25be689be4f0\") " pod="kube-system/kube-proxy-b9kqg"
Jan 28 00:49:06.087607 containerd[1882]: time="2026-01-28T00:49:06.087295767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b9kqg,Uid:1a0a00c6-326e-4972-b7c2-25be689be4f0,Namespace:kube-system,Attempt:0,}"
Jan 28 00:49:06.138668 containerd[1882]: time="2026-01-28T00:49:06.138620526Z" level=info msg="connecting to shim 481471843337a97edc257b83f55b7e1907933e4d3e660e846a3468177bb57431" address="unix:///run/containerd/s/ae1bd3d3a1b4c22c5be2d85b84849392b1e983aa190539303ab2d414e91edfe6" namespace=k8s.io protocol=ttrpc version=3
Jan 28 00:49:06.172174 systemd[1]: Started cri-containerd-481471843337a97edc257b83f55b7e1907933e4d3e660e846a3468177bb57431.scope - libcontainer container 481471843337a97edc257b83f55b7e1907933e4d3e660e846a3468177bb57431.
Jan 28 00:49:06.179159 systemd[1]: Created slice kubepods-besteffort-podd8a9340c_27d4_485a_a873_4c35759d92b5.slice - libcontainer container kubepods-besteffort-podd8a9340c_27d4_485a_a873_4c35759d92b5.slice.
Jan 28 00:49:06.207925 containerd[1882]: time="2026-01-28T00:49:06.207870935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b9kqg,Uid:1a0a00c6-326e-4972-b7c2-25be689be4f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"481471843337a97edc257b83f55b7e1907933e4d3e660e846a3468177bb57431\""
Jan 28 00:49:06.211760 containerd[1882]: time="2026-01-28T00:49:06.211722313Z" level=info msg="CreateContainer within sandbox \"481471843337a97edc257b83f55b7e1907933e4d3e660e846a3468177bb57431\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 28 00:49:06.223043 kubelet[3415]: I0128 00:49:06.222980 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfd4b\" (UniqueName: \"kubernetes.io/projected/d8a9340c-27d4-485a-a873-4c35759d92b5-kube-api-access-rfd4b\") pod \"tigera-operator-7dcd859c48-72ppg\" (UID: \"d8a9340c-27d4-485a-a873-4c35759d92b5\") " pod="tigera-operator/tigera-operator-7dcd859c48-72ppg"
Jan 28 00:49:06.223043 kubelet[3415]: I0128 00:49:06.223020 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d8a9340c-27d4-485a-a873-4c35759d92b5-var-lib-calico\") pod \"tigera-operator-7dcd859c48-72ppg\" (UID: \"d8a9340c-27d4-485a-a873-4c35759d92b5\") " pod="tigera-operator/tigera-operator-7dcd859c48-72ppg"
Jan 28 00:49:06.247473 containerd[1882]: time="2026-01-28T00:49:06.247170488Z" level=info msg="Container 7c24eb4956d1b1d43ddbf7ae53ab459ae60132d3cd469aea4a2fef19483793da: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:49:06.267194 containerd[1882]: time="2026-01-28T00:49:06.267144810Z" level=info msg="CreateContainer within sandbox \"481471843337a97edc257b83f55b7e1907933e4d3e660e846a3468177bb57431\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7c24eb4956d1b1d43ddbf7ae53ab459ae60132d3cd469aea4a2fef19483793da\""
Jan 28 00:49:06.267814 containerd[1882]: time="2026-01-28T00:49:06.267765510Z" level=info msg="StartContainer for \"7c24eb4956d1b1d43ddbf7ae53ab459ae60132d3cd469aea4a2fef19483793da\""
Jan 28 00:49:06.269156 containerd[1882]: time="2026-01-28T00:49:06.269129177Z" level=info msg="connecting to shim 7c24eb4956d1b1d43ddbf7ae53ab459ae60132d3cd469aea4a2fef19483793da" address="unix:///run/containerd/s/ae1bd3d3a1b4c22c5be2d85b84849392b1e983aa190539303ab2d414e91edfe6" protocol=ttrpc version=3
Jan 28 00:49:06.292093 systemd[1]: Started cri-containerd-7c24eb4956d1b1d43ddbf7ae53ab459ae60132d3cd469aea4a2fef19483793da.scope - libcontainer container 7c24eb4956d1b1d43ddbf7ae53ab459ae60132d3cd469aea4a2fef19483793da.
Jan 28 00:49:06.353661 containerd[1882]: time="2026-01-28T00:49:06.353497459Z" level=info msg="StartContainer for \"7c24eb4956d1b1d43ddbf7ae53ab459ae60132d3cd469aea4a2fef19483793da\" returns successfully"
Jan 28 00:49:06.483389 containerd[1882]: time="2026-01-28T00:49:06.483344153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-72ppg,Uid:d8a9340c-27d4-485a-a873-4c35759d92b5,Namespace:tigera-operator,Attempt:0,}"
Jan 28 00:49:06.518591 containerd[1882]: time="2026-01-28T00:49:06.518212245Z" level=info msg="connecting to shim a10c0d54e46e8f1a5f3131b94592f077a35b82416771d2f3d62c149eae5c03f3" address="unix:///run/containerd/s/8eea000dda4b830446e7a7efe36b0b29464ecc4f6c31d4078f4f0170e76af9b8" namespace=k8s.io protocol=ttrpc version=3
Jan 28 00:49:06.534077 systemd[1]: Started cri-containerd-a10c0d54e46e8f1a5f3131b94592f077a35b82416771d2f3d62c149eae5c03f3.scope - libcontainer container a10c0d54e46e8f1a5f3131b94592f077a35b82416771d2f3d62c149eae5c03f3.
Jan 28 00:49:06.568644 containerd[1882]: time="2026-01-28T00:49:06.568604942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-72ppg,Uid:d8a9340c-27d4-485a-a873-4c35759d92b5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a10c0d54e46e8f1a5f3131b94592f077a35b82416771d2f3d62c149eae5c03f3\""
Jan 28 00:49:06.571808 containerd[1882]: time="2026-01-28T00:49:06.571782779Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 28 00:49:08.327571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1068370863.mount: Deactivated successfully.
Jan 28 00:49:08.927043 containerd[1882]: time="2026-01-28T00:49:08.926991009Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:49:08.931245 containerd[1882]: time="2026-01-28T00:49:08.931208671Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004"
Jan 28 00:49:08.934954 containerd[1882]: time="2026-01-28T00:49:08.934772113Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:49:08.940929 containerd[1882]: time="2026-01-28T00:49:08.940888123Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:49:08.941587 containerd[1882]: time="2026-01-28T00:49:08.941303424Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.369371088s"
Jan 28 00:49:08.941587 containerd[1882]: time="2026-01-28T00:49:08.941330169Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\""
Jan 28 00:49:08.943632 containerd[1882]: time="2026-01-28T00:49:08.943604881Z" level=info msg="CreateContainer within sandbox \"a10c0d54e46e8f1a5f3131b94592f077a35b82416771d2f3d62c149eae5c03f3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 28 00:49:08.969742 containerd[1882]: time="2026-01-28T00:49:08.969278841Z" level=info msg="Container 326008b1380baa0fd9c8b9e3fc31bbabe9277ff1889152e818415dc54c58b6ed: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:49:08.971155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4011381955.mount: Deactivated successfully.
Jan 28 00:49:08.987711 containerd[1882]: time="2026-01-28T00:49:08.987657801Z" level=info msg="CreateContainer within sandbox \"a10c0d54e46e8f1a5f3131b94592f077a35b82416771d2f3d62c149eae5c03f3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"326008b1380baa0fd9c8b9e3fc31bbabe9277ff1889152e818415dc54c58b6ed\""
Jan 28 00:49:08.988544 containerd[1882]: time="2026-01-28T00:49:08.988522413Z" level=info msg="StartContainer for \"326008b1380baa0fd9c8b9e3fc31bbabe9277ff1889152e818415dc54c58b6ed\""
Jan 28 00:49:08.989485 containerd[1882]: time="2026-01-28T00:49:08.989446690Z" level=info msg="connecting to shim 326008b1380baa0fd9c8b9e3fc31bbabe9277ff1889152e818415dc54c58b6ed" address="unix:///run/containerd/s/8eea000dda4b830446e7a7efe36b0b29464ecc4f6c31d4078f4f0170e76af9b8" protocol=ttrpc version=3
Jan 28 00:49:09.008081 systemd[1]: Started cri-containerd-326008b1380baa0fd9c8b9e3fc31bbabe9277ff1889152e818415dc54c58b6ed.scope - libcontainer container 326008b1380baa0fd9c8b9e3fc31bbabe9277ff1889152e818415dc54c58b6ed.
Jan 28 00:49:09.037092 containerd[1882]: time="2026-01-28T00:49:09.037058675Z" level=info msg="StartContainer for \"326008b1380baa0fd9c8b9e3fc31bbabe9277ff1889152e818415dc54c58b6ed\" returns successfully"
Jan 28 00:49:09.344858 kubelet[3415]: I0128 00:49:09.344556 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b9kqg" podStartSLOduration=4.344539374 podStartE2EDuration="4.344539374s" podCreationTimestamp="2026-01-28 00:49:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:49:07.338032554 +0000 UTC m=+8.246344929" watchObservedRunningTime="2026-01-28 00:49:09.344539374 +0000 UTC m=+10.252851741"
Jan 28 00:49:12.476241 kubelet[3415]: I0128 00:49:12.476186 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-72ppg" podStartSLOduration=4.104741912 podStartE2EDuration="6.476166825s" podCreationTimestamp="2026-01-28 00:49:06 +0000 UTC" firstStartedPulling="2026-01-28 00:49:06.570545132 +0000 UTC m=+7.478857499" lastFinishedPulling="2026-01-28 00:49:08.941970045 +0000 UTC m=+9.850282412" observedRunningTime="2026-01-28 00:49:09.34477647 +0000 UTC m=+10.253088845" watchObservedRunningTime="2026-01-28 00:49:12.476166825 +0000 UTC m=+13.384479200"
Jan 28 00:49:14.320424 sudo[2460]: pam_unix(sudo:session): session closed for user root
Jan 28 00:49:14.393343 sshd[2459]: Connection closed by 10.200.16.10 port 39018
Jan 28 00:49:14.392451 sshd-session[2456]: pam_unix(sshd:session): session closed for user core
Jan 28 00:49:14.396345 systemd[1]: sshd@6-10.200.20.30:22-10.200.16.10:39018.service: Deactivated successfully.
Jan 28 00:49:14.398636 systemd[1]: session-9.scope: Deactivated successfully.
Jan 28 00:49:14.401121 systemd[1]: session-9.scope: Consumed 2.899s CPU time, 219.3M memory peak.
Jan 28 00:49:14.404276 systemd-logind[1867]: Session 9 logged out. Waiting for processes to exit.
Jan 28 00:49:14.407258 systemd-logind[1867]: Removed session 9.
Jan 28 00:49:21.574292 systemd[1]: Created slice kubepods-besteffort-pod005f0edd_4dcf_4675_90dc_d081c10bc651.slice - libcontainer container kubepods-besteffort-pod005f0edd_4dcf_4675_90dc_d081c10bc651.slice.
Jan 28 00:49:21.621452 kubelet[3415]: I0128 00:49:21.621380 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/005f0edd-4dcf-4675-90dc-d081c10bc651-typha-certs\") pod \"calico-typha-7df67f4dfc-d5grg\" (UID: \"005f0edd-4dcf-4675-90dc-d081c10bc651\") " pod="calico-system/calico-typha-7df67f4dfc-d5grg"
Jan 28 00:49:21.621452 kubelet[3415]: I0128 00:49:21.621428 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/005f0edd-4dcf-4675-90dc-d081c10bc651-tigera-ca-bundle\") pod \"calico-typha-7df67f4dfc-d5grg\" (UID: \"005f0edd-4dcf-4675-90dc-d081c10bc651\") " pod="calico-system/calico-typha-7df67f4dfc-d5grg"
Jan 28 00:49:21.621452 kubelet[3415]: I0128 00:49:21.621452 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvrpj\" (UniqueName: \"kubernetes.io/projected/005f0edd-4dcf-4675-90dc-d081c10bc651-kube-api-access-hvrpj\") pod \"calico-typha-7df67f4dfc-d5grg\" (UID: \"005f0edd-4dcf-4675-90dc-d081c10bc651\") " pod="calico-system/calico-typha-7df67f4dfc-d5grg"
Jan 28 00:49:21.799446 systemd[1]: Created slice kubepods-besteffort-pod8c8ceab6_d026_4c45_a854_0e00aeb807b2.slice - libcontainer container kubepods-besteffort-pod8c8ceab6_d026_4c45_a854_0e00aeb807b2.slice.
Jan 28 00:49:21.822278 kubelet[3415]: I0128 00:49:21.822207 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8c8ceab6-d026-4c45-a854-0e00aeb807b2-cni-log-dir\") pod \"calico-node-t4vxq\" (UID: \"8c8ceab6-d026-4c45-a854-0e00aeb807b2\") " pod="calico-system/calico-node-t4vxq"
Jan 28 00:49:21.822278 kubelet[3415]: I0128 00:49:21.822277 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8c8ceab6-d026-4c45-a854-0e00aeb807b2-cni-net-dir\") pod \"calico-node-t4vxq\" (UID: \"8c8ceab6-d026-4c45-a854-0e00aeb807b2\") " pod="calico-system/calico-node-t4vxq"
Jan 28 00:49:21.822278 kubelet[3415]: I0128 00:49:21.822289 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8c8ceab6-d026-4c45-a854-0e00aeb807b2-node-certs\") pod \"calico-node-t4vxq\" (UID: \"8c8ceab6-d026-4c45-a854-0e00aeb807b2\") " pod="calico-system/calico-node-t4vxq"
Jan 28 00:49:21.822461 kubelet[3415]: I0128 00:49:21.822300 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8c8ceab6-d026-4c45-a854-0e00aeb807b2-policysync\") pod \"calico-node-t4vxq\" (UID: \"8c8ceab6-d026-4c45-a854-0e00aeb807b2\") " pod="calico-system/calico-node-t4vxq"
Jan 28 00:49:21.822461 kubelet[3415]: I0128 00:49:21.822331 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c8ceab6-d026-4c45-a854-0e00aeb807b2-xtables-lock\") pod \"calico-node-t4vxq\" (UID: \"8c8ceab6-d026-4c45-a854-0e00aeb807b2\") " pod="calico-system/calico-node-t4vxq"
Jan 28 00:49:21.822461 kubelet[3415]: I0128 00:49:21.822347 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8c8ceab6-d026-4c45-a854-0e00aeb807b2-cni-bin-dir\") pod \"calico-node-t4vxq\" (UID: \"8c8ceab6-d026-4c45-a854-0e00aeb807b2\") " pod="calico-system/calico-node-t4vxq"
Jan 28 00:49:21.822461 kubelet[3415]: I0128 00:49:21.822355 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8c8ceab6-d026-4c45-a854-0e00aeb807b2-var-run-calico\") pod \"calico-node-t4vxq\" (UID: \"8c8ceab6-d026-4c45-a854-0e00aeb807b2\") " pod="calico-system/calico-node-t4vxq"
Jan 28 00:49:21.822461 kubelet[3415]: I0128 00:49:21.822366 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8c8ceab6-d026-4c45-a854-0e00aeb807b2-var-lib-calico\") pod \"calico-node-t4vxq\" (UID: \"8c8ceab6-d026-4c45-a854-0e00aeb807b2\") " pod="calico-system/calico-node-t4vxq"
Jan 28 00:49:21.822543 kubelet[3415]: I0128 00:49:21.822375 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv9rr\" (UniqueName: \"kubernetes.io/projected/8c8ceab6-d026-4c45-a854-0e00aeb807b2-kube-api-access-jv9rr\") pod \"calico-node-t4vxq\" (UID: \"8c8ceab6-d026-4c45-a854-0e00aeb807b2\") " pod="calico-system/calico-node-t4vxq"
Jan 28 00:49:21.822543 kubelet[3415]: I0128 00:49:21.822414 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8c8ceab6-d026-4c45-a854-0e00aeb807b2-flexvol-driver-host\") pod \"calico-node-t4vxq\" (UID: \"8c8ceab6-d026-4c45-a854-0e00aeb807b2\") " pod="calico-system/calico-node-t4vxq"
Jan 28 00:49:21.822543 kubelet[3415]: I0128 00:49:21.822426 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c8ceab6-d026-4c45-a854-0e00aeb807b2-lib-modules\") pod \"calico-node-t4vxq\" (UID: \"8c8ceab6-d026-4c45-a854-0e00aeb807b2\") " pod="calico-system/calico-node-t4vxq"
Jan 28 00:49:21.822543 kubelet[3415]: I0128 00:49:21.822437 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c8ceab6-d026-4c45-a854-0e00aeb807b2-tigera-ca-bundle\") pod \"calico-node-t4vxq\" (UID: \"8c8ceab6-d026-4c45-a854-0e00aeb807b2\") " pod="calico-system/calico-node-t4vxq"
Jan 28 00:49:21.879485 containerd[1882]: time="2026-01-28T00:49:21.879354038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7df67f4dfc-d5grg,Uid:005f0edd-4dcf-4675-90dc-d081c10bc651,Namespace:calico-system,Attempt:0,}"
Jan 28 00:49:21.929698 kubelet[3415]: E0128 00:49:21.929665 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 00:49:21.931999 kubelet[3415]: W0128 00:49:21.929753 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 00:49:21.931999 kubelet[3415]: E0128 00:49:21.929774 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 28 00:49:21.936051 kubelet[3415]: E0128 00:49:21.935868 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:21.936051 kubelet[3415]: W0128 00:49:21.935993 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:21.936051 kubelet[3415]: E0128 00:49:21.936016 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:21.937956 containerd[1882]: time="2026-01-28T00:49:21.937882742Z" level=info msg="connecting to shim 65b25409aeac7e5586ca820525ed07eb3a842c0768514586a42cf185822887be" address="unix:///run/containerd/s/59118ad152bac84be7b1b6002322cb5b1dc26be2920aa0aec88464500dab1037" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:49:21.940361 kubelet[3415]: E0128 00:49:21.940322 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:21.940361 kubelet[3415]: W0128 00:49:21.940336 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:21.940494 kubelet[3415]: E0128 00:49:21.940352 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:21.966442 systemd[1]: Started cri-containerd-65b25409aeac7e5586ca820525ed07eb3a842c0768514586a42cf185822887be.scope - libcontainer container 65b25409aeac7e5586ca820525ed07eb3a842c0768514586a42cf185822887be. 
Jan 28 00:49:22.003164 kubelet[3415]: E0128 00:49:22.002798 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814" Jan 28 00:49:22.009978 kubelet[3415]: E0128 00:49:22.009796 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.009978 kubelet[3415]: W0128 00:49:22.009875 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.009978 kubelet[3415]: E0128 00:49:22.009906 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:22.010525 kubelet[3415]: E0128 00:49:22.010477 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.010525 kubelet[3415]: W0128 00:49:22.010490 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.010525 kubelet[3415]: E0128 00:49:22.010501 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:22.011085 kubelet[3415]: E0128 00:49:22.011013 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.011085 kubelet[3415]: W0128 00:49:22.011026 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.011085 kubelet[3415]: E0128 00:49:22.011037 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:22.011478 kubelet[3415]: E0128 00:49:22.011406 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.011478 kubelet[3415]: W0128 00:49:22.011416 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.011478 kubelet[3415]: E0128 00:49:22.011426 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:22.011989 kubelet[3415]: E0128 00:49:22.011931 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.011989 kubelet[3415]: W0128 00:49:22.011945 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.011989 kubelet[3415]: E0128 00:49:22.011955 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:22.012484 kubelet[3415]: E0128 00:49:22.012413 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.012484 kubelet[3415]: W0128 00:49:22.012424 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.012484 kubelet[3415]: E0128 00:49:22.012450 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:22.013044 kubelet[3415]: E0128 00:49:22.012981 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.013044 kubelet[3415]: W0128 00:49:22.012993 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.013044 kubelet[3415]: E0128 00:49:22.013007 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:22.013504 kubelet[3415]: E0128 00:49:22.013494 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.013724 kubelet[3415]: W0128 00:49:22.013640 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.013724 kubelet[3415]: E0128 00:49:22.013676 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:22.014142 kubelet[3415]: E0128 00:49:22.014129 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.014490 kubelet[3415]: W0128 00:49:22.014199 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.014490 kubelet[3415]: E0128 00:49:22.014211 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:22.014807 kubelet[3415]: E0128 00:49:22.014796 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.014877 kubelet[3415]: W0128 00:49:22.014868 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.014958 kubelet[3415]: E0128 00:49:22.014950 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:22.015211 kubelet[3415]: E0128 00:49:22.015196 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.015336 kubelet[3415]: W0128 00:49:22.015263 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.015336 kubelet[3415]: E0128 00:49:22.015276 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:22.015499 kubelet[3415]: E0128 00:49:22.015490 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.015632 kubelet[3415]: W0128 00:49:22.015543 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.015632 kubelet[3415]: E0128 00:49:22.015555 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:22.016188 kubelet[3415]: E0128 00:49:22.016073 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.016188 kubelet[3415]: W0128 00:49:22.016087 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.016188 kubelet[3415]: E0128 00:49:22.016099 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:22.016397 kubelet[3415]: E0128 00:49:22.016313 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.016397 kubelet[3415]: W0128 00:49:22.016324 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.016397 kubelet[3415]: E0128 00:49:22.016333 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:22.016574 kubelet[3415]: E0128 00:49:22.016523 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.016574 kubelet[3415]: W0128 00:49:22.016532 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.016574 kubelet[3415]: E0128 00:49:22.016541 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:22.017008 kubelet[3415]: E0128 00:49:22.016836 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.017008 kubelet[3415]: W0128 00:49:22.016849 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.017008 kubelet[3415]: E0128 00:49:22.016859 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:22.017226 kubelet[3415]: E0128 00:49:22.017180 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.017226 kubelet[3415]: W0128 00:49:22.017190 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.017226 kubelet[3415]: E0128 00:49:22.017198 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:22.017404 kubelet[3415]: E0128 00:49:22.017395 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.017496 kubelet[3415]: W0128 00:49:22.017451 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.017496 kubelet[3415]: E0128 00:49:22.017465 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:22.017746 kubelet[3415]: E0128 00:49:22.017696 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.017746 kubelet[3415]: W0128 00:49:22.017706 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.017746 kubelet[3415]: E0128 00:49:22.017715 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:22.018061 kubelet[3415]: E0128 00:49:22.017985 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.018061 kubelet[3415]: W0128 00:49:22.017995 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.018061 kubelet[3415]: E0128 00:49:22.018004 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:22.022500 containerd[1882]: time="2026-01-28T00:49:22.022466558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7df67f4dfc-d5grg,Uid:005f0edd-4dcf-4675-90dc-d081c10bc651,Namespace:calico-system,Attempt:0,} returns sandbox id \"65b25409aeac7e5586ca820525ed07eb3a842c0768514586a42cf185822887be\"" Jan 28 00:49:22.024447 containerd[1882]: time="2026-01-28T00:49:22.024366745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 28 00:49:22.025895 kubelet[3415]: E0128 00:49:22.025874 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.026097 kubelet[3415]: W0128 00:49:22.025982 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.026097 kubelet[3415]: E0128 00:49:22.026000 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:22.026097 kubelet[3415]: I0128 00:49:22.026021 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bcaf25ee-c8ae-4368-867f-6ea868477814-registration-dir\") pod \"csi-node-driver-mzfbp\" (UID: \"bcaf25ee-c8ae-4368-867f-6ea868477814\") " pod="calico-system/csi-node-driver-mzfbp" Jan 28 00:49:22.026339 kubelet[3415]: E0128 00:49:22.026313 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.026497 kubelet[3415]: W0128 00:49:22.026423 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.026497 kubelet[3415]: E0128 00:49:22.026453 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:22.026497 kubelet[3415]: I0128 00:49:22.026472 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bcaf25ee-c8ae-4368-867f-6ea868477814-kubelet-dir\") pod \"csi-node-driver-mzfbp\" (UID: \"bcaf25ee-c8ae-4368-867f-6ea868477814\") " pod="calico-system/csi-node-driver-mzfbp" Jan 28 00:49:22.026661 kubelet[3415]: E0128 00:49:22.026637 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.026661 kubelet[3415]: W0128 00:49:22.026651 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.026819 kubelet[3415]: E0128 00:49:22.026667 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:22.026928 kubelet[3415]: E0128 00:49:22.026897 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.026928 kubelet[3415]: W0128 00:49:22.026906 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.027011 kubelet[3415]: E0128 00:49:22.026932 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:22.027060 kubelet[3415]: E0128 00:49:22.027040 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.027060 kubelet[3415]: W0128 00:49:22.027049 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.027060 kubelet[3415]: E0128 00:49:22.027059 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:22.027176 kubelet[3415]: I0128 00:49:22.027072 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bcaf25ee-c8ae-4368-867f-6ea868477814-varrun\") pod \"csi-node-driver-mzfbp\" (UID: \"bcaf25ee-c8ae-4368-867f-6ea868477814\") " pod="calico-system/csi-node-driver-mzfbp" Jan 28 00:49:22.027274 kubelet[3415]: E0128 00:49:22.027261 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.027274 kubelet[3415]: W0128 00:49:22.027270 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.027352 kubelet[3415]: E0128 00:49:22.027280 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:22.027352 kubelet[3415]: I0128 00:49:22.027292 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t2vd\" (UniqueName: \"kubernetes.io/projected/bcaf25ee-c8ae-4368-867f-6ea868477814-kube-api-access-2t2vd\") pod \"csi-node-driver-mzfbp\" (UID: \"bcaf25ee-c8ae-4368-867f-6ea868477814\") " pod="calico-system/csi-node-driver-mzfbp" Jan 28 00:49:22.027431 kubelet[3415]: E0128 00:49:22.027393 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.027431 kubelet[3415]: W0128 00:49:22.027398 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.027431 kubelet[3415]: E0128 00:49:22.027406 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:22.027431 kubelet[3415]: I0128 00:49:22.027416 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bcaf25ee-c8ae-4368-867f-6ea868477814-socket-dir\") pod \"csi-node-driver-mzfbp\" (UID: \"bcaf25ee-c8ae-4368-867f-6ea868477814\") " pod="calico-system/csi-node-driver-mzfbp" Jan 28 00:49:22.027727 kubelet[3415]: E0128 00:49:22.027566 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.027727 kubelet[3415]: W0128 00:49:22.027575 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.027727 kubelet[3415]: E0128 00:49:22.027588 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:22.027839 kubelet[3415]: E0128 00:49:22.027827 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.027889 kubelet[3415]: W0128 00:49:22.027880 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.027966 kubelet[3415]: E0128 00:49:22.027953 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:22.028280 kubelet[3415]: E0128 00:49:22.028255 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.028280 kubelet[3415]: W0128 00:49:22.028271 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.028280 kubelet[3415]: E0128 00:49:22.028287 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:22.028598 kubelet[3415]: E0128 00:49:22.028564 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:22.028689 kubelet[3415]: W0128 00:49:22.028578 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:22.028789 kubelet[3415]: E0128 00:49:22.028759 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 28 00:49:22.028890 kubelet[3415]: E0128 00:49:22.028861 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 00:49:22.028890 kubelet[3415]: W0128 00:49:22.028871 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 00:49:22.029173 kubelet[3415]: E0128 00:49:22.029112 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 28 00:49:22.105594 containerd[1882]: time="2026-01-28T00:49:22.105231846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t4vxq,Uid:8c8ceab6-d026-4c45-a854-0e00aeb807b2,Namespace:calico-system,Attempt:0,}"
Jan 28 00:49:22.159126 containerd[1882]: time="2026-01-28T00:49:22.159060324Z" level=info msg="connecting to shim ab76223edb68df54d365bb419236d96e6704a0122731b71f2c56fca7b817c78b" address="unix:///run/containerd/s/b8796dd3efa17dc358dfadcf065921f403e69f9f05e1b1ba8ebb8f75364e3794" namespace=k8s.io protocol=ttrpc version=3
Jan 28 00:49:22.187070 systemd[1]: Started cri-containerd-ab76223edb68df54d365bb419236d96e6704a0122731b71f2c56fca7b817c78b.scope - libcontainer container ab76223edb68df54d365bb419236d96e6704a0122731b71f2c56fca7b817c78b.
Jan 28 00:49:22.218874 containerd[1882]: time="2026-01-28T00:49:22.218827203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t4vxq,Uid:8c8ceab6-d026-4c45-a854-0e00aeb807b2,Namespace:calico-system,Attempt:0,} returns sandbox id \"ab76223edb68df54d365bb419236d96e6704a0122731b71f2c56fca7b817c78b\""
Jan 28 00:49:23.287230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3143826634.mount: Deactivated successfully.
Jan 28 00:49:24.273316 kubelet[3415]: E0128 00:49:24.273135 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814"
Jan 28 00:49:24.392620 containerd[1882]: time="2026-01-28T00:49:24.392365516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:49:24.396490 containerd[1882]: time="2026-01-28T00:49:24.396329791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Jan 28 00:49:24.403271 containerd[1882]: time="2026-01-28T00:49:24.403232973Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:49:24.420351 containerd[1882]: time="2026-01-28T00:49:24.420308839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 00:49:24.420981 containerd[1882]: time="2026-01-28T00:49:24.420700259Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.396304609s"
Jan 28 00:49:24.420981 containerd[1882]: time="2026-01-28T00:49:24.420728972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Jan 28 00:49:24.421571 containerd[1882]: time="2026-01-28T00:49:24.421546717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 28 00:49:24.434211 containerd[1882]: time="2026-01-28T00:49:24.434173773Z" level=info msg="CreateContainer within sandbox \"65b25409aeac7e5586ca820525ed07eb3a842c0768514586a42cf185822887be\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 28 00:49:24.457932 containerd[1882]: time="2026-01-28T00:49:24.456340637Z" level=info msg="Container af980723dbeda4c5b463d98a8fb88a1edfe7d1fda1ec6893d2368ace745da76a: CDI devices from CRI Config.CDIDevices: []"
Jan 28 00:49:24.477662 containerd[1882]: time="2026-01-28T00:49:24.477595696Z" level=info msg="CreateContainer within sandbox \"65b25409aeac7e5586ca820525ed07eb3a842c0768514586a42cf185822887be\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"af980723dbeda4c5b463d98a8fb88a1edfe7d1fda1ec6893d2368ace745da76a\""
Jan 28 00:49:24.478371 containerd[1882]: time="2026-01-28T00:49:24.478302174Z" level=info msg="StartContainer for \"af980723dbeda4c5b463d98a8fb88a1edfe7d1fda1ec6893d2368ace745da76a\""
Jan 28 00:49:24.479720 containerd[1882]: time="2026-01-28T00:49:24.479695017Z" level=info msg="connecting to shim af980723dbeda4c5b463d98a8fb88a1edfe7d1fda1ec6893d2368ace745da76a" address="unix:///run/containerd/s/59118ad152bac84be7b1b6002322cb5b1dc26be2920aa0aec88464500dab1037" protocol=ttrpc version=3
Jan 28 00:49:24.501063 systemd[1]: Started cri-containerd-af980723dbeda4c5b463d98a8fb88a1edfe7d1fda1ec6893d2368ace745da76a.scope - libcontainer container af980723dbeda4c5b463d98a8fb88a1edfe7d1fda1ec6893d2368ace745da76a.
Jan 28 00:49:24.536926 containerd[1882]: time="2026-01-28T00:49:24.536681034Z" level=info msg="StartContainer for \"af980723dbeda4c5b463d98a8fb88a1edfe7d1fda1ec6893d2368ace745da76a\" returns successfully"
Jan 28 00:49:25.372349 kubelet[3415]: I0128 00:49:25.372294 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7df67f4dfc-d5grg" podStartSLOduration=1.974728361 podStartE2EDuration="4.372280361s" podCreationTimestamp="2026-01-28 00:49:21 +0000 UTC" firstStartedPulling="2026-01-28 00:49:22.023856801 +0000 UTC m=+22.932169168" lastFinishedPulling="2026-01-28 00:49:24.421408801 +0000 UTC m=+25.329721168" observedRunningTime="2026-01-28 00:49:25.371880093 +0000 UTC m=+26.280192468" watchObservedRunningTime="2026-01-28 00:49:25.372280361 +0000 UTC m=+26.280592728"
Jan 28 00:49:25.436166 kubelet[3415]: E0128 00:49:25.436129 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 00:49:25.436166 kubelet[3415]: W0128 00:49:25.436156 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 00:49:25.436166 kubelet[3415]: E0128 00:49:25.436179 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 28 00:49:25.451643 kubelet[3415]: E0128 00:49:25.451587 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:25.451694 kubelet[3415]: E0128 00:49:25.451683 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:25.451740 kubelet[3415]: W0128 00:49:25.451695 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:25.451740 kubelet[3415]: E0128 00:49:25.451709 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:25.451828 kubelet[3415]: E0128 00:49:25.451819 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:25.451828 kubelet[3415]: W0128 00:49:25.451827 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:25.451828 kubelet[3415]: E0128 00:49:25.451837 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:25.451978 kubelet[3415]: E0128 00:49:25.451942 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:25.451978 kubelet[3415]: W0128 00:49:25.451948 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:25.451978 kubelet[3415]: E0128 00:49:25.451957 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:25.452177 kubelet[3415]: E0128 00:49:25.452111 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:25.452177 kubelet[3415]: W0128 00:49:25.452118 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:25.452177 kubelet[3415]: E0128 00:49:25.452128 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:25.452489 kubelet[3415]: E0128 00:49:25.452455 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:25.452489 kubelet[3415]: W0128 00:49:25.452467 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:25.452640 kubelet[3415]: E0128 00:49:25.452581 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:25.452888 kubelet[3415]: E0128 00:49:25.452863 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:25.452888 kubelet[3415]: W0128 00:49:25.452875 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:25.453157 kubelet[3415]: E0128 00:49:25.453134 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:25.453332 kubelet[3415]: E0128 00:49:25.453321 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:25.453447 kubelet[3415]: W0128 00:49:25.453393 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:25.453447 kubelet[3415]: E0128 00:49:25.453425 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:25.453685 kubelet[3415]: E0128 00:49:25.453633 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:25.453685 kubelet[3415]: W0128 00:49:25.453644 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:25.453685 kubelet[3415]: E0128 00:49:25.453666 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:25.454019 kubelet[3415]: E0128 00:49:25.453945 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:25.454019 kubelet[3415]: W0128 00:49:25.453956 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:25.454019 kubelet[3415]: E0128 00:49:25.453973 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:25.454414 kubelet[3415]: E0128 00:49:25.454291 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:25.454414 kubelet[3415]: W0128 00:49:25.454302 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:25.454414 kubelet[3415]: E0128 00:49:25.454322 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:25.454564 kubelet[3415]: E0128 00:49:25.454555 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:25.454618 kubelet[3415]: W0128 00:49:25.454608 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:25.454760 kubelet[3415]: E0128 00:49:25.454663 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:25.454847 kubelet[3415]: E0128 00:49:25.454832 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:25.454847 kubelet[3415]: W0128 00:49:25.454843 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:25.455016 kubelet[3415]: E0128 00:49:25.454851 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:25.455016 kubelet[3415]: E0128 00:49:25.454983 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:25.455016 kubelet[3415]: W0128 00:49:25.454990 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:25.455016 kubelet[3415]: E0128 00:49:25.455001 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:25.455219 kubelet[3415]: E0128 00:49:25.455111 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:25.455219 kubelet[3415]: W0128 00:49:25.455116 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:25.455219 kubelet[3415]: E0128 00:49:25.455125 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:25.456005 kubelet[3415]: E0128 00:49:25.455990 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:25.456463 kubelet[3415]: W0128 00:49:25.456442 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:25.456545 kubelet[3415]: E0128 00:49:25.456532 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:49:25.456770 kubelet[3415]: E0128 00:49:25.456752 3415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:49:25.456770 kubelet[3415]: W0128 00:49:25.456765 3415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:49:25.456831 kubelet[3415]: E0128 00:49:25.456775 3415 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:49:25.646562 containerd[1882]: time="2026-01-28T00:49:25.646399667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:49:25.651367 containerd[1882]: time="2026-01-28T00:49:25.651329044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Jan 28 00:49:25.654862 containerd[1882]: time="2026-01-28T00:49:25.654805752Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:49:25.659580 containerd[1882]: time="2026-01-28T00:49:25.659132134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:49:25.659580 containerd[1882]: time="2026-01-28T00:49:25.659426655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.237856529s" Jan 28 00:49:25.659580 containerd[1882]: time="2026-01-28T00:49:25.659484089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 28 00:49:25.662495 containerd[1882]: time="2026-01-28T00:49:25.662456341Z" level=info msg="CreateContainer within sandbox \"ab76223edb68df54d365bb419236d96e6704a0122731b71f2c56fca7b817c78b\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 28 00:49:25.697147 containerd[1882]: time="2026-01-28T00:49:25.697105592Z" level=info msg="Container 36c826fa3bd71df1999bbde69ab74e3d0b1d8e3abb1c323d5eab89dc7cce9ad3: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:49:25.721822 containerd[1882]: time="2026-01-28T00:49:25.721778838Z" level=info msg="CreateContainer within sandbox \"ab76223edb68df54d365bb419236d96e6704a0122731b71f2c56fca7b817c78b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"36c826fa3bd71df1999bbde69ab74e3d0b1d8e3abb1c323d5eab89dc7cce9ad3\"" Jan 28 00:49:25.723021 containerd[1882]: time="2026-01-28T00:49:25.722974755Z" level=info msg="StartContainer for \"36c826fa3bd71df1999bbde69ab74e3d0b1d8e3abb1c323d5eab89dc7cce9ad3\"" Jan 28 00:49:25.724256 containerd[1882]: time="2026-01-28T00:49:25.724196785Z" level=info msg="connecting to shim 36c826fa3bd71df1999bbde69ab74e3d0b1d8e3abb1c323d5eab89dc7cce9ad3" address="unix:///run/containerd/s/b8796dd3efa17dc358dfadcf065921f403e69f9f05e1b1ba8ebb8f75364e3794" protocol=ttrpc version=3 Jan 28 00:49:25.743051 systemd[1]: Started cri-containerd-36c826fa3bd71df1999bbde69ab74e3d0b1d8e3abb1c323d5eab89dc7cce9ad3.scope - libcontainer container 36c826fa3bd71df1999bbde69ab74e3d0b1d8e3abb1c323d5eab89dc7cce9ad3. Jan 28 00:49:25.805162 containerd[1882]: time="2026-01-28T00:49:25.805116711Z" level=info msg="StartContainer for \"36c826fa3bd71df1999bbde69ab74e3d0b1d8e3abb1c323d5eab89dc7cce9ad3\" returns successfully" Jan 28 00:49:25.811433 systemd[1]: cri-containerd-36c826fa3bd71df1999bbde69ab74e3d0b1d8e3abb1c323d5eab89dc7cce9ad3.scope: Deactivated successfully. 
Jan 28 00:49:25.814857 containerd[1882]: time="2026-01-28T00:49:25.814822244Z" level=info msg="received container exit event container_id:\"36c826fa3bd71df1999bbde69ab74e3d0b1d8e3abb1c323d5eab89dc7cce9ad3\" id:\"36c826fa3bd71df1999bbde69ab74e3d0b1d8e3abb1c323d5eab89dc7cce9ad3\" pid:4087 exited_at:{seconds:1769561365 nanos:814310413}" Jan 28 00:49:25.831266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36c826fa3bd71df1999bbde69ab74e3d0b1d8e3abb1c323d5eab89dc7cce9ad3-rootfs.mount: Deactivated successfully. Jan 28 00:49:26.273560 kubelet[3415]: E0128 00:49:26.273500 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814" Jan 28 00:49:27.367559 containerd[1882]: time="2026-01-28T00:49:27.367520893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 28 00:49:28.274167 kubelet[3415]: E0128 00:49:28.274079 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814" Jan 28 00:49:29.475647 containerd[1882]: time="2026-01-28T00:49:29.475135502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:49:29.478867 containerd[1882]: time="2026-01-28T00:49:29.478835068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 28 00:49:29.484082 containerd[1882]: time="2026-01-28T00:49:29.484054229Z" level=info msg="ImageCreate event 
name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:49:29.488552 containerd[1882]: time="2026-01-28T00:49:29.488502833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:49:29.488971 containerd[1882]: time="2026-01-28T00:49:29.488783969Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.120835319s" Jan 28 00:49:29.488971 containerd[1882]: time="2026-01-28T00:49:29.488811378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 28 00:49:29.492069 containerd[1882]: time="2026-01-28T00:49:29.491470960Z" level=info msg="CreateContainer within sandbox \"ab76223edb68df54d365bb419236d96e6704a0122731b71f2c56fca7b817c78b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 00:49:29.520084 containerd[1882]: time="2026-01-28T00:49:29.520033363Z" level=info msg="Container 577caa9ce8919b5533937d93f65afbce4bd6c4f2b596879fd749219a2c5846db: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:49:29.539918 containerd[1882]: time="2026-01-28T00:49:29.539867524Z" level=info msg="CreateContainer within sandbox \"ab76223edb68df54d365bb419236d96e6704a0122731b71f2c56fca7b817c78b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"577caa9ce8919b5533937d93f65afbce4bd6c4f2b596879fd749219a2c5846db\"" Jan 28 00:49:29.540462 containerd[1882]: time="2026-01-28T00:49:29.540375819Z" 
level=info msg="StartContainer for \"577caa9ce8919b5533937d93f65afbce4bd6c4f2b596879fd749219a2c5846db\"" Jan 28 00:49:29.542659 containerd[1882]: time="2026-01-28T00:49:29.542630285Z" level=info msg="connecting to shim 577caa9ce8919b5533937d93f65afbce4bd6c4f2b596879fd749219a2c5846db" address="unix:///run/containerd/s/b8796dd3efa17dc358dfadcf065921f403e69f9f05e1b1ba8ebb8f75364e3794" protocol=ttrpc version=3 Jan 28 00:49:29.563079 systemd[1]: Started cri-containerd-577caa9ce8919b5533937d93f65afbce4bd6c4f2b596879fd749219a2c5846db.scope - libcontainer container 577caa9ce8919b5533937d93f65afbce4bd6c4f2b596879fd749219a2c5846db. Jan 28 00:49:29.631194 containerd[1882]: time="2026-01-28T00:49:29.631153633Z" level=info msg="StartContainer for \"577caa9ce8919b5533937d93f65afbce4bd6c4f2b596879fd749219a2c5846db\" returns successfully" Jan 28 00:49:30.274468 kubelet[3415]: E0128 00:49:30.274139 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814" Jan 28 00:49:30.768102 containerd[1882]: time="2026-01-28T00:49:30.768063553Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 00:49:30.771328 containerd[1882]: time="2026-01-28T00:49:30.770754672Z" level=info msg="received container exit event container_id:\"577caa9ce8919b5533937d93f65afbce4bd6c4f2b596879fd749219a2c5846db\" id:\"577caa9ce8919b5533937d93f65afbce4bd6c4f2b596879fd749219a2c5846db\" pid:4148 exited_at:{seconds:1769561370 nanos:770593971}" Jan 28 00:49:30.770813 systemd[1]: 
cri-containerd-577caa9ce8919b5533937d93f65afbce4bd6c4f2b596879fd749219a2c5846db.scope: Deactivated successfully. Jan 28 00:49:30.771057 systemd[1]: cri-containerd-577caa9ce8919b5533937d93f65afbce4bd6c4f2b596879fd749219a2c5846db.scope: Consumed 339ms CPU time, 192.3M memory peak, 165.9M written to disk. Jan 28 00:49:30.791193 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-577caa9ce8919b5533937d93f65afbce4bd6c4f2b596879fd749219a2c5846db-rootfs.mount: Deactivated successfully. Jan 28 00:49:30.822249 kubelet[3415]: I0128 00:49:30.822204 3415 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 28 00:49:31.206161 kubelet[3415]: I0128 00:49:30.886361 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e1d36a1c-6e80-479f-89ad-90af3d46bf24-whisker-backend-key-pair\") pod \"whisker-74969c8b6-kpsjn\" (UID: \"e1d36a1c-6e80-479f-89ad-90af3d46bf24\") " pod="calico-system/whisker-74969c8b6-kpsjn" Jan 28 00:49:31.206161 kubelet[3415]: I0128 00:49:30.886408 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlpv8\" (UniqueName: \"kubernetes.io/projected/38f32bfc-20b9-44e6-8109-aef9d5b3412a-kube-api-access-dlpv8\") pod \"coredns-668d6bf9bc-cmh7z\" (UID: \"38f32bfc-20b9-44e6-8109-aef9d5b3412a\") " pod="kube-system/coredns-668d6bf9bc-cmh7z" Jan 28 00:49:31.206161 kubelet[3415]: I0128 00:49:30.886424 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gbdr\" (UniqueName: \"kubernetes.io/projected/e1d36a1c-6e80-479f-89ad-90af3d46bf24-kube-api-access-8gbdr\") pod \"whisker-74969c8b6-kpsjn\" (UID: \"e1d36a1c-6e80-479f-89ad-90af3d46bf24\") " pod="calico-system/whisker-74969c8b6-kpsjn" Jan 28 00:49:31.206161 kubelet[3415]: I0128 00:49:30.886435 3415 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/38f32bfc-20b9-44e6-8109-aef9d5b3412a-config-volume\") pod \"coredns-668d6bf9bc-cmh7z\" (UID: \"38f32bfc-20b9-44e6-8109-aef9d5b3412a\") " pod="kube-system/coredns-668d6bf9bc-cmh7z" Jan 28 00:49:31.206161 kubelet[3415]: I0128 00:49:30.886452 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1d36a1c-6e80-479f-89ad-90af3d46bf24-whisker-ca-bundle\") pod \"whisker-74969c8b6-kpsjn\" (UID: \"e1d36a1c-6e80-479f-89ad-90af3d46bf24\") " pod="calico-system/whisker-74969c8b6-kpsjn" Jan 28 00:49:30.875468 systemd[1]: Created slice kubepods-besteffort-pode1d36a1c_6e80_479f_89ad_90af3d46bf24.slice - libcontainer container kubepods-besteffort-pode1d36a1c_6e80_479f_89ad_90af3d46bf24.slice. Jan 28 00:49:31.206473 kubelet[3415]: I0128 00:49:30.886464 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c202d96b-00b7-4bac-b3b9-1cca06ac3857-config-volume\") pod \"coredns-668d6bf9bc-gdqnw\" (UID: \"c202d96b-00b7-4bac-b3b9-1cca06ac3857\") " pod="kube-system/coredns-668d6bf9bc-gdqnw" Jan 28 00:49:31.206473 kubelet[3415]: I0128 00:49:30.886475 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bbfz\" (UniqueName: \"kubernetes.io/projected/c202d96b-00b7-4bac-b3b9-1cca06ac3857-kube-api-access-6bbfz\") pod \"coredns-668d6bf9bc-gdqnw\" (UID: \"c202d96b-00b7-4bac-b3b9-1cca06ac3857\") " pod="kube-system/coredns-668d6bf9bc-gdqnw" Jan 28 00:49:31.206473 kubelet[3415]: I0128 00:49:30.987284 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/77965b9f-fcdf-4986-8206-fcf9912f3435-calico-apiserver-certs\") pod \"calico-apiserver-6b468596cf-lg47w\" (UID: \"77965b9f-fcdf-4986-8206-fcf9912f3435\") " pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" Jan 28 00:49:31.206473 kubelet[3415]: I0128 00:49:30.987333 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44a3119-d828-410a-8e9a-303c84462c56-config\") pod \"goldmane-666569f655-thw89\" (UID: \"f44a3119-d828-410a-8e9a-303c84462c56\") " pod="calico-system/goldmane-666569f655-thw89" Jan 28 00:49:31.206473 kubelet[3415]: I0128 00:49:30.987345 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8r8t\" (UniqueName: \"kubernetes.io/projected/f44a3119-d828-410a-8e9a-303c84462c56-kube-api-access-t8r8t\") pod \"goldmane-666569f655-thw89\" (UID: \"f44a3119-d828-410a-8e9a-303c84462c56\") " pod="calico-system/goldmane-666569f655-thw89" Jan 28 00:49:30.882781 systemd[1]: Created slice kubepods-burstable-pod38f32bfc_20b9_44e6_8109_aef9d5b3412a.slice - libcontainer container kubepods-burstable-pod38f32bfc_20b9_44e6_8109_aef9d5b3412a.slice. 
Jan 28 00:49:31.206609 kubelet[3415]: I0128 00:49:30.987356 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/098ac4f0-5200-473e-8687-26a347b0e3eb-calico-apiserver-certs\") pod \"calico-apiserver-6b468596cf-28ns6\" (UID: \"098ac4f0-5200-473e-8687-26a347b0e3eb\") " pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" Jan 28 00:49:31.206609 kubelet[3415]: I0128 00:49:30.987376 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlsdt\" (UniqueName: \"kubernetes.io/projected/098ac4f0-5200-473e-8687-26a347b0e3eb-kube-api-access-tlsdt\") pod \"calico-apiserver-6b468596cf-28ns6\" (UID: \"098ac4f0-5200-473e-8687-26a347b0e3eb\") " pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" Jan 28 00:49:31.206609 kubelet[3415]: I0128 00:49:30.987396 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4wx2\" (UniqueName: \"kubernetes.io/projected/542feb1a-1c08-4a08-96ca-01553cfa6389-kube-api-access-p4wx2\") pod \"calico-kube-controllers-646548674d-lzmnt\" (UID: \"542feb1a-1c08-4a08-96ca-01553cfa6389\") " pod="calico-system/calico-kube-controllers-646548674d-lzmnt" Jan 28 00:49:31.206609 kubelet[3415]: I0128 00:49:30.987408 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzxj2\" (UniqueName: \"kubernetes.io/projected/77965b9f-fcdf-4986-8206-fcf9912f3435-kube-api-access-pzxj2\") pod \"calico-apiserver-6b468596cf-lg47w\" (UID: \"77965b9f-fcdf-4986-8206-fcf9912f3435\") " pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" Jan 28 00:49:31.206609 kubelet[3415]: I0128 00:49:30.987420 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f44a3119-d828-410a-8e9a-303c84462c56-goldmane-ca-bundle\") pod \"goldmane-666569f655-thw89\" (UID: \"f44a3119-d828-410a-8e9a-303c84462c56\") " pod="calico-system/goldmane-666569f655-thw89" Jan 28 00:49:30.890987 systemd[1]: Created slice kubepods-burstable-podc202d96b_00b7_4bac_b3b9_1cca06ac3857.slice - libcontainer container kubepods-burstable-podc202d96b_00b7_4bac_b3b9_1cca06ac3857.slice. Jan 28 00:49:31.206734 kubelet[3415]: I0128 00:49:30.987429 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f44a3119-d828-410a-8e9a-303c84462c56-goldmane-key-pair\") pod \"goldmane-666569f655-thw89\" (UID: \"f44a3119-d828-410a-8e9a-303c84462c56\") " pod="calico-system/goldmane-666569f655-thw89" Jan 28 00:49:31.206734 kubelet[3415]: I0128 00:49:30.987450 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/542feb1a-1c08-4a08-96ca-01553cfa6389-tigera-ca-bundle\") pod \"calico-kube-controllers-646548674d-lzmnt\" (UID: \"542feb1a-1c08-4a08-96ca-01553cfa6389\") " pod="calico-system/calico-kube-controllers-646548674d-lzmnt" Jan 28 00:49:30.899458 systemd[1]: Created slice kubepods-besteffort-pod77965b9f_fcdf_4986_8206_fcf9912f3435.slice - libcontainer container kubepods-besteffort-pod77965b9f_fcdf_4986_8206_fcf9912f3435.slice. Jan 28 00:49:30.908618 systemd[1]: Created slice kubepods-besteffort-pod542feb1a_1c08_4a08_96ca_01553cfa6389.slice - libcontainer container kubepods-besteffort-pod542feb1a_1c08_4a08_96ca_01553cfa6389.slice. Jan 28 00:49:30.915185 systemd[1]: Created slice kubepods-besteffort-podf44a3119_d828_410a_8e9a_303c84462c56.slice - libcontainer container kubepods-besteffort-podf44a3119_d828_410a_8e9a_303c84462c56.slice. 
Jan 28 00:49:30.921246 systemd[1]: Created slice kubepods-besteffort-pod098ac4f0_5200_473e_8687_26a347b0e3eb.slice - libcontainer container kubepods-besteffort-pod098ac4f0_5200_473e_8687_26a347b0e3eb.slice. Jan 28 00:49:31.510430 containerd[1882]: time="2026-01-28T00:49:31.510158883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74969c8b6-kpsjn,Uid:e1d36a1c-6e80-479f-89ad-90af3d46bf24,Namespace:calico-system,Attempt:0,}" Jan 28 00:49:31.515510 containerd[1882]: time="2026-01-28T00:49:31.515305555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b468596cf-28ns6,Uid:098ac4f0-5200-473e-8687-26a347b0e3eb,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:49:31.515735 containerd[1882]: time="2026-01-28T00:49:31.515717535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-646548674d-lzmnt,Uid:542feb1a-1c08-4a08-96ca-01553cfa6389,Namespace:calico-system,Attempt:0,}" Jan 28 00:49:31.517169 containerd[1882]: time="2026-01-28T00:49:31.517145601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gdqnw,Uid:c202d96b-00b7-4bac-b3b9-1cca06ac3857,Namespace:kube-system,Attempt:0,}" Jan 28 00:49:31.531810 containerd[1882]: time="2026-01-28T00:49:31.531786313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b468596cf-lg47w,Uid:77965b9f-fcdf-4986-8206-fcf9912f3435,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:49:31.532137 containerd[1882]: time="2026-01-28T00:49:31.531993447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-thw89,Uid:f44a3119-d828-410a-8e9a-303c84462c56,Namespace:calico-system,Attempt:0,}" Jan 28 00:49:31.537645 containerd[1882]: time="2026-01-28T00:49:31.537624589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cmh7z,Uid:38f32bfc-20b9-44e6-8109-aef9d5b3412a,Namespace:kube-system,Attempt:0,}" Jan 28 00:49:31.866998 containerd[1882]: 
time="2026-01-28T00:49:31.866946599Z" level=error msg="Failed to destroy network for sandbox \"cae52b97c099a2a9db28669c3a2e02bd0ee4747d40daa302983247e9cf3b15ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.868540 systemd[1]: run-netns-cni\x2d134ade29\x2d4db8\x2d45f1\x2deb02\x2de64aaea4435c.mount: Deactivated successfully. Jan 28 00:49:31.875010 containerd[1882]: time="2026-01-28T00:49:31.874954883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74969c8b6-kpsjn,Uid:e1d36a1c-6e80-479f-89ad-90af3d46bf24,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cae52b97c099a2a9db28669c3a2e02bd0ee4747d40daa302983247e9cf3b15ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.875311 kubelet[3415]: E0128 00:49:31.875264 3415 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cae52b97c099a2a9db28669c3a2e02bd0ee4747d40daa302983247e9cf3b15ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.875736 kubelet[3415]: E0128 00:49:31.875594 3415 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cae52b97c099a2a9db28669c3a2e02bd0ee4747d40daa302983247e9cf3b15ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74969c8b6-kpsjn" Jan 
28 00:49:31.875736 kubelet[3415]: E0128 00:49:31.875618 3415 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cae52b97c099a2a9db28669c3a2e02bd0ee4747d40daa302983247e9cf3b15ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74969c8b6-kpsjn" Jan 28 00:49:31.875736 kubelet[3415]: E0128 00:49:31.875659 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-74969c8b6-kpsjn_calico-system(e1d36a1c-6e80-479f-89ad-90af3d46bf24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-74969c8b6-kpsjn_calico-system(e1d36a1c-6e80-479f-89ad-90af3d46bf24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cae52b97c099a2a9db28669c3a2e02bd0ee4747d40daa302983247e9cf3b15ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-74969c8b6-kpsjn" podUID="e1d36a1c-6e80-479f-89ad-90af3d46bf24" Jan 28 00:49:31.877986 containerd[1882]: time="2026-01-28T00:49:31.877955980Z" level=error msg="Failed to destroy network for sandbox \"12122c900da8f72305e99953c1ed9af12e82f9f46e05e68d5394a7827e57518e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.880517 systemd[1]: run-netns-cni\x2dbfc03889\x2dce6b\x2d258c\x2d04ef\x2d0cc4328701c4.mount: Deactivated successfully. 
Jan 28 00:49:31.884004 containerd[1882]: time="2026-01-28T00:49:31.883646308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b468596cf-lg47w,Uid:77965b9f-fcdf-4986-8206-fcf9912f3435,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"12122c900da8f72305e99953c1ed9af12e82f9f46e05e68d5394a7827e57518e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.884251 kubelet[3415]: E0128 00:49:31.884205 3415 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12122c900da8f72305e99953c1ed9af12e82f9f46e05e68d5394a7827e57518e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.884397 kubelet[3415]: E0128 00:49:31.884352 3415 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12122c900da8f72305e99953c1ed9af12e82f9f46e05e68d5394a7827e57518e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" Jan 28 00:49:31.884397 kubelet[3415]: E0128 00:49:31.884374 3415 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12122c900da8f72305e99953c1ed9af12e82f9f46e05e68d5394a7827e57518e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" Jan 28 00:49:31.884521 kubelet[3415]: E0128 00:49:31.884501 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b468596cf-lg47w_calico-apiserver(77965b9f-fcdf-4986-8206-fcf9912f3435)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b468596cf-lg47w_calico-apiserver(77965b9f-fcdf-4986-8206-fcf9912f3435)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12122c900da8f72305e99953c1ed9af12e82f9f46e05e68d5394a7827e57518e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" podUID="77965b9f-fcdf-4986-8206-fcf9912f3435" Jan 28 00:49:31.897270 containerd[1882]: time="2026-01-28T00:49:31.897166859Z" level=error msg="Failed to destroy network for sandbox \"d6f924c0b66f6453adf966488b6233743c5b435cdb64a8bf6fb0c404de703152\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.899529 systemd[1]: run-netns-cni\x2dc79c8b75\x2dad0c\x2df8cc\x2dd144\x2d42a04227eb48.mount: Deactivated successfully. 
Jan 28 00:49:31.903432 containerd[1882]: time="2026-01-28T00:49:31.903391522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-646548674d-lzmnt,Uid:542feb1a-1c08-4a08-96ca-01553cfa6389,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6f924c0b66f6453adf966488b6233743c5b435cdb64a8bf6fb0c404de703152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.904084 kubelet[3415]: E0128 00:49:31.903576 3415 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6f924c0b66f6453adf966488b6233743c5b435cdb64a8bf6fb0c404de703152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.904084 kubelet[3415]: E0128 00:49:31.903626 3415 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6f924c0b66f6453adf966488b6233743c5b435cdb64a8bf6fb0c404de703152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-646548674d-lzmnt" Jan 28 00:49:31.904084 kubelet[3415]: E0128 00:49:31.903643 3415 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6f924c0b66f6453adf966488b6233743c5b435cdb64a8bf6fb0c404de703152\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-646548674d-lzmnt" Jan 28 00:49:31.904288 kubelet[3415]: E0128 00:49:31.903676 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-646548674d-lzmnt_calico-system(542feb1a-1c08-4a08-96ca-01553cfa6389)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-646548674d-lzmnt_calico-system(542feb1a-1c08-4a08-96ca-01553cfa6389)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6f924c0b66f6453adf966488b6233743c5b435cdb64a8bf6fb0c404de703152\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-646548674d-lzmnt" podUID="542feb1a-1c08-4a08-96ca-01553cfa6389" Jan 28 00:49:31.908659 containerd[1882]: time="2026-01-28T00:49:31.908633381Z" level=error msg="Failed to destroy network for sandbox \"be4f049edde660a754eb1da4f99ca73f3646c2c406f7d64d610229df1c04fd7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.910213 systemd[1]: run-netns-cni\x2dc87a8a38\x2d8149\x2d2561\x2da8e9\x2d1bc29d7e5bbd.mount: Deactivated successfully. 
Jan 28 00:49:31.914245 containerd[1882]: time="2026-01-28T00:49:31.914206217Z" level=error msg="Failed to destroy network for sandbox \"712826e0210e14c2b52599bb20e0df1a5e0c53044fe35effdd632c8c66f5f68b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.916781 containerd[1882]: time="2026-01-28T00:49:31.916706851Z" level=error msg="Failed to destroy network for sandbox \"8cf2ddce8c3723272eba3e73a1fc2f28316abc0c8e7fd3dc673f9499cc64edc1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.918476 containerd[1882]: time="2026-01-28T00:49:31.918448006Z" level=error msg="Failed to destroy network for sandbox \"dc4fb0832ca990816d546e01806b9e1c44e1934871c5214022ee192b339ca6cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.920221 containerd[1882]: time="2026-01-28T00:49:31.920077703Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cmh7z,Uid:38f32bfc-20b9-44e6-8109-aef9d5b3412a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be4f049edde660a754eb1da4f99ca73f3646c2c406f7d64d610229df1c04fd7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.921111 kubelet[3415]: E0128 00:49:31.920850 3415 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"be4f049edde660a754eb1da4f99ca73f3646c2c406f7d64d610229df1c04fd7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.921111 kubelet[3415]: E0128 00:49:31.920922 3415 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be4f049edde660a754eb1da4f99ca73f3646c2c406f7d64d610229df1c04fd7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cmh7z" Jan 28 00:49:31.921111 kubelet[3415]: E0128 00:49:31.920937 3415 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be4f049edde660a754eb1da4f99ca73f3646c2c406f7d64d610229df1c04fd7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cmh7z" Jan 28 00:49:31.921220 kubelet[3415]: E0128 00:49:31.920991 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cmh7z_kube-system(38f32bfc-20b9-44e6-8109-aef9d5b3412a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cmh7z_kube-system(38f32bfc-20b9-44e6-8109-aef9d5b3412a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be4f049edde660a754eb1da4f99ca73f3646c2c406f7d64d610229df1c04fd7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cmh7z" 
podUID="38f32bfc-20b9-44e6-8109-aef9d5b3412a" Jan 28 00:49:31.923512 containerd[1882]: time="2026-01-28T00:49:31.923477027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gdqnw,Uid:c202d96b-00b7-4bac-b3b9-1cca06ac3857,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"712826e0210e14c2b52599bb20e0df1a5e0c53044fe35effdd632c8c66f5f68b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.923958 kubelet[3415]: E0128 00:49:31.923830 3415 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"712826e0210e14c2b52599bb20e0df1a5e0c53044fe35effdd632c8c66f5f68b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.923958 kubelet[3415]: E0128 00:49:31.923876 3415 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"712826e0210e14c2b52599bb20e0df1a5e0c53044fe35effdd632c8c66f5f68b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gdqnw" Jan 28 00:49:31.923958 kubelet[3415]: E0128 00:49:31.923889 3415 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"712826e0210e14c2b52599bb20e0df1a5e0c53044fe35effdd632c8c66f5f68b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gdqnw" Jan 28 00:49:31.924167 kubelet[3415]: E0128 00:49:31.924144 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-gdqnw_kube-system(c202d96b-00b7-4bac-b3b9-1cca06ac3857)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-gdqnw_kube-system(c202d96b-00b7-4bac-b3b9-1cca06ac3857)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"712826e0210e14c2b52599bb20e0df1a5e0c53044fe35effdd632c8c66f5f68b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-gdqnw" podUID="c202d96b-00b7-4bac-b3b9-1cca06ac3857" Jan 28 00:49:31.928008 containerd[1882]: time="2026-01-28T00:49:31.927978344Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-thw89,Uid:f44a3119-d828-410a-8e9a-303c84462c56,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cf2ddce8c3723272eba3e73a1fc2f28316abc0c8e7fd3dc673f9499cc64edc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.928388 kubelet[3415]: E0128 00:49:31.928292 3415 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cf2ddce8c3723272eba3e73a1fc2f28316abc0c8e7fd3dc673f9499cc64edc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.928388 kubelet[3415]: E0128 00:49:31.928320 3415 kuberuntime_sandbox.go:72] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cf2ddce8c3723272eba3e73a1fc2f28316abc0c8e7fd3dc673f9499cc64edc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-thw89" Jan 28 00:49:31.928388 kubelet[3415]: E0128 00:49:31.928336 3415 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cf2ddce8c3723272eba3e73a1fc2f28316abc0c8e7fd3dc673f9499cc64edc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-thw89" Jan 28 00:49:31.928477 kubelet[3415]: E0128 00:49:31.928358 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-thw89_calico-system(f44a3119-d828-410a-8e9a-303c84462c56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-thw89_calico-system(f44a3119-d828-410a-8e9a-303c84462c56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8cf2ddce8c3723272eba3e73a1fc2f28316abc0c8e7fd3dc673f9499cc64edc1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-thw89" podUID="f44a3119-d828-410a-8e9a-303c84462c56" Jan 28 00:49:31.932015 containerd[1882]: time="2026-01-28T00:49:31.931957189Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b468596cf-28ns6,Uid:098ac4f0-5200-473e-8687-26a347b0e3eb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"dc4fb0832ca990816d546e01806b9e1c44e1934871c5214022ee192b339ca6cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.932279 kubelet[3415]: E0128 00:49:31.932162 3415 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc4fb0832ca990816d546e01806b9e1c44e1934871c5214022ee192b339ca6cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:31.932279 kubelet[3415]: E0128 00:49:31.932193 3415 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc4fb0832ca990816d546e01806b9e1c44e1934871c5214022ee192b339ca6cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" Jan 28 00:49:31.932279 kubelet[3415]: E0128 00:49:31.932206 3415 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc4fb0832ca990816d546e01806b9e1c44e1934871c5214022ee192b339ca6cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" Jan 28 00:49:31.932362 kubelet[3415]: E0128 00:49:31.932226 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b468596cf-28ns6_calico-apiserver(098ac4f0-5200-473e-8687-26a347b0e3eb)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b468596cf-28ns6_calico-apiserver(098ac4f0-5200-473e-8687-26a347b0e3eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc4fb0832ca990816d546e01806b9e1c44e1934871c5214022ee192b339ca6cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" podUID="098ac4f0-5200-473e-8687-26a347b0e3eb" Jan 28 00:49:32.278804 systemd[1]: Created slice kubepods-besteffort-podbcaf25ee_c8ae_4368_867f_6ea868477814.slice - libcontainer container kubepods-besteffort-podbcaf25ee_c8ae_4368_867f_6ea868477814.slice. Jan 28 00:49:32.282253 containerd[1882]: time="2026-01-28T00:49:32.282214513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mzfbp,Uid:bcaf25ee-c8ae-4368-867f-6ea868477814,Namespace:calico-system,Attempt:0,}" Jan 28 00:49:32.324569 containerd[1882]: time="2026-01-28T00:49:32.324512769Z" level=error msg="Failed to destroy network for sandbox \"669a1576a75bf8002954d38d6d0465bfa692df4e8b3e68a48b28a4a7a4dcfc5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:32.329595 containerd[1882]: time="2026-01-28T00:49:32.329543997Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mzfbp,Uid:bcaf25ee-c8ae-4368-867f-6ea868477814,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"669a1576a75bf8002954d38d6d0465bfa692df4e8b3e68a48b28a4a7a4dcfc5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jan 28 00:49:32.330279 kubelet[3415]: E0128 00:49:32.329760 3415 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"669a1576a75bf8002954d38d6d0465bfa692df4e8b3e68a48b28a4a7a4dcfc5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:49:32.330279 kubelet[3415]: E0128 00:49:32.329820 3415 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"669a1576a75bf8002954d38d6d0465bfa692df4e8b3e68a48b28a4a7a4dcfc5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mzfbp" Jan 28 00:49:32.330279 kubelet[3415]: E0128 00:49:32.329836 3415 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"669a1576a75bf8002954d38d6d0465bfa692df4e8b3e68a48b28a4a7a4dcfc5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mzfbp" Jan 28 00:49:32.330388 kubelet[3415]: E0128 00:49:32.329876 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mzfbp_calico-system(bcaf25ee-c8ae-4368-867f-6ea868477814)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mzfbp_calico-system(bcaf25ee-c8ae-4368-867f-6ea868477814)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"669a1576a75bf8002954d38d6d0465bfa692df4e8b3e68a48b28a4a7a4dcfc5a\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814" Jan 28 00:49:32.384523 containerd[1882]: time="2026-01-28T00:49:32.384385775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 00:49:32.791345 systemd[1]: run-netns-cni\x2d65c825b0\x2d2dc8\x2dd80f\x2d46d2\x2d1544151f3490.mount: Deactivated successfully. Jan 28 00:49:32.791421 systemd[1]: run-netns-cni\x2d9a52e279\x2d6b3a\x2dd343\x2de8de\x2d4422cde05b22.mount: Deactivated successfully. Jan 28 00:49:32.791453 systemd[1]: run-netns-cni\x2d88e54cf5\x2dabb1\x2d26c5\x2da1f7\x2d3e71b2baac2c.mount: Deactivated successfully. Jan 28 00:49:36.018033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1609612435.mount: Deactivated successfully. Jan 28 00:49:36.411773 containerd[1882]: time="2026-01-28T00:49:36.411242903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:49:36.414593 containerd[1882]: time="2026-01-28T00:49:36.414563052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 28 00:49:36.418295 containerd[1882]: time="2026-01-28T00:49:36.418248948Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:49:36.422750 containerd[1882]: time="2026-01-28T00:49:36.422389859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:49:36.422750 containerd[1882]: time="2026-01-28T00:49:36.422665995Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with 
image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.038222611s" Jan 28 00:49:36.422750 containerd[1882]: time="2026-01-28T00:49:36.422684860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 28 00:49:36.453438 containerd[1882]: time="2026-01-28T00:49:36.453153397Z" level=info msg="CreateContainer within sandbox \"ab76223edb68df54d365bb419236d96e6704a0122731b71f2c56fca7b817c78b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 00:49:36.490220 containerd[1882]: time="2026-01-28T00:49:36.490185400Z" level=info msg="Container 268ed2ee68487f2eba851d1068b144038f008cea33b554c9da6e1784173f7b92: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:49:36.492602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount118355495.mount: Deactivated successfully. 
Jan 28 00:49:36.517688 containerd[1882]: time="2026-01-28T00:49:36.517647454Z" level=info msg="CreateContainer within sandbox \"ab76223edb68df54d365bb419236d96e6704a0122731b71f2c56fca7b817c78b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"268ed2ee68487f2eba851d1068b144038f008cea33b554c9da6e1784173f7b92\"" Jan 28 00:49:36.518949 containerd[1882]: time="2026-01-28T00:49:36.518386228Z" level=info msg="StartContainer for \"268ed2ee68487f2eba851d1068b144038f008cea33b554c9da6e1784173f7b92\"" Jan 28 00:49:36.519613 containerd[1882]: time="2026-01-28T00:49:36.519589233Z" level=info msg="connecting to shim 268ed2ee68487f2eba851d1068b144038f008cea33b554c9da6e1784173f7b92" address="unix:///run/containerd/s/b8796dd3efa17dc358dfadcf065921f403e69f9f05e1b1ba8ebb8f75364e3794" protocol=ttrpc version=3 Jan 28 00:49:36.535057 systemd[1]: Started cri-containerd-268ed2ee68487f2eba851d1068b144038f008cea33b554c9da6e1784173f7b92.scope - libcontainer container 268ed2ee68487f2eba851d1068b144038f008cea33b554c9da6e1784173f7b92. Jan 28 00:49:36.615280 containerd[1882]: time="2026-01-28T00:49:36.615244240Z" level=info msg="StartContainer for \"268ed2ee68487f2eba851d1068b144038f008cea33b554c9da6e1784173f7b92\" returns successfully" Jan 28 00:49:36.850572 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 00:49:36.850683 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 28 00:49:37.020214 kubelet[3415]: I0128 00:49:37.020177 3415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gbdr\" (UniqueName: \"kubernetes.io/projected/e1d36a1c-6e80-479f-89ad-90af3d46bf24-kube-api-access-8gbdr\") pod \"e1d36a1c-6e80-479f-89ad-90af3d46bf24\" (UID: \"e1d36a1c-6e80-479f-89ad-90af3d46bf24\") " Jan 28 00:49:37.020214 kubelet[3415]: I0128 00:49:37.020216 3415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1d36a1c-6e80-479f-89ad-90af3d46bf24-whisker-ca-bundle\") pod \"e1d36a1c-6e80-479f-89ad-90af3d46bf24\" (UID: \"e1d36a1c-6e80-479f-89ad-90af3d46bf24\") " Jan 28 00:49:37.020549 kubelet[3415]: I0128 00:49:37.020233 3415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e1d36a1c-6e80-479f-89ad-90af3d46bf24-whisker-backend-key-pair\") pod \"e1d36a1c-6e80-479f-89ad-90af3d46bf24\" (UID: \"e1d36a1c-6e80-479f-89ad-90af3d46bf24\") " Jan 28 00:49:37.025294 systemd[1]: var-lib-kubelet-pods-e1d36a1c\x2d6e80\x2d479f\x2d89ad\x2d90af3d46bf24-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8gbdr.mount: Deactivated successfully. Jan 28 00:49:37.025385 systemd[1]: var-lib-kubelet-pods-e1d36a1c\x2d6e80\x2d479f\x2d89ad\x2d90af3d46bf24-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 28 00:49:37.026398 kubelet[3415]: I0128 00:49:37.025050 3415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d36a1c-6e80-479f-89ad-90af3d46bf24-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e1d36a1c-6e80-479f-89ad-90af3d46bf24" (UID: "e1d36a1c-6e80-479f-89ad-90af3d46bf24"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 00:49:37.029839 kubelet[3415]: I0128 00:49:37.029805 3415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d36a1c-6e80-479f-89ad-90af3d46bf24-kube-api-access-8gbdr" (OuterVolumeSpecName: "kube-api-access-8gbdr") pod "e1d36a1c-6e80-479f-89ad-90af3d46bf24" (UID: "e1d36a1c-6e80-479f-89ad-90af3d46bf24"). InnerVolumeSpecName "kube-api-access-8gbdr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 00:49:37.029927 kubelet[3415]: I0128 00:49:37.029885 3415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d36a1c-6e80-479f-89ad-90af3d46bf24-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e1d36a1c-6e80-479f-89ad-90af3d46bf24" (UID: "e1d36a1c-6e80-479f-89ad-90af3d46bf24"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 00:49:37.121473 kubelet[3415]: I0128 00:49:37.121367 3415 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8gbdr\" (UniqueName: \"kubernetes.io/projected/e1d36a1c-6e80-479f-89ad-90af3d46bf24-kube-api-access-8gbdr\") on node \"ci-4459.2.3-n-ee3b3e4916\" DevicePath \"\"" Jan 28 00:49:37.121473 kubelet[3415]: I0128 00:49:37.121401 3415 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e1d36a1c-6e80-479f-89ad-90af3d46bf24-whisker-backend-key-pair\") on node \"ci-4459.2.3-n-ee3b3e4916\" DevicePath \"\"" Jan 28 00:49:37.121473 kubelet[3415]: I0128 00:49:37.121411 3415 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1d36a1c-6e80-479f-89ad-90af3d46bf24-whisker-ca-bundle\") on node \"ci-4459.2.3-n-ee3b3e4916\" DevicePath \"\"" Jan 28 00:49:37.281696 systemd[1]: Removed slice kubepods-besteffort-pode1d36a1c_6e80_479f_89ad_90af3d46bf24.slice - 
libcontainer container kubepods-besteffort-pode1d36a1c_6e80_479f_89ad_90af3d46bf24.slice. Jan 28 00:49:37.424085 kubelet[3415]: I0128 00:49:37.423226 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-t4vxq" podStartSLOduration=2.219997074 podStartE2EDuration="16.423211599s" podCreationTimestamp="2026-01-28 00:49:21 +0000 UTC" firstStartedPulling="2026-01-28 00:49:22.220055777 +0000 UTC m=+23.128368152" lastFinishedPulling="2026-01-28 00:49:36.42327031 +0000 UTC m=+37.331582677" observedRunningTime="2026-01-28 00:49:37.422215729 +0000 UTC m=+38.330528104" watchObservedRunningTime="2026-01-28 00:49:37.423211599 +0000 UTC m=+38.331523982" Jan 28 00:49:37.511785 systemd[1]: Created slice kubepods-besteffort-pod2b821f34_27f6_484c_9dd8_726df28b75d8.slice - libcontainer container kubepods-besteffort-pod2b821f34_27f6_484c_9dd8_726df28b75d8.slice. Jan 28 00:49:37.525310 kubelet[3415]: I0128 00:49:37.525259 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b821f34-27f6-484c-9dd8-726df28b75d8-whisker-ca-bundle\") pod \"whisker-5fcf746948-lbd7r\" (UID: \"2b821f34-27f6-484c-9dd8-726df28b75d8\") " pod="calico-system/whisker-5fcf746948-lbd7r" Jan 28 00:49:37.525310 kubelet[3415]: I0128 00:49:37.525294 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th6hl\" (UniqueName: \"kubernetes.io/projected/2b821f34-27f6-484c-9dd8-726df28b75d8-kube-api-access-th6hl\") pod \"whisker-5fcf746948-lbd7r\" (UID: \"2b821f34-27f6-484c-9dd8-726df28b75d8\") " pod="calico-system/whisker-5fcf746948-lbd7r" Jan 28 00:49:37.525310 kubelet[3415]: I0128 00:49:37.525311 3415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/2b821f34-27f6-484c-9dd8-726df28b75d8-whisker-backend-key-pair\") pod \"whisker-5fcf746948-lbd7r\" (UID: \"2b821f34-27f6-484c-9dd8-726df28b75d8\") " pod="calico-system/whisker-5fcf746948-lbd7r" Jan 28 00:49:37.816919 containerd[1882]: time="2026-01-28T00:49:37.816571723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fcf746948-lbd7r,Uid:2b821f34-27f6-484c-9dd8-726df28b75d8,Namespace:calico-system,Attempt:0,}" Jan 28 00:49:37.970632 systemd-networkd[1473]: cali832b755c254: Link UP Jan 28 00:49:37.971340 systemd-networkd[1473]: cali832b755c254: Gained carrier Jan 28 00:49:37.993974 containerd[1882]: 2026-01-28 00:49:37.844 [INFO][4492] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 00:49:37.993974 containerd[1882]: 2026-01-28 00:49:37.864 [INFO][4492] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--n--ee3b3e4916-k8s-whisker--5fcf746948--lbd7r-eth0 whisker-5fcf746948- calico-system 2b821f34-27f6-484c-9dd8-726df28b75d8 863 0 2026-01-28 00:49:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5fcf746948 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.3-n-ee3b3e4916 whisker-5fcf746948-lbd7r eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali832b755c254 [] [] }} ContainerID="e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd" Namespace="calico-system" Pod="whisker-5fcf746948-lbd7r" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-whisker--5fcf746948--lbd7r-" Jan 28 00:49:37.993974 containerd[1882]: 2026-01-28 00:49:37.864 [INFO][4492] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd" Namespace="calico-system" Pod="whisker-5fcf746948-lbd7r" 
WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-whisker--5fcf746948--lbd7r-eth0" Jan 28 00:49:37.993974 containerd[1882]: 2026-01-28 00:49:37.883 [INFO][4503] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd" HandleID="k8s-pod-network.e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-whisker--5fcf746948--lbd7r-eth0" Jan 28 00:49:37.994198 containerd[1882]: 2026-01-28 00:49:37.883 [INFO][4503] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd" HandleID="k8s-pod-network.e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-whisker--5fcf746948--lbd7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b190), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.3-n-ee3b3e4916", "pod":"whisker-5fcf746948-lbd7r", "timestamp":"2026-01-28 00:49:37.883417763 +0000 UTC"}, Hostname:"ci-4459.2.3-n-ee3b3e4916", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:49:37.994198 containerd[1882]: 2026-01-28 00:49:37.883 [INFO][4503] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:49:37.994198 containerd[1882]: 2026-01-28 00:49:37.883 [INFO][4503] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:49:37.994198 containerd[1882]: 2026-01-28 00:49:37.883 [INFO][4503] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-n-ee3b3e4916' Jan 28 00:49:37.994198 containerd[1882]: 2026-01-28 00:49:37.889 [INFO][4503] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:37.994198 containerd[1882]: 2026-01-28 00:49:37.893 [INFO][4503] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:37.994198 containerd[1882]: 2026-01-28 00:49:37.896 [INFO][4503] ipam/ipam.go 511: Trying affinity for 192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:37.994198 containerd[1882]: 2026-01-28 00:49:37.898 [INFO][4503] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:37.994198 containerd[1882]: 2026-01-28 00:49:37.899 [INFO][4503] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:37.994330 containerd[1882]: 2026-01-28 00:49:37.900 [INFO][4503] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:37.994330 containerd[1882]: 2026-01-28 00:49:37.901 [INFO][4503] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd Jan 28 00:49:37.994330 containerd[1882]: 2026-01-28 00:49:37.906 [INFO][4503] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:37.994330 containerd[1882]: 2026-01-28 00:49:37.916 [INFO][4503] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.83.65/26] block=192.168.83.64/26 handle="k8s-pod-network.e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:37.994330 containerd[1882]: 2026-01-28 00:49:37.916 [INFO][4503] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.65/26] handle="k8s-pod-network.e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:37.994330 containerd[1882]: 2026-01-28 00:49:37.916 [INFO][4503] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:49:37.994330 containerd[1882]: 2026-01-28 00:49:37.916 [INFO][4503] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.83.65/26] IPv6=[] ContainerID="e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd" HandleID="k8s-pod-network.e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-whisker--5fcf746948--lbd7r-eth0" Jan 28 00:49:37.994424 containerd[1882]: 2026-01-28 00:49:37.918 [INFO][4492] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd" Namespace="calico-system" Pod="whisker-5fcf746948-lbd7r" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-whisker--5fcf746948--lbd7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--ee3b3e4916-k8s-whisker--5fcf746948--lbd7r-eth0", GenerateName:"whisker-5fcf746948-", Namespace:"calico-system", SelfLink:"", UID:"2b821f34-27f6-484c-9dd8-726df28b75d8", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5fcf746948", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-ee3b3e4916", ContainerID:"", Pod:"whisker-5fcf746948-lbd7r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.83.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali832b755c254", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:49:37.994424 containerd[1882]: 2026-01-28 00:49:37.918 [INFO][4492] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.65/32] ContainerID="e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd" Namespace="calico-system" Pod="whisker-5fcf746948-lbd7r" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-whisker--5fcf746948--lbd7r-eth0" Jan 28 00:49:37.994469 containerd[1882]: 2026-01-28 00:49:37.918 [INFO][4492] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali832b755c254 ContainerID="e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd" Namespace="calico-system" Pod="whisker-5fcf746948-lbd7r" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-whisker--5fcf746948--lbd7r-eth0" Jan 28 00:49:37.994469 containerd[1882]: 2026-01-28 00:49:37.971 [INFO][4492] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd" Namespace="calico-system" Pod="whisker-5fcf746948-lbd7r" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-whisker--5fcf746948--lbd7r-eth0" Jan 28 00:49:37.994498 containerd[1882]: 2026-01-28 00:49:37.972 [INFO][4492] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd" Namespace="calico-system" Pod="whisker-5fcf746948-lbd7r" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-whisker--5fcf746948--lbd7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--ee3b3e4916-k8s-whisker--5fcf746948--lbd7r-eth0", GenerateName:"whisker-5fcf746948-", Namespace:"calico-system", SelfLink:"", UID:"2b821f34-27f6-484c-9dd8-726df28b75d8", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5fcf746948", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-ee3b3e4916", ContainerID:"e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd", Pod:"whisker-5fcf746948-lbd7r", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.83.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali832b755c254", MAC:"36:2c:44:dc:96:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:49:37.994530 containerd[1882]: 2026-01-28 00:49:37.989 [INFO][4492] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd" 
Namespace="calico-system" Pod="whisker-5fcf746948-lbd7r" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-whisker--5fcf746948--lbd7r-eth0" Jan 28 00:49:38.068594 containerd[1882]: time="2026-01-28T00:49:38.067664545Z" level=info msg="connecting to shim e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd" address="unix:///run/containerd/s/0c5746aed3675439fac9d528b1611bc440355b2d169cfc2d2019a762a811cb90" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:49:38.095197 systemd[1]: Started cri-containerd-e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd.scope - libcontainer container e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd. Jan 28 00:49:38.152747 containerd[1882]: time="2026-01-28T00:49:38.152696172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fcf746948-lbd7r,Uid:2b821f34-27f6-484c-9dd8-726df28b75d8,Namespace:calico-system,Attempt:0,} returns sandbox id \"e26230d9f3ad69ed171c59276d1be0a4a203e3d7bd72bd4dfe56873a890277cd\"" Jan 28 00:49:38.155049 containerd[1882]: time="2026-01-28T00:49:38.154564373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:49:38.555937 containerd[1882]: time="2026-01-28T00:49:38.555791041Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:49:38.853033 containerd[1882]: time="2026-01-28T00:49:38.852524280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 00:49:38.853033 containerd[1882]: time="2026-01-28T00:49:38.852621139Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:49:38.853355 kubelet[3415]: E0128 00:49:38.852816 3415 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:49:38.853355 kubelet[3415]: E0128 00:49:38.852867 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:49:38.856053 kubelet[3415]: E0128 00:49:38.856003 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:79d9755e231c4919900bab3892802a38,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-th6hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&S
eccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcf746948-lbd7r_calico-system(2b821f34-27f6-484c-9dd8-726df28b75d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 00:49:38.858456 containerd[1882]: time="2026-01-28T00:49:38.858334329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 00:49:38.928091 systemd-networkd[1473]: vxlan.calico: Link UP Jan 28 00:49:38.928098 systemd-networkd[1473]: vxlan.calico: Gained carrier Jan 28 00:49:39.226214 containerd[1882]: time="2026-01-28T00:49:39.225990620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:49:39.259549 containerd[1882]: time="2026-01-28T00:49:39.259467010Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:49:39.260950 containerd[1882]: time="2026-01-28T00:49:39.259699561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 00:49:39.261044 kubelet[3415]: E0128 00:49:39.259874 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:49:39.261044 kubelet[3415]: E0128 00:49:39.259931 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:49:39.261108 kubelet[3415]: E0128 00:49:39.260020 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-th6hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Ca
pabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcf746948-lbd7r_calico-system(2b821f34-27f6-484c-9dd8-726df28b75d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 00:49:39.261404 kubelet[3415]: E0128 00:49:39.261356 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf746948-lbd7r" podUID="2b821f34-27f6-484c-9dd8-726df28b75d8" Jan 28 00:49:39.276523 kubelet[3415]: I0128 00:49:39.276358 3415 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d36a1c-6e80-479f-89ad-90af3d46bf24" 
path="/var/lib/kubelet/pods/e1d36a1c-6e80-479f-89ad-90af3d46bf24/volumes" Jan 28 00:49:39.420831 kubelet[3415]: E0128 00:49:39.420759 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf746948-lbd7r" podUID="2b821f34-27f6-484c-9dd8-726df28b75d8" Jan 28 00:49:40.040127 systemd-networkd[1473]: cali832b755c254: Gained IPv6LL Jan 28 00:49:40.168102 systemd-networkd[1473]: vxlan.calico: Gained IPv6LL Jan 28 00:49:44.274015 containerd[1882]: time="2026-01-28T00:49:44.273975920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cmh7z,Uid:38f32bfc-20b9-44e6-8109-aef9d5b3412a,Namespace:kube-system,Attempt:0,}" Jan 28 00:49:44.374350 systemd-networkd[1473]: cali2b9608e3ab3: Link UP Jan 28 00:49:44.374514 systemd-networkd[1473]: cali2b9608e3ab3: Gained carrier Jan 28 00:49:44.393511 containerd[1882]: 2026-01-28 00:49:44.315 [INFO][4834] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--cmh7z-eth0 coredns-668d6bf9bc- kube-system 38f32bfc-20b9-44e6-8109-aef9d5b3412a 802 0 2026-01-28 
00:49:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.3-n-ee3b3e4916 coredns-668d6bf9bc-cmh7z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2b9608e3ab3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02" Namespace="kube-system" Pod="coredns-668d6bf9bc-cmh7z" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--cmh7z-" Jan 28 00:49:44.393511 containerd[1882]: 2026-01-28 00:49:44.315 [INFO][4834] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02" Namespace="kube-system" Pod="coredns-668d6bf9bc-cmh7z" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--cmh7z-eth0" Jan 28 00:49:44.393511 containerd[1882]: 2026-01-28 00:49:44.334 [INFO][4850] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02" HandleID="k8s-pod-network.150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--cmh7z-eth0" Jan 28 00:49:44.394171 containerd[1882]: 2026-01-28 00:49:44.335 [INFO][4850] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02" HandleID="k8s-pod-network.150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--cmh7z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.3-n-ee3b3e4916", "pod":"coredns-668d6bf9bc-cmh7z", "timestamp":"2026-01-28 00:49:44.334991941 +0000 UTC"}, 
Hostname:"ci-4459.2.3-n-ee3b3e4916", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:49:44.394171 containerd[1882]: 2026-01-28 00:49:44.335 [INFO][4850] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:49:44.394171 containerd[1882]: 2026-01-28 00:49:44.335 [INFO][4850] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:49:44.394171 containerd[1882]: 2026-01-28 00:49:44.335 [INFO][4850] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-n-ee3b3e4916' Jan 28 00:49:44.394171 containerd[1882]: 2026-01-28 00:49:44.340 [INFO][4850] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:44.394171 containerd[1882]: 2026-01-28 00:49:44.344 [INFO][4850] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:44.394171 containerd[1882]: 2026-01-28 00:49:44.348 [INFO][4850] ipam/ipam.go 511: Trying affinity for 192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:44.394171 containerd[1882]: 2026-01-28 00:49:44.349 [INFO][4850] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:44.394171 containerd[1882]: 2026-01-28 00:49:44.352 [INFO][4850] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:44.394328 containerd[1882]: 2026-01-28 00:49:44.352 [INFO][4850] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:44.394328 containerd[1882]: 2026-01-28 00:49:44.353 
[INFO][4850] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02 Jan 28 00:49:44.394328 containerd[1882]: 2026-01-28 00:49:44.358 [INFO][4850] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:44.394328 containerd[1882]: 2026-01-28 00:49:44.367 [INFO][4850] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.83.66/26] block=192.168.83.64/26 handle="k8s-pod-network.150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:44.394328 containerd[1882]: 2026-01-28 00:49:44.368 [INFO][4850] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.66/26] handle="k8s-pod-network.150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:44.394328 containerd[1882]: 2026-01-28 00:49:44.368 [INFO][4850] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 00:49:44.394328 containerd[1882]: 2026-01-28 00:49:44.368 [INFO][4850] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.83.66/26] IPv6=[] ContainerID="150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02" HandleID="k8s-pod-network.150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--cmh7z-eth0" Jan 28 00:49:44.394425 containerd[1882]: 2026-01-28 00:49:44.370 [INFO][4834] cni-plugin/k8s.go 418: Populated endpoint ContainerID="150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02" Namespace="kube-system" Pod="coredns-668d6bf9bc-cmh7z" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--cmh7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--cmh7z-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"38f32bfc-20b9-44e6-8109-aef9d5b3412a", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 49, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-ee3b3e4916", ContainerID:"", Pod:"coredns-668d6bf9bc-cmh7z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali2b9608e3ab3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:49:44.394425 containerd[1882]: 2026-01-28 00:49:44.371 [INFO][4834] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.66/32] ContainerID="150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02" Namespace="kube-system" Pod="coredns-668d6bf9bc-cmh7z" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--cmh7z-eth0" Jan 28 00:49:44.394425 containerd[1882]: 2026-01-28 00:49:44.371 [INFO][4834] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b9608e3ab3 ContainerID="150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02" Namespace="kube-system" Pod="coredns-668d6bf9bc-cmh7z" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--cmh7z-eth0" Jan 28 00:49:44.394425 containerd[1882]: 2026-01-28 00:49:44.375 [INFO][4834] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02" Namespace="kube-system" Pod="coredns-668d6bf9bc-cmh7z" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--cmh7z-eth0" Jan 28 00:49:44.394425 containerd[1882]: 2026-01-28 00:49:44.376 [INFO][4834] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02" Namespace="kube-system" Pod="coredns-668d6bf9bc-cmh7z" 
WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--cmh7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--cmh7z-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"38f32bfc-20b9-44e6-8109-aef9d5b3412a", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 49, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-ee3b3e4916", ContainerID:"150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02", Pod:"coredns-668d6bf9bc-cmh7z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b9608e3ab3", MAC:"a6:39:34:57:f6:19", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:49:44.394425 containerd[1882]: 
2026-01-28 00:49:44.391 [INFO][4834] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02" Namespace="kube-system" Pod="coredns-668d6bf9bc-cmh7z" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--cmh7z-eth0" Jan 28 00:49:44.445002 containerd[1882]: time="2026-01-28T00:49:44.444926525Z" level=info msg="connecting to shim 150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02" address="unix:///run/containerd/s/854bae66bbae3b8bc913381222a855bdfe6276d834e8778754eed78f14be01d6" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:49:44.467065 systemd[1]: Started cri-containerd-150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02.scope - libcontainer container 150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02. Jan 28 00:49:44.498522 containerd[1882]: time="2026-01-28T00:49:44.498487435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cmh7z,Uid:38f32bfc-20b9-44e6-8109-aef9d5b3412a,Namespace:kube-system,Attempt:0,} returns sandbox id \"150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02\"" Jan 28 00:49:44.501870 containerd[1882]: time="2026-01-28T00:49:44.501756469Z" level=info msg="CreateContainer within sandbox \"150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 00:49:44.541268 containerd[1882]: time="2026-01-28T00:49:44.541170469Z" level=info msg="Container 848f61fb6760aef61fcb44a187ac8d2e3fe9aa63474b6d025bdc08d813400f08: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:49:44.560157 containerd[1882]: time="2026-01-28T00:49:44.560119323Z" level=info msg="CreateContainer within sandbox \"150175a0e1777a65921669b2423fad001f96f0c717b8e245f06f3e54736c8f02\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"848f61fb6760aef61fcb44a187ac8d2e3fe9aa63474b6d025bdc08d813400f08\"" Jan 28 
00:49:44.561291 containerd[1882]: time="2026-01-28T00:49:44.560799127Z" level=info msg="StartContainer for \"848f61fb6760aef61fcb44a187ac8d2e3fe9aa63474b6d025bdc08d813400f08\"" Jan 28 00:49:44.562917 containerd[1882]: time="2026-01-28T00:49:44.562884405Z" level=info msg="connecting to shim 848f61fb6760aef61fcb44a187ac8d2e3fe9aa63474b6d025bdc08d813400f08" address="unix:///run/containerd/s/854bae66bbae3b8bc913381222a855bdfe6276d834e8778754eed78f14be01d6" protocol=ttrpc version=3 Jan 28 00:49:44.580042 systemd[1]: Started cri-containerd-848f61fb6760aef61fcb44a187ac8d2e3fe9aa63474b6d025bdc08d813400f08.scope - libcontainer container 848f61fb6760aef61fcb44a187ac8d2e3fe9aa63474b6d025bdc08d813400f08. Jan 28 00:49:44.608015 containerd[1882]: time="2026-01-28T00:49:44.607978559Z" level=info msg="StartContainer for \"848f61fb6760aef61fcb44a187ac8d2e3fe9aa63474b6d025bdc08d813400f08\" returns successfully" Jan 28 00:49:45.274945 containerd[1882]: time="2026-01-28T00:49:45.274638957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gdqnw,Uid:c202d96b-00b7-4bac-b3b9-1cca06ac3857,Namespace:kube-system,Attempt:0,}" Jan 28 00:49:45.275396 containerd[1882]: time="2026-01-28T00:49:45.275114131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-646548674d-lzmnt,Uid:542feb1a-1c08-4a08-96ca-01553cfa6389,Namespace:calico-system,Attempt:0,}" Jan 28 00:49:45.394641 systemd-networkd[1473]: calia295ec6518e: Link UP Jan 28 00:49:45.394745 systemd-networkd[1473]: calia295ec6518e: Gained carrier Jan 28 00:49:45.414498 containerd[1882]: 2026-01-28 00:49:45.323 [INFO][4948] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--gdqnw-eth0 coredns-668d6bf9bc- kube-system c202d96b-00b7-4bac-b3b9-1cca06ac3857 803 0 2026-01-28 00:49:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.3-n-ee3b3e4916 coredns-668d6bf9bc-gdqnw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia295ec6518e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96" Namespace="kube-system" Pod="coredns-668d6bf9bc-gdqnw" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--gdqnw-" Jan 28 00:49:45.414498 containerd[1882]: 2026-01-28 00:49:45.323 [INFO][4948] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96" Namespace="kube-system" Pod="coredns-668d6bf9bc-gdqnw" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--gdqnw-eth0" Jan 28 00:49:45.414498 containerd[1882]: 2026-01-28 00:49:45.351 [INFO][4972] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96" HandleID="k8s-pod-network.f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--gdqnw-eth0" Jan 28 00:49:45.414498 containerd[1882]: 2026-01-28 00:49:45.351 [INFO][4972] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96" HandleID="k8s-pod-network.f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--gdqnw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2e70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.3-n-ee3b3e4916", "pod":"coredns-668d6bf9bc-gdqnw", "timestamp":"2026-01-28 00:49:45.351036125 +0000 UTC"}, Hostname:"ci-4459.2.3-n-ee3b3e4916", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:49:45.414498 containerd[1882]: 2026-01-28 00:49:45.351 [INFO][4972] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:49:45.414498 containerd[1882]: 2026-01-28 00:49:45.351 [INFO][4972] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:49:45.414498 containerd[1882]: 2026-01-28 00:49:45.351 [INFO][4972] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-n-ee3b3e4916' Jan 28 00:49:45.414498 containerd[1882]: 2026-01-28 00:49:45.359 [INFO][4972] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:45.414498 containerd[1882]: 2026-01-28 00:49:45.363 [INFO][4972] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:45.414498 containerd[1882]: 2026-01-28 00:49:45.367 [INFO][4972] ipam/ipam.go 511: Trying affinity for 192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:45.414498 containerd[1882]: 2026-01-28 00:49:45.368 [INFO][4972] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:45.414498 containerd[1882]: 2026-01-28 00:49:45.370 [INFO][4972] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:45.414498 containerd[1882]: 2026-01-28 00:49:45.370 [INFO][4972] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:45.414498 containerd[1882]: 2026-01-28 00:49:45.372 [INFO][4972] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96 Jan 28 00:49:45.414498 containerd[1882]: 2026-01-28 00:49:45.376 [INFO][4972] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:45.414498 containerd[1882]: 2026-01-28 00:49:45.386 [INFO][4972] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.83.67/26] block=192.168.83.64/26 handle="k8s-pod-network.f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:45.414498 containerd[1882]: 2026-01-28 00:49:45.386 [INFO][4972] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.67/26] handle="k8s-pod-network.f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:45.414498 containerd[1882]: 2026-01-28 00:49:45.386 [INFO][4972] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 00:49:45.414498 containerd[1882]: 2026-01-28 00:49:45.386 [INFO][4972] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.83.67/26] IPv6=[] ContainerID="f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96" HandleID="k8s-pod-network.f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--gdqnw-eth0" Jan 28 00:49:45.415690 containerd[1882]: 2026-01-28 00:49:45.388 [INFO][4948] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96" Namespace="kube-system" Pod="coredns-668d6bf9bc-gdqnw" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--gdqnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--gdqnw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c202d96b-00b7-4bac-b3b9-1cca06ac3857", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 49, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-ee3b3e4916", ContainerID:"", Pod:"coredns-668d6bf9bc-gdqnw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calia295ec6518e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:49:45.415690 containerd[1882]: 2026-01-28 00:49:45.388 [INFO][4948] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.67/32] ContainerID="f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96" Namespace="kube-system" Pod="coredns-668d6bf9bc-gdqnw" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--gdqnw-eth0" Jan 28 00:49:45.415690 containerd[1882]: 2026-01-28 00:49:45.389 [INFO][4948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia295ec6518e ContainerID="f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96" Namespace="kube-system" Pod="coredns-668d6bf9bc-gdqnw" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--gdqnw-eth0" Jan 28 00:49:45.415690 containerd[1882]: 2026-01-28 00:49:45.396 [INFO][4948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96" Namespace="kube-system" Pod="coredns-668d6bf9bc-gdqnw" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--gdqnw-eth0" Jan 28 00:49:45.415690 containerd[1882]: 2026-01-28 00:49:45.396 [INFO][4948] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96" Namespace="kube-system" Pod="coredns-668d6bf9bc-gdqnw" 
WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--gdqnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--gdqnw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c202d96b-00b7-4bac-b3b9-1cca06ac3857", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 49, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-ee3b3e4916", ContainerID:"f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96", Pod:"coredns-668d6bf9bc-gdqnw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia295ec6518e", MAC:"06:0a:d4:46:58:0d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:49:45.415690 containerd[1882]: 
2026-01-28 00:49:45.412 [INFO][4948] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96" Namespace="kube-system" Pod="coredns-668d6bf9bc-gdqnw" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-coredns--668d6bf9bc--gdqnw-eth0" Jan 28 00:49:45.469138 kubelet[3415]: I0128 00:49:45.468928 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cmh7z" podStartSLOduration=39.468897834 podStartE2EDuration="39.468897834s" podCreationTimestamp="2026-01-28 00:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:49:45.446207269 +0000 UTC m=+46.354519644" watchObservedRunningTime="2026-01-28 00:49:45.468897834 +0000 UTC m=+46.377210209" Jan 28 00:49:45.506516 containerd[1882]: time="2026-01-28T00:49:45.506018262Z" level=info msg="connecting to shim f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96" address="unix:///run/containerd/s/8da8738ff5d568eccb610b89fad44affece9724999743e70181f00a078a6a25c" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:49:45.529145 systemd[1]: Started cri-containerd-f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96.scope - libcontainer container f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96. 
Jan 28 00:49:45.531715 systemd-networkd[1473]: cali45734e03e74: Link UP Jan 28 00:49:45.532432 systemd-networkd[1473]: cali45734e03e74: Gained carrier Jan 28 00:49:45.546800 containerd[1882]: 2026-01-28 00:49:45.334 [INFO][4960] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--n--ee3b3e4916-k8s-calico--kube--controllers--646548674d--lzmnt-eth0 calico-kube-controllers-646548674d- calico-system 542feb1a-1c08-4a08-96ca-01553cfa6389 801 0 2026-01-28 00:49:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:646548674d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.3-n-ee3b3e4916 calico-kube-controllers-646548674d-lzmnt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali45734e03e74 [] [] }} ContainerID="01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1" Namespace="calico-system" Pod="calico-kube-controllers-646548674d-lzmnt" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--kube--controllers--646548674d--lzmnt-" Jan 28 00:49:45.546800 containerd[1882]: 2026-01-28 00:49:45.334 [INFO][4960] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1" Namespace="calico-system" Pod="calico-kube-controllers-646548674d-lzmnt" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--kube--controllers--646548674d--lzmnt-eth0" Jan 28 00:49:45.546800 containerd[1882]: 2026-01-28 00:49:45.362 [INFO][4978] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1" HandleID="k8s-pod-network.01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1" 
Workload="ci--4459.2.3--n--ee3b3e4916-k8s-calico--kube--controllers--646548674d--lzmnt-eth0" Jan 28 00:49:45.546800 containerd[1882]: 2026-01-28 00:49:45.363 [INFO][4978] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1" HandleID="k8s-pod-network.01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-calico--kube--controllers--646548674d--lzmnt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3830), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.3-n-ee3b3e4916", "pod":"calico-kube-controllers-646548674d-lzmnt", "timestamp":"2026-01-28 00:49:45.362975953 +0000 UTC"}, Hostname:"ci-4459.2.3-n-ee3b3e4916", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:49:45.546800 containerd[1882]: 2026-01-28 00:49:45.363 [INFO][4978] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:49:45.546800 containerd[1882]: 2026-01-28 00:49:45.386 [INFO][4978] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:49:45.546800 containerd[1882]: 2026-01-28 00:49:45.387 [INFO][4978] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-n-ee3b3e4916' Jan 28 00:49:45.546800 containerd[1882]: 2026-01-28 00:49:45.457 [INFO][4978] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:45.546800 containerd[1882]: 2026-01-28 00:49:45.470 [INFO][4978] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:45.546800 containerd[1882]: 2026-01-28 00:49:45.484 [INFO][4978] ipam/ipam.go 511: Trying affinity for 192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:45.546800 containerd[1882]: 2026-01-28 00:49:45.487 [INFO][4978] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:45.546800 containerd[1882]: 2026-01-28 00:49:45.489 [INFO][4978] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:45.546800 containerd[1882]: 2026-01-28 00:49:45.490 [INFO][4978] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:45.546800 containerd[1882]: 2026-01-28 00:49:45.501 [INFO][4978] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1 Jan 28 00:49:45.546800 containerd[1882]: 2026-01-28 00:49:45.508 [INFO][4978] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:45.546800 containerd[1882]: 2026-01-28 00:49:45.522 [INFO][4978] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.83.68/26] block=192.168.83.64/26 handle="k8s-pod-network.01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:45.546800 containerd[1882]: 2026-01-28 00:49:45.522 [INFO][4978] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.68/26] handle="k8s-pod-network.01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:45.546800 containerd[1882]: 2026-01-28 00:49:45.522 [INFO][4978] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:49:45.546800 containerd[1882]: 2026-01-28 00:49:45.522 [INFO][4978] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.83.68/26] IPv6=[] ContainerID="01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1" HandleID="k8s-pod-network.01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-calico--kube--controllers--646548674d--lzmnt-eth0" Jan 28 00:49:45.548051 containerd[1882]: 2026-01-28 00:49:45.527 [INFO][4960] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1" Namespace="calico-system" Pod="calico-kube-controllers-646548674d-lzmnt" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--kube--controllers--646548674d--lzmnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--ee3b3e4916-k8s-calico--kube--controllers--646548674d--lzmnt-eth0", GenerateName:"calico-kube-controllers-646548674d-", Namespace:"calico-system", SelfLink:"", UID:"542feb1a-1c08-4a08-96ca-01553cfa6389", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 49, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"646548674d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-ee3b3e4916", ContainerID:"", Pod:"calico-kube-controllers-646548674d-lzmnt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45734e03e74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:49:45.548051 containerd[1882]: 2026-01-28 00:49:45.527 [INFO][4960] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.68/32] ContainerID="01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1" Namespace="calico-system" Pod="calico-kube-controllers-646548674d-lzmnt" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--kube--controllers--646548674d--lzmnt-eth0" Jan 28 00:49:45.548051 containerd[1882]: 2026-01-28 00:49:45.527 [INFO][4960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45734e03e74 ContainerID="01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1" Namespace="calico-system" Pod="calico-kube-controllers-646548674d-lzmnt" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--kube--controllers--646548674d--lzmnt-eth0" Jan 28 00:49:45.548051 containerd[1882]: 2026-01-28 00:49:45.532 [INFO][4960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1" Namespace="calico-system" Pod="calico-kube-controllers-646548674d-lzmnt" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--kube--controllers--646548674d--lzmnt-eth0" Jan 28 00:49:45.548051 containerd[1882]: 2026-01-28 00:49:45.533 [INFO][4960] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1" Namespace="calico-system" Pod="calico-kube-controllers-646548674d-lzmnt" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--kube--controllers--646548674d--lzmnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--ee3b3e4916-k8s-calico--kube--controllers--646548674d--lzmnt-eth0", GenerateName:"calico-kube-controllers-646548674d-", Namespace:"calico-system", SelfLink:"", UID:"542feb1a-1c08-4a08-96ca-01553cfa6389", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 49, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"646548674d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-ee3b3e4916", ContainerID:"01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1", Pod:"calico-kube-controllers-646548674d-lzmnt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.68/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45734e03e74", MAC:"02:09:36:76:63:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:49:45.548051 containerd[1882]: 2026-01-28 00:49:45.544 [INFO][4960] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1" Namespace="calico-system" Pod="calico-kube-controllers-646548674d-lzmnt" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--kube--controllers--646548674d--lzmnt-eth0" Jan 28 00:49:45.580129 containerd[1882]: time="2026-01-28T00:49:45.580091240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gdqnw,Uid:c202d96b-00b7-4bac-b3b9-1cca06ac3857,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96\"" Jan 28 00:49:45.583844 containerd[1882]: time="2026-01-28T00:49:45.583814143Z" level=info msg="CreateContainer within sandbox \"f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 00:49:45.609700 containerd[1882]: time="2026-01-28T00:49:45.609655482Z" level=info msg="connecting to shim 01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1" address="unix:///run/containerd/s/94419b9d301b4f3e6c76b250797d43f1523631ca6f53ba02d11795829a5499d0" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:49:45.627045 systemd[1]: Started cri-containerd-01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1.scope - libcontainer container 01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1. 
Jan 28 00:49:45.654034 containerd[1882]: time="2026-01-28T00:49:45.653983557Z" level=info msg="Container bf9e0fef7c0a106fb8832bff21315e957a5cd53a4ed544a79e6bc1bc5c4ff2d5: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:49:45.669170 containerd[1882]: time="2026-01-28T00:49:45.669133153Z" level=info msg="CreateContainer within sandbox \"f6e6a86cb7db79df7815209532c41e91009ed21bb9a259d7555f435eefc6ae96\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bf9e0fef7c0a106fb8832bff21315e957a5cd53a4ed544a79e6bc1bc5c4ff2d5\"" Jan 28 00:49:45.670390 containerd[1882]: time="2026-01-28T00:49:45.670352790Z" level=info msg="StartContainer for \"bf9e0fef7c0a106fb8832bff21315e957a5cd53a4ed544a79e6bc1bc5c4ff2d5\"" Jan 28 00:49:45.672054 containerd[1882]: time="2026-01-28T00:49:45.671971142Z" level=info msg="connecting to shim bf9e0fef7c0a106fb8832bff21315e957a5cd53a4ed544a79e6bc1bc5c4ff2d5" address="unix:///run/containerd/s/8da8738ff5d568eccb610b89fad44affece9724999743e70181f00a078a6a25c" protocol=ttrpc version=3 Jan 28 00:49:45.672734 containerd[1882]: time="2026-01-28T00:49:45.672712860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-646548674d-lzmnt,Uid:542feb1a-1c08-4a08-96ca-01553cfa6389,Namespace:calico-system,Attempt:0,} returns sandbox id \"01f25002c23be0b7beef64d0fa5e9bafc25ca82007350a2b31e8f2f8e4ebd3b1\"" Jan 28 00:49:45.674215 containerd[1882]: time="2026-01-28T00:49:45.674187680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 00:49:45.696131 systemd[1]: Started cri-containerd-bf9e0fef7c0a106fb8832bff21315e957a5cd53a4ed544a79e6bc1bc5c4ff2d5.scope - libcontainer container bf9e0fef7c0a106fb8832bff21315e957a5cd53a4ed544a79e6bc1bc5c4ff2d5. 
Jan 28 00:49:45.725048 containerd[1882]: time="2026-01-28T00:49:45.725012901Z" level=info msg="StartContainer for \"bf9e0fef7c0a106fb8832bff21315e957a5cd53a4ed544a79e6bc1bc5c4ff2d5\" returns successfully" Jan 28 00:49:45.942011 containerd[1882]: time="2026-01-28T00:49:45.941630573Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:49:45.945643 containerd[1882]: time="2026-01-28T00:49:45.945539762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 00:49:45.945820 containerd[1882]: time="2026-01-28T00:49:45.945647053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 00:49:45.946009 kubelet[3415]: E0128 00:49:45.945964 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:49:45.946088 kubelet[3415]: E0128 00:49:45.946019 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:49:45.947410 kubelet[3415]: E0128 00:49:45.947361 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p4wx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-646548674d-lzmnt_calico-system(542feb1a-1c08-4a08-96ca-01553cfa6389): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 00:49:45.948857 kubelet[3415]: E0128 00:49:45.948825 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-646548674d-lzmnt" podUID="542feb1a-1c08-4a08-96ca-01553cfa6389" Jan 28 00:49:46.056110 systemd-networkd[1473]: cali2b9608e3ab3: Gained IPv6LL Jan 28 00:49:46.273667 containerd[1882]: time="2026-01-28T00:49:46.273626720Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6b468596cf-lg47w,Uid:77965b9f-fcdf-4986-8206-fcf9912f3435,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:49:46.369289 systemd-networkd[1473]: cali807b9363446: Link UP Jan 28 00:49:46.370021 systemd-networkd[1473]: cali807b9363446: Gained carrier Jan 28 00:49:46.388623 containerd[1882]: 2026-01-28 00:49:46.312 [INFO][5138] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--lg47w-eth0 calico-apiserver-6b468596cf- calico-apiserver 77965b9f-fcdf-4986-8206-fcf9912f3435 798 0 2026-01-28 00:49:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b468596cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.3-n-ee3b3e4916 calico-apiserver-6b468596cf-lg47w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali807b9363446 [] [] }} ContainerID="bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063" Namespace="calico-apiserver" Pod="calico-apiserver-6b468596cf-lg47w" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--lg47w-" Jan 28 00:49:46.388623 containerd[1882]: 2026-01-28 00:49:46.312 [INFO][5138] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063" Namespace="calico-apiserver" Pod="calico-apiserver-6b468596cf-lg47w" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--lg47w-eth0" Jan 28 00:49:46.388623 containerd[1882]: 2026-01-28 00:49:46.330 [INFO][5149] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063" 
HandleID="k8s-pod-network.bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--lg47w-eth0" Jan 28 00:49:46.388623 containerd[1882]: 2026-01-28 00:49:46.330 [INFO][5149] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063" HandleID="k8s-pod-network.bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--lg47w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b190), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.3-n-ee3b3e4916", "pod":"calico-apiserver-6b468596cf-lg47w", "timestamp":"2026-01-28 00:49:46.330092326 +0000 UTC"}, Hostname:"ci-4459.2.3-n-ee3b3e4916", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:49:46.388623 containerd[1882]: 2026-01-28 00:49:46.330 [INFO][5149] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:49:46.388623 containerd[1882]: 2026-01-28 00:49:46.330 [INFO][5149] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:49:46.388623 containerd[1882]: 2026-01-28 00:49:46.330 [INFO][5149] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-n-ee3b3e4916' Jan 28 00:49:46.388623 containerd[1882]: 2026-01-28 00:49:46.337 [INFO][5149] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:46.388623 containerd[1882]: 2026-01-28 00:49:46.341 [INFO][5149] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:46.388623 containerd[1882]: 2026-01-28 00:49:46.344 [INFO][5149] ipam/ipam.go 511: Trying affinity for 192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:46.388623 containerd[1882]: 2026-01-28 00:49:46.346 [INFO][5149] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:46.388623 containerd[1882]: 2026-01-28 00:49:46.348 [INFO][5149] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:46.388623 containerd[1882]: 2026-01-28 00:49:46.348 [INFO][5149] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:46.388623 containerd[1882]: 2026-01-28 00:49:46.349 [INFO][5149] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063 Jan 28 00:49:46.388623 containerd[1882]: 2026-01-28 00:49:46.358 [INFO][5149] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:46.388623 containerd[1882]: 2026-01-28 00:49:46.363 [INFO][5149] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.83.69/26] block=192.168.83.64/26 handle="k8s-pod-network.bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:46.388623 containerd[1882]: 2026-01-28 00:49:46.364 [INFO][5149] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.69/26] handle="k8s-pod-network.bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:46.388623 containerd[1882]: 2026-01-28 00:49:46.364 [INFO][5149] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:49:46.388623 containerd[1882]: 2026-01-28 00:49:46.364 [INFO][5149] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.83.69/26] IPv6=[] ContainerID="bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063" HandleID="k8s-pod-network.bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--lg47w-eth0" Jan 28 00:49:46.390055 containerd[1882]: 2026-01-28 00:49:46.366 [INFO][5138] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063" Namespace="calico-apiserver" Pod="calico-apiserver-6b468596cf-lg47w" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--lg47w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--lg47w-eth0", GenerateName:"calico-apiserver-6b468596cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"77965b9f-fcdf-4986-8206-fcf9912f3435", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 49, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6b468596cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-ee3b3e4916", ContainerID:"", Pod:"calico-apiserver-6b468596cf-lg47w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali807b9363446", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:49:46.390055 containerd[1882]: 2026-01-28 00:49:46.366 [INFO][5138] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.69/32] ContainerID="bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063" Namespace="calico-apiserver" Pod="calico-apiserver-6b468596cf-lg47w" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--lg47w-eth0" Jan 28 00:49:46.390055 containerd[1882]: 2026-01-28 00:49:46.366 [INFO][5138] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali807b9363446 ContainerID="bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063" Namespace="calico-apiserver" Pod="calico-apiserver-6b468596cf-lg47w" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--lg47w-eth0" Jan 28 00:49:46.390055 containerd[1882]: 2026-01-28 00:49:46.370 [INFO][5138] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063" Namespace="calico-apiserver" Pod="calico-apiserver-6b468596cf-lg47w" 
WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--lg47w-eth0" Jan 28 00:49:46.390055 containerd[1882]: 2026-01-28 00:49:46.370 [INFO][5138] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063" Namespace="calico-apiserver" Pod="calico-apiserver-6b468596cf-lg47w" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--lg47w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--lg47w-eth0", GenerateName:"calico-apiserver-6b468596cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"77965b9f-fcdf-4986-8206-fcf9912f3435", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 49, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b468596cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-ee3b3e4916", ContainerID:"bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063", Pod:"calico-apiserver-6b468596cf-lg47w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali807b9363446", MAC:"9a:12:d5:93:c5:06", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:49:46.390055 containerd[1882]: 2026-01-28 00:49:46.386 [INFO][5138] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063" Namespace="calico-apiserver" Pod="calico-apiserver-6b468596cf-lg47w" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--lg47w-eth0" Jan 28 00:49:46.443266 containerd[1882]: time="2026-01-28T00:49:46.442895420Z" level=info msg="connecting to shim bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063" address="unix:///run/containerd/s/58ef2776c043ee7e0c264dafceb756fa1e4b9e8673cca67c10ef9e30da3c3a1e" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:49:46.448959 kubelet[3415]: E0128 00:49:46.447523 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-646548674d-lzmnt" podUID="542feb1a-1c08-4a08-96ca-01553cfa6389" Jan 28 00:49:46.470071 systemd[1]: Started cri-containerd-bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063.scope - libcontainer container bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063. 
Jan 28 00:49:46.497803 kubelet[3415]: I0128 00:49:46.497524 3415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gdqnw" podStartSLOduration=40.497508721 podStartE2EDuration="40.497508721s" podCreationTimestamp="2026-01-28 00:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:49:46.463128768 +0000 UTC m=+47.371441143" watchObservedRunningTime="2026-01-28 00:49:46.497508721 +0000 UTC m=+47.405821088" Jan 28 00:49:46.505057 systemd-networkd[1473]: calia295ec6518e: Gained IPv6LL Jan 28 00:49:46.519708 containerd[1882]: time="2026-01-28T00:49:46.519590773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b468596cf-lg47w,Uid:77965b9f-fcdf-4986-8206-fcf9912f3435,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bd7b22348a61511d358c4795fae4a013e5453862e5d35bd6a9e879c8d59e2063\"" Jan 28 00:49:46.524494 containerd[1882]: time="2026-01-28T00:49:46.524020025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:49:46.774410 containerd[1882]: time="2026-01-28T00:49:46.774344519Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:49:46.783188 containerd[1882]: time="2026-01-28T00:49:46.783087668Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:49:46.783188 containerd[1882]: time="2026-01-28T00:49:46.783175926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:49:46.783662 kubelet[3415]: E0128 00:49:46.783626 3415 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:49:46.783816 kubelet[3415]: E0128 00:49:46.783800 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:49:46.784037 kubelet[3415]: E0128 00:49:46.784004 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzxj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b468596cf-lg47w_calico-apiserver(77965b9f-fcdf-4986-8206-fcf9912f3435): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:49:46.786014 kubelet[3415]: E0128 00:49:46.785956 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" podUID="77965b9f-fcdf-4986-8206-fcf9912f3435" Jan 28 00:49:46.824152 systemd-networkd[1473]: cali45734e03e74: Gained IPv6LL Jan 28 00:49:47.274786 containerd[1882]: time="2026-01-28T00:49:47.274245349Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6b468596cf-28ns6,Uid:098ac4f0-5200-473e-8687-26a347b0e3eb,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:49:47.274786 containerd[1882]: time="2026-01-28T00:49:47.274399433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mzfbp,Uid:bcaf25ee-c8ae-4368-867f-6ea868477814,Namespace:calico-system,Attempt:0,}" Jan 28 00:49:47.274786 containerd[1882]: time="2026-01-28T00:49:47.274550414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-thw89,Uid:f44a3119-d828-410a-8e9a-303c84462c56,Namespace:calico-system,Attempt:0,}" Jan 28 00:49:47.411257 systemd-networkd[1473]: caliaabbcb75156: Link UP Jan 28 00:49:47.412447 systemd-networkd[1473]: caliaabbcb75156: Gained carrier Jan 28 00:49:47.430194 containerd[1882]: 2026-01-28 00:49:47.331 [INFO][5223] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--n--ee3b3e4916-k8s-csi--node--driver--mzfbp-eth0 csi-node-driver- calico-system bcaf25ee-c8ae-4368-867f-6ea868477814 687 0 2026-01-28 00:49:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.3-n-ee3b3e4916 csi-node-driver-mzfbp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaabbcb75156 [] [] }} ContainerID="928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58" Namespace="calico-system" Pod="csi-node-driver-mzfbp" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-csi--node--driver--mzfbp-" Jan 28 00:49:47.430194 containerd[1882]: 2026-01-28 00:49:47.331 [INFO][5223] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58" 
Namespace="calico-system" Pod="csi-node-driver-mzfbp" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-csi--node--driver--mzfbp-eth0" Jan 28 00:49:47.430194 containerd[1882]: 2026-01-28 00:49:47.367 [INFO][5247] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58" HandleID="k8s-pod-network.928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-csi--node--driver--mzfbp-eth0" Jan 28 00:49:47.430194 containerd[1882]: 2026-01-28 00:49:47.368 [INFO][5247] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58" HandleID="k8s-pod-network.928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-csi--node--driver--mzfbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.3-n-ee3b3e4916", "pod":"csi-node-driver-mzfbp", "timestamp":"2026-01-28 00:49:47.367813349 +0000 UTC"}, Hostname:"ci-4459.2.3-n-ee3b3e4916", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:49:47.430194 containerd[1882]: 2026-01-28 00:49:47.368 [INFO][5247] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:49:47.430194 containerd[1882]: 2026-01-28 00:49:47.368 [INFO][5247] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:49:47.430194 containerd[1882]: 2026-01-28 00:49:47.368 [INFO][5247] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-n-ee3b3e4916' Jan 28 00:49:47.430194 containerd[1882]: 2026-01-28 00:49:47.376 [INFO][5247] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.430194 containerd[1882]: 2026-01-28 00:49:47.381 [INFO][5247] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.430194 containerd[1882]: 2026-01-28 00:49:47.386 [INFO][5247] ipam/ipam.go 511: Trying affinity for 192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.430194 containerd[1882]: 2026-01-28 00:49:47.387 [INFO][5247] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.430194 containerd[1882]: 2026-01-28 00:49:47.389 [INFO][5247] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.430194 containerd[1882]: 2026-01-28 00:49:47.389 [INFO][5247] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.430194 containerd[1882]: 2026-01-28 00:49:47.390 [INFO][5247] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58 Jan 28 00:49:47.430194 containerd[1882]: 2026-01-28 00:49:47.395 [INFO][5247] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.430194 containerd[1882]: 2026-01-28 00:49:47.405 [INFO][5247] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.83.70/26] block=192.168.83.64/26 handle="k8s-pod-network.928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.430194 containerd[1882]: 2026-01-28 00:49:47.405 [INFO][5247] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.70/26] handle="k8s-pod-network.928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.430194 containerd[1882]: 2026-01-28 00:49:47.405 [INFO][5247] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:49:47.430194 containerd[1882]: 2026-01-28 00:49:47.405 [INFO][5247] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.83.70/26] IPv6=[] ContainerID="928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58" HandleID="k8s-pod-network.928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-csi--node--driver--mzfbp-eth0" Jan 28 00:49:47.430751 containerd[1882]: 2026-01-28 00:49:47.408 [INFO][5223] cni-plugin/k8s.go 418: Populated endpoint ContainerID="928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58" Namespace="calico-system" Pod="csi-node-driver-mzfbp" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-csi--node--driver--mzfbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--ee3b3e4916-k8s-csi--node--driver--mzfbp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bcaf25ee-c8ae-4368-867f-6ea868477814", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 49, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-ee3b3e4916", ContainerID:"", Pod:"csi-node-driver-mzfbp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaabbcb75156", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:49:47.430751 containerd[1882]: 2026-01-28 00:49:47.408 [INFO][5223] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.70/32] ContainerID="928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58" Namespace="calico-system" Pod="csi-node-driver-mzfbp" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-csi--node--driver--mzfbp-eth0" Jan 28 00:49:47.430751 containerd[1882]: 2026-01-28 00:49:47.408 [INFO][5223] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaabbcb75156 ContainerID="928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58" Namespace="calico-system" Pod="csi-node-driver-mzfbp" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-csi--node--driver--mzfbp-eth0" Jan 28 00:49:47.430751 containerd[1882]: 2026-01-28 00:49:47.412 [INFO][5223] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58" Namespace="calico-system" Pod="csi-node-driver-mzfbp" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-csi--node--driver--mzfbp-eth0" Jan 28 00:49:47.430751 
containerd[1882]: 2026-01-28 00:49:47.413 [INFO][5223] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58" Namespace="calico-system" Pod="csi-node-driver-mzfbp" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-csi--node--driver--mzfbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--ee3b3e4916-k8s-csi--node--driver--mzfbp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bcaf25ee-c8ae-4368-867f-6ea868477814", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 49, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-ee3b3e4916", ContainerID:"928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58", Pod:"csi-node-driver-mzfbp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaabbcb75156", MAC:"3e:70:a0:9c:45:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:49:47.430751 containerd[1882]: 
2026-01-28 00:49:47.427 [INFO][5223] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58" Namespace="calico-system" Pod="csi-node-driver-mzfbp" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-csi--node--driver--mzfbp-eth0" Jan 28 00:49:47.449507 kubelet[3415]: E0128 00:49:47.449295 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" podUID="77965b9f-fcdf-4986-8206-fcf9912f3435" Jan 28 00:49:47.449785 kubelet[3415]: E0128 00:49:47.449738 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-646548674d-lzmnt" podUID="542feb1a-1c08-4a08-96ca-01553cfa6389" Jan 28 00:49:47.480031 containerd[1882]: time="2026-01-28T00:49:47.479983944Z" level=info msg="connecting to shim 928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58" address="unix:///run/containerd/s/84ce55dcafca255f533819919764df9ae2313ba343be03b884a4ed341c089e97" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:49:47.517167 systemd[1]: Started 
cri-containerd-928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58.scope - libcontainer container 928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58. Jan 28 00:49:47.546009 systemd-networkd[1473]: calif288531d6f1: Link UP Jan 28 00:49:47.547949 systemd-networkd[1473]: calif288531d6f1: Gained carrier Jan 28 00:49:47.560159 containerd[1882]: time="2026-01-28T00:49:47.560121584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mzfbp,Uid:bcaf25ee-c8ae-4368-867f-6ea868477814,Namespace:calico-system,Attempt:0,} returns sandbox id \"928f7e3c0435fc98ed862c4f66f2e9a91f6c704a43c151651ae6caa2e6edad58\"" Jan 28 00:49:47.561437 containerd[1882]: time="2026-01-28T00:49:47.561376525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 00:49:47.567624 containerd[1882]: 2026-01-28 00:49:47.337 [INFO][5212] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--28ns6-eth0 calico-apiserver-6b468596cf- calico-apiserver 098ac4f0-5200-473e-8687-26a347b0e3eb 804 0 2026-01-28 00:49:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b468596cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.3-n-ee3b3e4916 calico-apiserver-6b468596cf-28ns6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif288531d6f1 [] [] }} ContainerID="298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545" Namespace="calico-apiserver" Pod="calico-apiserver-6b468596cf-28ns6" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--28ns6-" Jan 28 00:49:47.567624 containerd[1882]: 2026-01-28 00:49:47.337 [INFO][5212] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545" Namespace="calico-apiserver" Pod="calico-apiserver-6b468596cf-28ns6" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--28ns6-eth0" Jan 28 00:49:47.567624 containerd[1882]: 2026-01-28 00:49:47.373 [INFO][5252] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545" HandleID="k8s-pod-network.298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--28ns6-eth0" Jan 28 00:49:47.567624 containerd[1882]: 2026-01-28 00:49:47.373 [INFO][5252] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545" HandleID="k8s-pod-network.298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--28ns6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.3-n-ee3b3e4916", "pod":"calico-apiserver-6b468596cf-28ns6", "timestamp":"2026-01-28 00:49:47.373366667 +0000 UTC"}, Hostname:"ci-4459.2.3-n-ee3b3e4916", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:49:47.567624 containerd[1882]: 2026-01-28 00:49:47.373 [INFO][5252] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:49:47.567624 containerd[1882]: 2026-01-28 00:49:47.405 [INFO][5252] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:49:47.567624 containerd[1882]: 2026-01-28 00:49:47.405 [INFO][5252] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-n-ee3b3e4916' Jan 28 00:49:47.567624 containerd[1882]: 2026-01-28 00:49:47.479 [INFO][5252] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.567624 containerd[1882]: 2026-01-28 00:49:47.499 [INFO][5252] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.567624 containerd[1882]: 2026-01-28 00:49:47.507 [INFO][5252] ipam/ipam.go 511: Trying affinity for 192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.567624 containerd[1882]: 2026-01-28 00:49:47.513 [INFO][5252] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.567624 containerd[1882]: 2026-01-28 00:49:47.516 [INFO][5252] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.567624 containerd[1882]: 2026-01-28 00:49:47.516 [INFO][5252] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.567624 containerd[1882]: 2026-01-28 00:49:47.518 [INFO][5252] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545 Jan 28 00:49:47.567624 containerd[1882]: 2026-01-28 00:49:47.523 [INFO][5252] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.567624 containerd[1882]: 2026-01-28 00:49:47.532 [INFO][5252] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.83.71/26] block=192.168.83.64/26 handle="k8s-pod-network.298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.567624 containerd[1882]: 2026-01-28 00:49:47.532 [INFO][5252] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.71/26] handle="k8s-pod-network.298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.567624 containerd[1882]: 2026-01-28 00:49:47.532 [INFO][5252] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 00:49:47.567624 containerd[1882]: 2026-01-28 00:49:47.532 [INFO][5252] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.83.71/26] IPv6=[] ContainerID="298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545" HandleID="k8s-pod-network.298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--28ns6-eth0" Jan 28 00:49:47.568144 containerd[1882]: 2026-01-28 00:49:47.536 [INFO][5212] cni-plugin/k8s.go 418: Populated endpoint ContainerID="298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545" Namespace="calico-apiserver" Pod="calico-apiserver-6b468596cf-28ns6" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--28ns6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--28ns6-eth0", GenerateName:"calico-apiserver-6b468596cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"098ac4f0-5200-473e-8687-26a347b0e3eb", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 49, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6b468596cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-ee3b3e4916", ContainerID:"", Pod:"calico-apiserver-6b468596cf-28ns6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif288531d6f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:49:47.568144 containerd[1882]: 2026-01-28 00:49:47.536 [INFO][5212] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.71/32] ContainerID="298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545" Namespace="calico-apiserver" Pod="calico-apiserver-6b468596cf-28ns6" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--28ns6-eth0" Jan 28 00:49:47.568144 containerd[1882]: 2026-01-28 00:49:47.536 [INFO][5212] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif288531d6f1 ContainerID="298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545" Namespace="calico-apiserver" Pod="calico-apiserver-6b468596cf-28ns6" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--28ns6-eth0" Jan 28 00:49:47.568144 containerd[1882]: 2026-01-28 00:49:47.548 [INFO][5212] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545" Namespace="calico-apiserver" Pod="calico-apiserver-6b468596cf-28ns6" 
WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--28ns6-eth0" Jan 28 00:49:47.568144 containerd[1882]: 2026-01-28 00:49:47.548 [INFO][5212] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545" Namespace="calico-apiserver" Pod="calico-apiserver-6b468596cf-28ns6" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--28ns6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--28ns6-eth0", GenerateName:"calico-apiserver-6b468596cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"098ac4f0-5200-473e-8687-26a347b0e3eb", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 49, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b468596cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-ee3b3e4916", ContainerID:"298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545", Pod:"calico-apiserver-6b468596cf-28ns6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif288531d6f1", MAC:"ea:d3:80:1c:ee:8f", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:49:47.568144 containerd[1882]: 2026-01-28 00:49:47.564 [INFO][5212] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545" Namespace="calico-apiserver" Pod="calico-apiserver-6b468596cf-28ns6" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-calico--apiserver--6b468596cf--28ns6-eth0" Jan 28 00:49:47.622084 containerd[1882]: time="2026-01-28T00:49:47.621948565Z" level=info msg="connecting to shim 298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545" address="unix:///run/containerd/s/9984e94d8897cd2a444bfa5774cd56614347e780158cb179012cca2e3bc5745a" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:49:47.650301 systemd-networkd[1473]: cali63e08411461: Link UP Jan 28 00:49:47.651841 systemd-networkd[1473]: cali63e08411461: Gained carrier Jan 28 00:49:47.656076 systemd[1]: Started cri-containerd-298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545.scope - libcontainer container 298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545. 
Jan 28 00:49:47.676747 containerd[1882]: 2026-01-28 00:49:47.351 [INFO][5232] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--n--ee3b3e4916-k8s-goldmane--666569f655--thw89-eth0 goldmane-666569f655- calico-system f44a3119-d828-410a-8e9a-303c84462c56 800 0 2026-01-28 00:49:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.3-n-ee3b3e4916 goldmane-666569f655-thw89 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali63e08411461 [] [] }} ContainerID="12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a" Namespace="calico-system" Pod="goldmane-666569f655-thw89" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-goldmane--666569f655--thw89-" Jan 28 00:49:47.676747 containerd[1882]: 2026-01-28 00:49:47.352 [INFO][5232] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a" Namespace="calico-system" Pod="goldmane-666569f655-thw89" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-goldmane--666569f655--thw89-eth0" Jan 28 00:49:47.676747 containerd[1882]: 2026-01-28 00:49:47.382 [INFO][5261] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a" HandleID="k8s-pod-network.12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-goldmane--666569f655--thw89-eth0" Jan 28 00:49:47.676747 containerd[1882]: 2026-01-28 00:49:47.382 [INFO][5261] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a" HandleID="k8s-pod-network.12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a" 
Workload="ci--4459.2.3--n--ee3b3e4916-k8s-goldmane--666569f655--thw89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.3-n-ee3b3e4916", "pod":"goldmane-666569f655-thw89", "timestamp":"2026-01-28 00:49:47.382685489 +0000 UTC"}, Hostname:"ci-4459.2.3-n-ee3b3e4916", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:49:47.676747 containerd[1882]: 2026-01-28 00:49:47.382 [INFO][5261] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:49:47.676747 containerd[1882]: 2026-01-28 00:49:47.533 [INFO][5261] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:49:47.676747 containerd[1882]: 2026-01-28 00:49:47.533 [INFO][5261] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-n-ee3b3e4916' Jan 28 00:49:47.676747 containerd[1882]: 2026-01-28 00:49:47.576 [INFO][5261] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.676747 containerd[1882]: 2026-01-28 00:49:47.598 [INFO][5261] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.676747 containerd[1882]: 2026-01-28 00:49:47.605 [INFO][5261] ipam/ipam.go 511: Trying affinity for 192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.676747 containerd[1882]: 2026-01-28 00:49:47.609 [INFO][5261] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.676747 containerd[1882]: 2026-01-28 00:49:47.613 [INFO][5261] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.64/26 host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.676747 
containerd[1882]: 2026-01-28 00:49:47.613 [INFO][5261] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.83.64/26 handle="k8s-pod-network.12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.676747 containerd[1882]: 2026-01-28 00:49:47.616 [INFO][5261] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a Jan 28 00:49:47.676747 containerd[1882]: 2026-01-28 00:49:47.624 [INFO][5261] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.83.64/26 handle="k8s-pod-network.12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.676747 containerd[1882]: 2026-01-28 00:49:47.638 [INFO][5261] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.83.72/26] block=192.168.83.64/26 handle="k8s-pod-network.12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.676747 containerd[1882]: 2026-01-28 00:49:47.638 [INFO][5261] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.72/26] handle="k8s-pod-network.12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a" host="ci-4459.2.3-n-ee3b3e4916" Jan 28 00:49:47.676747 containerd[1882]: 2026-01-28 00:49:47.638 [INFO][5261] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 00:49:47.676747 containerd[1882]: 2026-01-28 00:49:47.638 [INFO][5261] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.83.72/26] IPv6=[] ContainerID="12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a" HandleID="k8s-pod-network.12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a" Workload="ci--4459.2.3--n--ee3b3e4916-k8s-goldmane--666569f655--thw89-eth0" Jan 28 00:49:47.677760 containerd[1882]: 2026-01-28 00:49:47.644 [INFO][5232] cni-plugin/k8s.go 418: Populated endpoint ContainerID="12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a" Namespace="calico-system" Pod="goldmane-666569f655-thw89" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-goldmane--666569f655--thw89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--ee3b3e4916-k8s-goldmane--666569f655--thw89-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"f44a3119-d828-410a-8e9a-303c84462c56", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 49, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-ee3b3e4916", ContainerID:"", Pod:"goldmane-666569f655-thw89", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.83.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali63e08411461", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:49:47.677760 containerd[1882]: 2026-01-28 00:49:47.644 [INFO][5232] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.72/32] ContainerID="12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a" Namespace="calico-system" Pod="goldmane-666569f655-thw89" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-goldmane--666569f655--thw89-eth0" Jan 28 00:49:47.677760 containerd[1882]: 2026-01-28 00:49:47.644 [INFO][5232] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63e08411461 ContainerID="12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a" Namespace="calico-system" Pod="goldmane-666569f655-thw89" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-goldmane--666569f655--thw89-eth0" Jan 28 00:49:47.677760 containerd[1882]: 2026-01-28 00:49:47.654 [INFO][5232] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a" Namespace="calico-system" Pod="goldmane-666569f655-thw89" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-goldmane--666569f655--thw89-eth0" Jan 28 00:49:47.677760 containerd[1882]: 2026-01-28 00:49:47.655 [INFO][5232] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a" Namespace="calico-system" Pod="goldmane-666569f655-thw89" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-goldmane--666569f655--thw89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--n--ee3b3e4916-k8s-goldmane--666569f655--thw89-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", 
UID:"f44a3119-d828-410a-8e9a-303c84462c56", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 49, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-n-ee3b3e4916", ContainerID:"12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a", Pod:"goldmane-666569f655-thw89", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.83.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali63e08411461", MAC:"66:2f:d6:89:95:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:49:47.677760 containerd[1882]: 2026-01-28 00:49:47.671 [INFO][5232] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a" Namespace="calico-system" Pod="goldmane-666569f655-thw89" WorkloadEndpoint="ci--4459.2.3--n--ee3b3e4916-k8s-goldmane--666569f655--thw89-eth0" Jan 28 00:49:47.719564 containerd[1882]: time="2026-01-28T00:49:47.719528733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b468596cf-28ns6,Uid:098ac4f0-5200-473e-8687-26a347b0e3eb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"298a276fafcad2c2ce4d8b8385f97b0423b15e59018e0df67119d68363fd1545\"" Jan 28 00:49:47.731771 containerd[1882]: time="2026-01-28T00:49:47.731732657Z" 
level=info msg="connecting to shim 12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a" address="unix:///run/containerd/s/e9d9b65c8e248bd3e8067e56c4ef3c25c0a000847b03bde92599e3aba6fc05e9" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:49:47.748046 systemd[1]: Started cri-containerd-12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a.scope - libcontainer container 12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a. Jan 28 00:49:47.780292 containerd[1882]: time="2026-01-28T00:49:47.780246505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-thw89,Uid:f44a3119-d828-410a-8e9a-303c84462c56,Namespace:calico-system,Attempt:0,} returns sandbox id \"12db76d09afc750571dbd6fdf9d7df2ddd7a8e3a9cb654ecfd1374db1e51b62a\"" Jan 28 00:49:47.841832 containerd[1882]: time="2026-01-28T00:49:47.841551038Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:49:47.845710 containerd[1882]: time="2026-01-28T00:49:47.845665593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 00:49:47.845851 containerd[1882]: time="2026-01-28T00:49:47.845678545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 00:49:47.846043 kubelet[3415]: E0128 00:49:47.846004 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:49:47.846736 kubelet[3415]: E0128 00:49:47.846342 3415 kuberuntime_image.go:55] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:49:47.846993 kubelet[3415]: E0128 00:49:47.846562 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t2vd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorPr
ofile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzfbp_calico-system(bcaf25ee-c8ae-4368-867f-6ea868477814): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 00:49:47.847261 containerd[1882]: time="2026-01-28T00:49:47.847174670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:49:48.086607 containerd[1882]: time="2026-01-28T00:49:48.086388688Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:49:48.095284 containerd[1882]: time="2026-01-28T00:49:48.095096412Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:49:48.095284 containerd[1882]: time="2026-01-28T00:49:48.095146398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:49:48.095406 kubelet[3415]: E0128 00:49:48.095294 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:49:48.095406 kubelet[3415]: E0128 00:49:48.095342 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:49:48.096064 kubelet[3415]: E0128 00:49:48.095527 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlsdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b468596cf-28ns6_calico-apiserver(098ac4f0-5200-473e-8687-26a347b0e3eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:49:48.096474 containerd[1882]: time="2026-01-28T00:49:48.096231566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 00:49:48.097698 kubelet[3415]: E0128 00:49:48.097655 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" podUID="098ac4f0-5200-473e-8687-26a347b0e3eb" Jan 28 00:49:48.104162 systemd-networkd[1473]: 
cali807b9363446: Gained IPv6LL Jan 28 00:49:48.395093 containerd[1882]: time="2026-01-28T00:49:48.394962881Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:49:48.399137 containerd[1882]: time="2026-01-28T00:49:48.399042571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 00:49:48.399137 containerd[1882]: time="2026-01-28T00:49:48.399097860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 00:49:48.399289 kubelet[3415]: E0128 00:49:48.399238 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:49:48.399345 kubelet[3415]: E0128 00:49:48.399285 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:49:48.399570 kubelet[3415]: E0128 00:49:48.399507 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8r8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-thw89_calico-system(f44a3119-d828-410a-8e9a-303c84462c56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 00:49:48.400029 containerd[1882]: time="2026-01-28T00:49:48.399969814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 00:49:48.401542 kubelet[3415]: E0128 00:49:48.401501 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-thw89" podUID="f44a3119-d828-410a-8e9a-303c84462c56" Jan 28 00:49:48.454235 kubelet[3415]: E0128 00:49:48.454156 3415 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-thw89" podUID="f44a3119-d828-410a-8e9a-303c84462c56" Jan 28 00:49:48.457567 kubelet[3415]: E0128 00:49:48.457539 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" podUID="098ac4f0-5200-473e-8687-26a347b0e3eb" Jan 28 00:49:48.457993 kubelet[3415]: E0128 00:49:48.457939 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" podUID="77965b9f-fcdf-4986-8206-fcf9912f3435" Jan 28 00:49:48.640595 containerd[1882]: time="2026-01-28T00:49:48.640408933Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:49:48.644500 containerd[1882]: 
time="2026-01-28T00:49:48.644360843Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 00:49:48.644500 containerd[1882]: time="2026-01-28T00:49:48.644413309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 00:49:48.644942 kubelet[3415]: E0128 00:49:48.644795 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:49:48.645408 kubelet[3415]: E0128 00:49:48.644980 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:49:48.645408 kubelet[3415]: E0128 00:49:48.645101 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t2vd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzfbp_calico-system(bcaf25ee-c8ae-4368-867f-6ea868477814): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 00:49:48.646309 kubelet[3415]: E0128 00:49:48.646270 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814" Jan 28 00:49:48.744058 systemd-networkd[1473]: calif288531d6f1: Gained IPv6LL Jan 28 00:49:48.936337 systemd-networkd[1473]: caliaabbcb75156: Gained IPv6LL Jan 28 00:49:49.458383 kubelet[3415]: E0128 00:49:49.458342 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-thw89" podUID="f44a3119-d828-410a-8e9a-303c84462c56" Jan 28 00:49:49.459409 kubelet[3415]: E0128 00:49:49.458606 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" podUID="098ac4f0-5200-473e-8687-26a347b0e3eb" Jan 28 00:49:49.459409 kubelet[3415]: E0128 00:49:49.458906 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814" Jan 28 00:49:49.512101 systemd-networkd[1473]: cali63e08411461: Gained IPv6LL Jan 28 00:49:52.275740 containerd[1882]: time="2026-01-28T00:49:52.275606037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:49:52.522951 containerd[1882]: time="2026-01-28T00:49:52.522897385Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:49:52.527135 containerd[1882]: time="2026-01-28T00:49:52.527039504Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:49:52.527135 containerd[1882]: time="2026-01-28T00:49:52.527119730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 00:49:52.527599 kubelet[3415]: E0128 00:49:52.527547 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:49:52.528888 kubelet[3415]: E0128 00:49:52.527886 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:49:52.528978 kubelet[3415]: E0128 00:49:52.528046 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:79d9755e231c4919900bab3892802a38,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-th6hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcf746948-lbd7r_calico-system(2b821f34-27f6-484c-9dd8-726df28b75d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 00:49:52.530719 containerd[1882]: time="2026-01-28T00:49:52.530672680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 
00:49:52.796467 containerd[1882]: time="2026-01-28T00:49:52.796198430Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:49:52.801056 containerd[1882]: time="2026-01-28T00:49:52.801011040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:49:52.801138 containerd[1882]: time="2026-01-28T00:49:52.801104954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 00:49:52.801597 kubelet[3415]: E0128 00:49:52.801249 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:49:52.801597 kubelet[3415]: E0128 00:49:52.801306 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:49:52.801597 kubelet[3415]: E0128 00:49:52.801408 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-th6hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcf746948-lbd7r_calico-system(2b821f34-27f6-484c-9dd8-726df28b75d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 00:49:52.802578 kubelet[3415]: E0128 00:49:52.802548 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf746948-lbd7r" podUID="2b821f34-27f6-484c-9dd8-726df28b75d8" Jan 28 00:49:59.278183 containerd[1882]: time="2026-01-28T00:49:59.278151979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:49:59.529638 containerd[1882]: time="2026-01-28T00:49:59.529476370Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:49:59.533349 containerd[1882]: time="2026-01-28T00:49:59.533251182Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:49:59.533349 containerd[1882]: time="2026-01-28T00:49:59.533310232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:49:59.533631 
kubelet[3415]: E0128 00:49:59.533583 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:49:59.534163 kubelet[3415]: E0128 00:49:59.533638 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:49:59.534163 kubelet[3415]: E0128 00:49:59.533746 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzxj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b468596cf-lg47w_calico-apiserver(77965b9f-fcdf-4986-8206-fcf9912f3435): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:49:59.535423 kubelet[3415]: E0128 00:49:59.535380 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" podUID="77965b9f-fcdf-4986-8206-fcf9912f3435" Jan 28 00:50:01.275990 containerd[1882]: time="2026-01-28T00:50:01.275914261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 00:50:01.533278 containerd[1882]: time="2026-01-28T00:50:01.533017708Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:50:01.537144 containerd[1882]: time="2026-01-28T00:50:01.537019055Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 00:50:01.537144 containerd[1882]: time="2026-01-28T00:50:01.537110874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 00:50:01.537318 kubelet[3415]: E0128 00:50:01.537267 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:50:01.537579 kubelet[3415]: E0128 00:50:01.537323 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:50:01.537579 kubelet[3415]: E0128 00:50:01.537456 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8r8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,Su
bPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-thw89_calico-system(f44a3119-d828-410a-8e9a-303c84462c56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 00:50:01.539430 kubelet[3415]: E0128 00:50:01.539396 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-thw89" podUID="f44a3119-d828-410a-8e9a-303c84462c56" Jan 28 00:50:02.275739 containerd[1882]: time="2026-01-28T00:50:02.275433447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 00:50:02.540317 containerd[1882]: time="2026-01-28T00:50:02.540165175Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:50:02.545058 containerd[1882]: time="2026-01-28T00:50:02.545016811Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 00:50:02.545260 containerd[1882]: time="2026-01-28T00:50:02.545105486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 00:50:02.545292 kubelet[3415]: E0128 00:50:02.545242 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:50:02.545530 kubelet[3415]: E0128 00:50:02.545298 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:50:02.545530 kubelet[3415]: E0128 00:50:02.545504 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t2vd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzfbp_calico-system(bcaf25ee-c8ae-4368-867f-6ea868477814): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 00:50:02.546191 containerd[1882]: time="2026-01-28T00:50:02.545754738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 00:50:02.788332 containerd[1882]: time="2026-01-28T00:50:02.788185185Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:50:02.791736 containerd[1882]: time="2026-01-28T00:50:02.791405443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 00:50:02.791736 containerd[1882]: time="2026-01-28T00:50:02.791410012Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 00:50:02.791827 kubelet[3415]: E0128 00:50:02.791625 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:50:02.791827 kubelet[3415]: E0128 00:50:02.791673 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 
00:50:02.792028 kubelet[3415]: E0128 00:50:02.791860 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p4wx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-646548674d-lzmnt_calico-system(542feb1a-1c08-4a08-96ca-01553cfa6389): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 00:50:02.793131 kubelet[3415]: E0128 00:50:02.793103 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-646548674d-lzmnt" podUID="542feb1a-1c08-4a08-96ca-01553cfa6389" Jan 28 00:50:02.793216 containerd[1882]: time="2026-01-28T00:50:02.793192186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 00:50:03.070472 
containerd[1882]: time="2026-01-28T00:50:03.070338925Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:50:03.074599 containerd[1882]: time="2026-01-28T00:50:03.074549254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 00:50:03.074733 containerd[1882]: time="2026-01-28T00:50:03.074636385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 00:50:03.074805 kubelet[3415]: E0128 00:50:03.074756 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:50:03.074855 kubelet[3415]: E0128 00:50:03.074815 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:50:03.074981 kubelet[3415]: E0128 00:50:03.074903 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t2vd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzfbp_calico-system(bcaf25ee-c8ae-4368-867f-6ea868477814): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 00:50:03.076165 kubelet[3415]: E0128 00:50:03.076122 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814" Jan 28 00:50:03.275315 containerd[1882]: time="2026-01-28T00:50:03.275063708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:50:03.602817 containerd[1882]: time="2026-01-28T00:50:03.602768816Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:50:03.607014 containerd[1882]: time="2026-01-28T00:50:03.606957640Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:50:03.607090 containerd[1882]: time="2026-01-28T00:50:03.606990969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:50:03.607249 kubelet[3415]: E0128 00:50:03.607212 3415 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:50:03.607500 kubelet[3415]: E0128 00:50:03.607261 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:50:03.607500 kubelet[3415]: E0128 00:50:03.607383 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlsdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b468596cf-28ns6_calico-apiserver(098ac4f0-5200-473e-8687-26a347b0e3eb): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:50:03.608597 kubelet[3415]: E0128 00:50:03.608546 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" podUID="098ac4f0-5200-473e-8687-26a347b0e3eb" Jan 28 00:50:07.277860 kubelet[3415]: E0128 00:50:07.277690 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf746948-lbd7r" podUID="2b821f34-27f6-484c-9dd8-726df28b75d8" Jan 28 00:50:12.274867 kubelet[3415]: E0128 00:50:12.274821 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" podUID="77965b9f-fcdf-4986-8206-fcf9912f3435" Jan 28 00:50:14.274267 kubelet[3415]: E0128 00:50:14.274209 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-thw89" podUID="f44a3119-d828-410a-8e9a-303c84462c56" Jan 28 00:50:14.278461 kubelet[3415]: E0128 00:50:14.277475 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814" Jan 28 00:50:15.276320 kubelet[3415]: E0128 00:50:15.276057 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-646548674d-lzmnt" podUID="542feb1a-1c08-4a08-96ca-01553cfa6389" Jan 28 00:50:17.276528 kubelet[3415]: E0128 00:50:17.276486 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" podUID="098ac4f0-5200-473e-8687-26a347b0e3eb" Jan 28 00:50:20.276139 containerd[1882]: time="2026-01-28T00:50:20.276088337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:50:20.564847 containerd[1882]: time="2026-01-28T00:50:20.564583252Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:50:20.567875 containerd[1882]: time="2026-01-28T00:50:20.567784453Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:50:20.567875 containerd[1882]: time="2026-01-28T00:50:20.567837855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 00:50:20.568048 kubelet[3415]: E0128 00:50:20.568008 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:50:20.568383 kubelet[3415]: E0128 00:50:20.568057 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:50:20.568383 kubelet[3415]: E0128 00:50:20.568154 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:79d9755e231c4919900bab3892802a38,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-th6hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcf746948-lbd7r_calico-system(2b821f34-27f6-484c-9dd8-726df28b75d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 00:50:20.570649 containerd[1882]: time="2026-01-28T00:50:20.570617051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 
00:50:20.825420 containerd[1882]: time="2026-01-28T00:50:20.823267865Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:50:20.827204 containerd[1882]: time="2026-01-28T00:50:20.827106254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:50:20.827204 containerd[1882]: time="2026-01-28T00:50:20.827138655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 00:50:20.827639 kubelet[3415]: E0128 00:50:20.827440 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:50:20.827639 kubelet[3415]: E0128 00:50:20.827508 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:50:20.827639 kubelet[3415]: E0128 00:50:20.827608 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-th6hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcf746948-lbd7r_calico-system(2b821f34-27f6-484c-9dd8-726df28b75d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 00:50:20.829065 kubelet[3415]: E0128 00:50:20.828927 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf746948-lbd7r" podUID="2b821f34-27f6-484c-9dd8-726df28b75d8" Jan 28 00:50:25.277614 containerd[1882]: time="2026-01-28T00:50:25.277556699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 00:50:25.578654 containerd[1882]: time="2026-01-28T00:50:25.578514371Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:50:25.582210 containerd[1882]: time="2026-01-28T00:50:25.582160105Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 00:50:25.582301 containerd[1882]: time="2026-01-28T00:50:25.582258724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 00:50:25.582506 kubelet[3415]: E0128 00:50:25.582455 
3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:50:25.583223 kubelet[3415]: E0128 00:50:25.582513 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:50:25.583223 kubelet[3415]: E0128 00:50:25.582612 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t2vd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:
nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzfbp_calico-system(bcaf25ee-c8ae-4368-867f-6ea868477814): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 00:50:25.586247 containerd[1882]: time="2026-01-28T00:50:25.585899306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 00:50:25.859617 containerd[1882]: time="2026-01-28T00:50:25.856577791Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:50:25.867355 containerd[1882]: time="2026-01-28T00:50:25.867260073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 00:50:25.867355 containerd[1882]: time="2026-01-28T00:50:25.867318251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 00:50:25.867590 
kubelet[3415]: E0128 00:50:25.867502 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:50:25.867590 kubelet[3415]: E0128 00:50:25.867556 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:50:25.867701 kubelet[3415]: E0128 00:50:25.867649 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t2vd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzfbp_calico-system(bcaf25ee-c8ae-4368-867f-6ea868477814): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 00:50:25.869122 kubelet[3415]: E0128 00:50:25.869078 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814" Jan 28 00:50:27.280046 containerd[1882]: time="2026-01-28T00:50:27.279384536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:50:27.567530 containerd[1882]: time="2026-01-28T00:50:27.567281349Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:50:27.571523 containerd[1882]: time="2026-01-28T00:50:27.571478380Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:50:27.571587 containerd[1882]: time="2026-01-28T00:50:27.571478892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:50:27.571781 kubelet[3415]: E0128 00:50:27.571704 3415 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:50:27.571781 kubelet[3415]: E0128 00:50:27.571766 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:50:27.572725 kubelet[3415]: E0128 00:50:27.572674 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzxj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b468596cf-lg47w_calico-apiserver(77965b9f-fcdf-4986-8206-fcf9912f3435): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:50:27.573862 kubelet[3415]: E0128 00:50:27.573831 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" podUID="77965b9f-fcdf-4986-8206-fcf9912f3435" Jan 28 00:50:28.276813 containerd[1882]: time="2026-01-28T00:50:28.276764239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 00:50:28.556588 containerd[1882]: time="2026-01-28T00:50:28.556456372Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:50:28.561878 containerd[1882]: time="2026-01-28T00:50:28.561270070Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 00:50:28.562107 containerd[1882]: time="2026-01-28T00:50:28.561320647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 00:50:28.562268 kubelet[3415]: E0128 00:50:28.562228 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:50:28.562361 kubelet[3415]: E0128 00:50:28.562347 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:50:28.562868 kubelet[3415]: E0128 00:50:28.562550 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8r8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,Su
bPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-thw89_calico-system(f44a3119-d828-410a-8e9a-303c84462c56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 00:50:28.564700 kubelet[3415]: E0128 00:50:28.564667 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-thw89" podUID="f44a3119-d828-410a-8e9a-303c84462c56" Jan 28 00:50:29.275857 containerd[1882]: time="2026-01-28T00:50:29.275786151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:50:29.584554 containerd[1882]: time="2026-01-28T00:50:29.584414207Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:50:29.587921 containerd[1882]: time="2026-01-28T00:50:29.587857783Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:50:29.588119 containerd[1882]: time="2026-01-28T00:50:29.587973091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:50:29.588942 kubelet[3415]: E0128 00:50:29.588210 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:50:29.588942 kubelet[3415]: E0128 00:50:29.588281 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:50:29.588942 kubelet[3415]: E0128 00:50:29.588390 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlsdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b468596cf-28ns6_calico-apiserver(098ac4f0-5200-473e-8687-26a347b0e3eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:50:29.590460 kubelet[3415]: E0128 00:50:29.590415 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" podUID="098ac4f0-5200-473e-8687-26a347b0e3eb" Jan 28 00:50:30.274608 containerd[1882]: time="2026-01-28T00:50:30.274558097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 00:50:30.518890 containerd[1882]: 
time="2026-01-28T00:50:30.518702732Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:50:30.522186 containerd[1882]: time="2026-01-28T00:50:30.522080530Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 00:50:30.522186 containerd[1882]: time="2026-01-28T00:50:30.522157629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 00:50:30.522339 kubelet[3415]: E0128 00:50:30.522293 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:50:30.522418 kubelet[3415]: E0128 00:50:30.522341 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:50:30.522788 kubelet[3415]: E0128 00:50:30.522456 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p4wx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-646548674d-lzmnt_calico-system(542feb1a-1c08-4a08-96ca-01553cfa6389): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 00:50:30.524734 kubelet[3415]: E0128 00:50:30.524648 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-646548674d-lzmnt" podUID="542feb1a-1c08-4a08-96ca-01553cfa6389" Jan 28 00:50:31.279952 kubelet[3415]: E0128 00:50:31.278700 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf746948-lbd7r" podUID="2b821f34-27f6-484c-9dd8-726df28b75d8" Jan 28 00:50:38.275267 kubelet[3415]: E0128 00:50:38.275088 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814" Jan 28 00:50:41.278565 kubelet[3415]: E0128 00:50:41.277541 3415 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" podUID="77965b9f-fcdf-4986-8206-fcf9912f3435" Jan 28 00:50:41.279691 kubelet[3415]: E0128 00:50:41.279074 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" podUID="098ac4f0-5200-473e-8687-26a347b0e3eb" Jan 28 00:50:43.275867 kubelet[3415]: E0128 00:50:43.275827 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-thw89" podUID="f44a3119-d828-410a-8e9a-303c84462c56" Jan 28 00:50:44.276190 kubelet[3415]: E0128 00:50:44.276107 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf746948-lbd7r" podUID="2b821f34-27f6-484c-9dd8-726df28b75d8" Jan 28 00:50:45.275609 kubelet[3415]: E0128 00:50:45.275271 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-646548674d-lzmnt" podUID="542feb1a-1c08-4a08-96ca-01553cfa6389" Jan 28 00:50:51.277100 kubelet[3415]: E0128 00:50:51.277021 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814" Jan 28 00:50:55.275845 kubelet[3415]: E0128 00:50:55.275763 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" podUID="77965b9f-fcdf-4986-8206-fcf9912f3435" Jan 28 00:50:55.276325 kubelet[3415]: E0128 00:50:55.275871 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" podUID="098ac4f0-5200-473e-8687-26a347b0e3eb" Jan 28 00:50:57.276701 kubelet[3415]: E0128 00:50:57.276634 3415 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf746948-lbd7r" podUID="2b821f34-27f6-484c-9dd8-726df28b75d8" Jan 28 00:50:58.275252 kubelet[3415]: E0128 00:50:58.275203 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-646548674d-lzmnt" podUID="542feb1a-1c08-4a08-96ca-01553cfa6389" Jan 28 00:50:58.276127 kubelet[3415]: E0128 00:50:58.276093 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-thw89" podUID="f44a3119-d828-410a-8e9a-303c84462c56" Jan 28 00:51:04.275711 kubelet[3415]: E0128 00:51:04.275616 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814" Jan 28 00:51:07.274963 kubelet[3415]: E0128 00:51:07.274883 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" podUID="098ac4f0-5200-473e-8687-26a347b0e3eb" Jan 28 
00:51:09.275939 kubelet[3415]: E0128 00:51:09.275376 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-646548674d-lzmnt" podUID="542feb1a-1c08-4a08-96ca-01553cfa6389" Jan 28 00:51:10.274961 containerd[1882]: time="2026-01-28T00:51:10.274921515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:51:10.513099 containerd[1882]: time="2026-01-28T00:51:10.513049962Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:51:10.517670 containerd[1882]: time="2026-01-28T00:51:10.517618600Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:51:10.517787 containerd[1882]: time="2026-01-28T00:51:10.517704978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:51:10.518006 kubelet[3415]: E0128 00:51:10.517964 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:51:10.518885 
kubelet[3415]: E0128 00:51:10.518332 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:51:10.518885 kubelet[3415]: E0128 00:51:10.518549 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzxj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b468596cf-lg47w_calico-apiserver(77965b9f-fcdf-4986-8206-fcf9912f3435): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:51:10.519120 containerd[1882]: time="2026-01-28T00:51:10.518650534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 00:51:10.520722 kubelet[3415]: E0128 00:51:10.520677 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" podUID="77965b9f-fcdf-4986-8206-fcf9912f3435" Jan 28 00:51:10.808945 containerd[1882]: 
time="2026-01-28T00:51:10.808773870Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:51:10.813771 containerd[1882]: time="2026-01-28T00:51:10.813698710Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 00:51:10.813771 containerd[1882]: time="2026-01-28T00:51:10.813739631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 00:51:10.814821 kubelet[3415]: E0128 00:51:10.814775 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:51:10.814889 kubelet[3415]: E0128 00:51:10.814829 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:51:10.815087 kubelet[3415]: E0128 00:51:10.815045 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8r8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-thw89_calico-system(f44a3119-d828-410a-8e9a-303c84462c56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 00:51:10.815524 containerd[1882]: time="2026-01-28T00:51:10.815500779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:51:10.816572 kubelet[3415]: E0128 00:51:10.816537 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-thw89" podUID="f44a3119-d828-410a-8e9a-303c84462c56" Jan 28 00:51:11.107895 containerd[1882]: time="2026-01-28T00:51:11.107400191Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Jan 28 00:51:11.111388 containerd[1882]: time="2026-01-28T00:51:11.111281209Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:51:11.111388 containerd[1882]: time="2026-01-28T00:51:11.111332859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 00:51:11.111683 kubelet[3415]: E0128 00:51:11.111644 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:51:11.111796 kubelet[3415]: E0128 00:51:11.111779 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:51:11.112193 kubelet[3415]: E0128 00:51:11.111947 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:79d9755e231c4919900bab3892802a38,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-th6hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcf746948-lbd7r_calico-system(2b821f34-27f6-484c-9dd8-726df28b75d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 00:51:11.115038 containerd[1882]: time="2026-01-28T00:51:11.115012463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 
00:51:11.390294 containerd[1882]: time="2026-01-28T00:51:11.390046419Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:51:11.395133 containerd[1882]: time="2026-01-28T00:51:11.395003748Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:51:11.395133 containerd[1882]: time="2026-01-28T00:51:11.395109255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 00:51:11.395442 kubelet[3415]: E0128 00:51:11.395398 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:51:11.395557 kubelet[3415]: E0128 00:51:11.395542 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:51:11.395723 kubelet[3415]: E0128 00:51:11.395696 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-th6hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcf746948-lbd7r_calico-system(2b821f34-27f6-484c-9dd8-726df28b75d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 00:51:11.397045 kubelet[3415]: E0128 00:51:11.396980 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf746948-lbd7r" podUID="2b821f34-27f6-484c-9dd8-726df28b75d8" Jan 28 00:51:17.275309 containerd[1882]: time="2026-01-28T00:51:17.275229675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 00:51:17.539806 containerd[1882]: time="2026-01-28T00:51:17.539644079Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:51:17.544725 containerd[1882]: time="2026-01-28T00:51:17.543962893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 00:51:17.545261 containerd[1882]: time="2026-01-28T00:51:17.544009295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 00:51:17.545341 kubelet[3415]: E0128 00:51:17.545058 
3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:51:17.545341 kubelet[3415]: E0128 00:51:17.545105 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:51:17.545341 kubelet[3415]: E0128 00:51:17.545215 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t2vd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:
nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzfbp_calico-system(bcaf25ee-c8ae-4368-867f-6ea868477814): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 00:51:17.547451 containerd[1882]: time="2026-01-28T00:51:17.547424115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 00:51:17.814019 containerd[1882]: time="2026-01-28T00:51:17.813053577Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:51:17.816649 containerd[1882]: time="2026-01-28T00:51:17.816535759Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 00:51:17.816649 containerd[1882]: time="2026-01-28T00:51:17.816609914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 00:51:17.816877 
kubelet[3415]: E0128 00:51:17.816778 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:51:17.816877 kubelet[3415]: E0128 00:51:17.816826 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:51:17.817149 kubelet[3415]: E0128 00:51:17.816945 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t2vd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzfbp_calico-system(bcaf25ee-c8ae-4368-867f-6ea868477814): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 00:51:17.818360 kubelet[3415]: E0128 00:51:17.818313 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814" Jan 28 00:51:18.275225 containerd[1882]: time="2026-01-28T00:51:18.275185328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:51:18.536650 containerd[1882]: time="2026-01-28T00:51:18.536527305Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:51:18.541091 containerd[1882]: time="2026-01-28T00:51:18.541036309Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:51:18.541502 containerd[1882]: time="2026-01-28T00:51:18.541055462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:51:18.541710 kubelet[3415]: E0128 00:51:18.541663 3415 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:51:18.541876 kubelet[3415]: E0128 00:51:18.541720 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:51:18.541876 kubelet[3415]: E0128 00:51:18.541826 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlsdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b468596cf-28ns6_calico-apiserver(098ac4f0-5200-473e-8687-26a347b0e3eb): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:51:18.543887 kubelet[3415]: E0128 00:51:18.543844 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" podUID="098ac4f0-5200-473e-8687-26a347b0e3eb" Jan 28 00:51:21.277980 kubelet[3415]: E0128 00:51:21.277896 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-thw89" podUID="f44a3119-d828-410a-8e9a-303c84462c56" Jan 28 00:51:21.720195 systemd[1]: Started sshd@7-10.200.20.30:22-10.200.16.10:55988.service - OpenSSH per-connection server daemon (10.200.16.10:55988). Jan 28 00:51:22.187185 sshd[5592]: Accepted publickey for core from 10.200.16.10 port 55988 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:51:22.188877 sshd-session[5592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:51:22.192856 systemd-logind[1867]: New session 10 of user core. Jan 28 00:51:22.200052 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 28 00:51:22.603253 sshd[5595]: Connection closed by 10.200.16.10 port 55988 Jan 28 00:51:22.603129 sshd-session[5592]: pam_unix(sshd:session): session closed for user core Jan 28 00:51:22.608071 systemd[1]: sshd@7-10.200.20.30:22-10.200.16.10:55988.service: Deactivated successfully. Jan 28 00:51:22.611036 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 00:51:22.613187 systemd-logind[1867]: Session 10 logged out. Waiting for processes to exit. Jan 28 00:51:22.615332 systemd-logind[1867]: Removed session 10. Jan 28 00:51:23.275692 kubelet[3415]: E0128 00:51:23.275572 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" podUID="77965b9f-fcdf-4986-8206-fcf9912f3435" Jan 28 00:51:23.276447 containerd[1882]: time="2026-01-28T00:51:23.276206235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 00:51:23.567493 containerd[1882]: time="2026-01-28T00:51:23.567285064Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:51:23.570896 containerd[1882]: time="2026-01-28T00:51:23.570807313Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 00:51:23.570896 containerd[1882]: time="2026-01-28T00:51:23.570861987Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 00:51:23.571226 kubelet[3415]: E0128 00:51:23.571173 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:51:23.571281 kubelet[3415]: E0128 00:51:23.571232 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:51:23.571374 kubelet[3415]: E0128 00:51:23.571336 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p4wx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-646548674d-lzmnt_calico-system(542feb1a-1c08-4a08-96ca-01553cfa6389): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 00:51:23.572787 kubelet[3415]: E0128 00:51:23.572749 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-646548674d-lzmnt" podUID="542feb1a-1c08-4a08-96ca-01553cfa6389" Jan 28 00:51:26.277065 kubelet[3415]: E0128 00:51:26.276997 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf746948-lbd7r" podUID="2b821f34-27f6-484c-9dd8-726df28b75d8" Jan 28 00:51:27.689706 systemd[1]: Started sshd@8-10.200.20.30:22-10.200.16.10:55998.service - OpenSSH per-connection server daemon (10.200.16.10:55998). Jan 28 00:51:28.178595 sshd[5609]: Accepted publickey for core from 10.200.16.10 port 55998 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:51:28.179783 sshd-session[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:51:28.183835 systemd-logind[1867]: New session 11 of user core. Jan 28 00:51:28.188064 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 28 00:51:28.573540 sshd[5612]: Connection closed by 10.200.16.10 port 55998 Jan 28 00:51:28.574175 sshd-session[5609]: pam_unix(sshd:session): session closed for user core Jan 28 00:51:28.577647 systemd[1]: sshd@8-10.200.20.30:22-10.200.16.10:55998.service: Deactivated successfully. Jan 28 00:51:28.580582 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 00:51:28.581536 systemd-logind[1867]: Session 11 logged out. Waiting for processes to exit. Jan 28 00:51:28.583447 systemd-logind[1867]: Removed session 11. 
Jan 28 00:51:29.277607 kubelet[3415]: E0128 00:51:29.277211 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" podUID="098ac4f0-5200-473e-8687-26a347b0e3eb" Jan 28 00:51:31.276358 kubelet[3415]: E0128 00:51:31.276253 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814" Jan 28 00:51:33.658315 systemd[1]: Started sshd@9-10.200.20.30:22-10.200.16.10:46034.service - OpenSSH per-connection server daemon (10.200.16.10:46034). 
Jan 28 00:51:34.117301 sshd[5624]: Accepted publickey for core from 10.200.16.10 port 46034 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:51:34.120148 sshd-session[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:51:34.124223 systemd-logind[1867]: New session 12 of user core.
Jan 28 00:51:34.130168 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 28 00:51:34.556190 sshd[5627]: Connection closed by 10.200.16.10 port 46034
Jan 28 00:51:34.556756 sshd-session[5624]: pam_unix(sshd:session): session closed for user core
Jan 28 00:51:34.560936 systemd[1]: sshd@9-10.200.20.30:22-10.200.16.10:46034.service: Deactivated successfully.
Jan 28 00:51:34.564781 systemd[1]: session-12.scope: Deactivated successfully.
Jan 28 00:51:34.566820 systemd-logind[1867]: Session 12 logged out. Waiting for processes to exit.
Jan 28 00:51:34.568238 systemd-logind[1867]: Removed session 12.
Jan 28 00:51:35.277444 kubelet[3415]: E0128 00:51:35.276990 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-thw89" podUID="f44a3119-d828-410a-8e9a-303c84462c56"
Jan 28 00:51:38.275610 kubelet[3415]: E0128 00:51:38.275560 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" podUID="77965b9f-fcdf-4986-8206-fcf9912f3435"
Jan 28 00:51:38.276063 kubelet[3415]: E0128 00:51:38.275760 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-646548674d-lzmnt" podUID="542feb1a-1c08-4a08-96ca-01553cfa6389"
Jan 28 00:51:39.643780 systemd[1]: Started sshd@10-10.200.20.30:22-10.200.16.10:54258.service - OpenSSH per-connection server daemon (10.200.16.10:54258).
Jan 28 00:51:40.152904 sshd[5677]: Accepted publickey for core from 10.200.16.10 port 54258 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:51:40.154870 sshd-session[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:51:40.163416 systemd-logind[1867]: New session 13 of user core.
Jan 28 00:51:40.171307 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 28 00:51:40.557947 sshd[5680]: Connection closed by 10.200.16.10 port 54258
Jan 28 00:51:40.558778 sshd-session[5677]: pam_unix(sshd:session): session closed for user core
Jan 28 00:51:40.563061 systemd-logind[1867]: Session 13 logged out. Waiting for processes to exit.
Jan 28 00:51:40.563226 systemd[1]: sshd@10-10.200.20.30:22-10.200.16.10:54258.service: Deactivated successfully.
Jan 28 00:51:40.565259 systemd[1]: session-13.scope: Deactivated successfully.
Jan 28 00:51:40.570323 systemd-logind[1867]: Removed session 13.
Jan 28 00:51:41.276305 kubelet[3415]: E0128 00:51:41.276251 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" podUID="098ac4f0-5200-473e-8687-26a347b0e3eb"
Jan 28 00:51:41.277201 kubelet[3415]: E0128 00:51:41.276327 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf746948-lbd7r" podUID="2b821f34-27f6-484c-9dd8-726df28b75d8"
Jan 28 00:51:42.276742 kubelet[3415]: E0128 00:51:42.276281 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814"
Jan 28 00:51:45.643614 systemd[1]: Started sshd@11-10.200.20.30:22-10.200.16.10:54266.service - OpenSSH per-connection server daemon (10.200.16.10:54266).
Jan 28 00:51:46.100691 sshd[5693]: Accepted publickey for core from 10.200.16.10 port 54266 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:51:46.100511 sshd-session[5693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:51:46.104387 systemd-logind[1867]: New session 14 of user core.
Jan 28 00:51:46.112245 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 28 00:51:46.492155 sshd[5696]: Connection closed by 10.200.16.10 port 54266
Jan 28 00:51:46.492806 sshd-session[5693]: pam_unix(sshd:session): session closed for user core
Jan 28 00:51:46.498974 systemd-logind[1867]: Session 14 logged out. Waiting for processes to exit.
Jan 28 00:51:46.499201 systemd[1]: sshd@11-10.200.20.30:22-10.200.16.10:54266.service: Deactivated successfully.
Jan 28 00:51:46.501883 systemd[1]: session-14.scope: Deactivated successfully.
Jan 28 00:51:46.505802 systemd-logind[1867]: Removed session 14.
Jan 28 00:51:47.274939 kubelet[3415]: E0128 00:51:47.274438 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-thw89" podUID="f44a3119-d828-410a-8e9a-303c84462c56"
Jan 28 00:51:51.276313 kubelet[3415]: E0128 00:51:51.275972 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-646548674d-lzmnt" podUID="542feb1a-1c08-4a08-96ca-01553cfa6389"
Jan 28 00:51:51.277437 kubelet[3415]: E0128 00:51:51.277393 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" podUID="77965b9f-fcdf-4986-8206-fcf9912f3435"
Jan 28 00:51:51.575109 systemd[1]: Started sshd@12-10.200.20.30:22-10.200.16.10:50102.service - OpenSSH per-connection server daemon (10.200.16.10:50102).
Jan 28 00:51:52.033374 sshd[5709]: Accepted publickey for core from 10.200.16.10 port 50102 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:51:52.034997 sshd-session[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:51:52.038508 systemd-logind[1867]: New session 15 of user core.
Jan 28 00:51:52.045037 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 28 00:51:52.274173 kubelet[3415]: E0128 00:51:52.274135 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" podUID="098ac4f0-5200-473e-8687-26a347b0e3eb"
Jan 28 00:51:52.406304 sshd[5712]: Connection closed by 10.200.16.10 port 50102
Jan 28 00:51:52.407008 sshd-session[5709]: pam_unix(sshd:session): session closed for user core
Jan 28 00:51:52.411263 systemd[1]: sshd@12-10.200.20.30:22-10.200.16.10:50102.service: Deactivated successfully.
Jan 28 00:51:52.413514 systemd[1]: session-15.scope: Deactivated successfully.
Jan 28 00:51:52.414752 systemd-logind[1867]: Session 15 logged out. Waiting for processes to exit.
Jan 28 00:51:52.417109 systemd-logind[1867]: Removed session 15.
Jan 28 00:51:52.489232 systemd[1]: Started sshd@13-10.200.20.30:22-10.200.16.10:50114.service - OpenSSH per-connection server daemon (10.200.16.10:50114).
Jan 28 00:51:52.944949 sshd[5725]: Accepted publickey for core from 10.200.16.10 port 50114 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:51:52.946531 sshd-session[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:51:52.951968 systemd-logind[1867]: New session 16 of user core.
Jan 28 00:51:52.956786 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 28 00:51:53.369730 sshd[5728]: Connection closed by 10.200.16.10 port 50114
Jan 28 00:51:53.370545 sshd-session[5725]: pam_unix(sshd:session): session closed for user core
Jan 28 00:51:53.375504 systemd-logind[1867]: Session 16 logged out. Waiting for processes to exit.
Jan 28 00:51:53.375782 systemd[1]: sshd@13-10.200.20.30:22-10.200.16.10:50114.service: Deactivated successfully.
Jan 28 00:51:53.377727 systemd[1]: session-16.scope: Deactivated successfully.
Jan 28 00:51:53.379647 systemd-logind[1867]: Removed session 16.
Jan 28 00:51:53.451007 systemd[1]: Started sshd@14-10.200.20.30:22-10.200.16.10:50122.service - OpenSSH per-connection server daemon (10.200.16.10:50122).
Jan 28 00:51:53.910969 sshd[5738]: Accepted publickey for core from 10.200.16.10 port 50122 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:51:53.913041 sshd-session[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:51:53.920829 systemd-logind[1867]: New session 17 of user core.
Jan 28 00:51:53.928063 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 28 00:51:54.291357 sshd[5741]: Connection closed by 10.200.16.10 port 50122
Jan 28 00:51:54.291128 sshd-session[5738]: pam_unix(sshd:session): session closed for user core
Jan 28 00:51:54.295538 systemd[1]: sshd@14-10.200.20.30:22-10.200.16.10:50122.service: Deactivated successfully.
Jan 28 00:51:54.297392 systemd[1]: session-17.scope: Deactivated successfully.
Jan 28 00:51:54.299692 systemd-logind[1867]: Session 17 logged out. Waiting for processes to exit.
Jan 28 00:51:54.302524 systemd-logind[1867]: Removed session 17.
Jan 28 00:51:55.280026 kubelet[3415]: E0128 00:51:55.279865 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814"
Jan 28 00:51:55.280026 kubelet[3415]: E0128 00:51:55.279991 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf746948-lbd7r" podUID="2b821f34-27f6-484c-9dd8-726df28b75d8"
Jan 28 00:51:59.378100 systemd[1]: Started sshd@15-10.200.20.30:22-10.200.16.10:50126.service - OpenSSH per-connection server daemon (10.200.16.10:50126).
Jan 28 00:51:59.868730 sshd[5755]: Accepted publickey for core from 10.200.16.10 port 50126 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:51:59.870181 sshd-session[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:51:59.873982 systemd-logind[1867]: New session 18 of user core.
Jan 28 00:51:59.881054 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 28 00:52:00.257226 sshd[5758]: Connection closed by 10.200.16.10 port 50126
Jan 28 00:52:00.257901 sshd-session[5755]: pam_unix(sshd:session): session closed for user core
Jan 28 00:52:00.263266 systemd[1]: sshd@15-10.200.20.30:22-10.200.16.10:50126.service: Deactivated successfully.
Jan 28 00:52:00.267249 systemd[1]: session-18.scope: Deactivated successfully.
Jan 28 00:52:00.268931 systemd-logind[1867]: Session 18 logged out. Waiting for processes to exit.
Jan 28 00:52:00.273178 systemd-logind[1867]: Removed session 18.
Jan 28 00:52:01.275080 kubelet[3415]: E0128 00:52:01.275027 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-thw89" podUID="f44a3119-d828-410a-8e9a-303c84462c56"
Jan 28 00:52:03.275540 kubelet[3415]: E0128 00:52:03.274780 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" podUID="098ac4f0-5200-473e-8687-26a347b0e3eb"
Jan 28 00:52:04.274253 kubelet[3415]: E0128 00:52:04.274209 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-646548674d-lzmnt" podUID="542feb1a-1c08-4a08-96ca-01553cfa6389"
Jan 28 00:52:05.341510 systemd[1]: Started sshd@16-10.200.20.30:22-10.200.16.10:33418.service - OpenSSH per-connection server daemon (10.200.16.10:33418).
Jan 28 00:52:05.799426 sshd[5773]: Accepted publickey for core from 10.200.16.10 port 33418 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:52:05.801484 sshd-session[5773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:52:05.809134 systemd-logind[1867]: New session 19 of user core.
Jan 28 00:52:05.815448 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 28 00:52:06.184068 sshd[5776]: Connection closed by 10.200.16.10 port 33418
Jan 28 00:52:06.185172 sshd-session[5773]: pam_unix(sshd:session): session closed for user core
Jan 28 00:52:06.188687 systemd[1]: sshd@16-10.200.20.30:22-10.200.16.10:33418.service: Deactivated successfully.
Jan 28 00:52:06.190858 systemd[1]: session-19.scope: Deactivated successfully.
Jan 28 00:52:06.191687 systemd-logind[1867]: Session 19 logged out. Waiting for processes to exit.
Jan 28 00:52:06.193346 systemd-logind[1867]: Removed session 19.
Jan 28 00:52:06.275421 kubelet[3415]: E0128 00:52:06.275384 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" podUID="77965b9f-fcdf-4986-8206-fcf9912f3435"
Jan 28 00:52:06.276109 kubelet[3415]: E0128 00:52:06.275550 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814"
Jan 28 00:52:08.274622 kubelet[3415]: E0128 00:52:08.274573 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf746948-lbd7r" podUID="2b821f34-27f6-484c-9dd8-726df28b75d8"
Jan 28 00:52:11.276886 systemd[1]: Started sshd@17-10.200.20.30:22-10.200.16.10:44518.service - OpenSSH per-connection server daemon (10.200.16.10:44518).
Jan 28 00:52:11.772386 sshd[5814]: Accepted publickey for core from 10.200.16.10 port 44518 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:52:11.774392 sshd-session[5814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:52:11.778104 systemd-logind[1867]: New session 20 of user core.
Jan 28 00:52:11.783047 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 28 00:52:12.173355 sshd[5817]: Connection closed by 10.200.16.10 port 44518
Jan 28 00:52:12.173183 sshd-session[5814]: pam_unix(sshd:session): session closed for user core
Jan 28 00:52:12.180544 systemd[1]: sshd@17-10.200.20.30:22-10.200.16.10:44518.service: Deactivated successfully.
Jan 28 00:52:12.183676 systemd[1]: session-20.scope: Deactivated successfully.
Jan 28 00:52:12.185573 systemd-logind[1867]: Session 20 logged out. Waiting for processes to exit.
Jan 28 00:52:12.191148 systemd-logind[1867]: Removed session 20.
Jan 28 00:52:15.275861 kubelet[3415]: E0128 00:52:15.275809 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-thw89" podUID="f44a3119-d828-410a-8e9a-303c84462c56"
Jan 28 00:52:17.258672 systemd[1]: Started sshd@18-10.200.20.30:22-10.200.16.10:44530.service - OpenSSH per-connection server daemon (10.200.16.10:44530).
Jan 28 00:52:17.276851 kubelet[3415]: E0128 00:52:17.276278 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" podUID="098ac4f0-5200-473e-8687-26a347b0e3eb"
Jan 28 00:52:17.726456 sshd[5830]: Accepted publickey for core from 10.200.16.10 port 44530 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:52:17.728149 sshd-session[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:52:17.732043 systemd-logind[1867]: New session 21 of user core.
Jan 28 00:52:17.736072 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 28 00:52:18.101159 sshd[5833]: Connection closed by 10.200.16.10 port 44530
Jan 28 00:52:18.101537 sshd-session[5830]: pam_unix(sshd:session): session closed for user core
Jan 28 00:52:18.106187 systemd-logind[1867]: Session 21 logged out. Waiting for processes to exit.
Jan 28 00:52:18.106558 systemd[1]: sshd@18-10.200.20.30:22-10.200.16.10:44530.service: Deactivated successfully.
Jan 28 00:52:18.111427 systemd[1]: session-21.scope: Deactivated successfully.
Jan 28 00:52:18.114471 systemd-logind[1867]: Removed session 21.
Jan 28 00:52:18.274364 kubelet[3415]: E0128 00:52:18.274301 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-646548674d-lzmnt" podUID="542feb1a-1c08-4a08-96ca-01553cfa6389"
Jan 28 00:52:20.274593 kubelet[3415]: E0128 00:52:20.274548 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" podUID="77965b9f-fcdf-4986-8206-fcf9912f3435"
Jan 28 00:52:21.276094 kubelet[3415]: E0128 00:52:21.275610 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814"
Jan 28 00:52:23.191103 systemd[1]: Started sshd@19-10.200.20.30:22-10.200.16.10:41034.service - OpenSSH per-connection server daemon (10.200.16.10:41034).
Jan 28 00:52:23.276360 kubelet[3415]: E0128 00:52:23.276299 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf746948-lbd7r" podUID="2b821f34-27f6-484c-9dd8-726df28b75d8"
Jan 28 00:52:23.690062 sshd[5852]: Accepted publickey for core from 10.200.16.10 port 41034 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:52:23.691663 sshd-session[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:52:23.695440 systemd-logind[1867]: New session 22 of user core.
Jan 28 00:52:23.701055 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 28 00:52:24.084887 sshd[5855]: Connection closed by 10.200.16.10 port 41034
Jan 28 00:52:24.085486 sshd-session[5852]: pam_unix(sshd:session): session closed for user core
Jan 28 00:52:24.089228 systemd-logind[1867]: Session 22 logged out. Waiting for processes to exit.
Jan 28 00:52:24.089940 systemd[1]: sshd@19-10.200.20.30:22-10.200.16.10:41034.service: Deactivated successfully.
Jan 28 00:52:24.093847 systemd[1]: session-22.scope: Deactivated successfully.
Jan 28 00:52:24.095644 systemd-logind[1867]: Removed session 22.
Jan 28 00:52:24.175100 systemd[1]: Started sshd@20-10.200.20.30:22-10.200.16.10:41042.service - OpenSSH per-connection server daemon (10.200.16.10:41042).
Jan 28 00:52:24.679763 sshd[5866]: Accepted publickey for core from 10.200.16.10 port 41042 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:52:24.680946 sshd-session[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:52:24.686965 systemd-logind[1867]: New session 23 of user core.
Jan 28 00:52:24.694061 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 28 00:52:25.214963 sshd[5869]: Connection closed by 10.200.16.10 port 41042
Jan 28 00:52:25.215859 sshd-session[5866]: pam_unix(sshd:session): session closed for user core
Jan 28 00:52:25.219324 systemd-logind[1867]: Session 23 logged out. Waiting for processes to exit.
Jan 28 00:52:25.219712 systemd[1]: sshd@20-10.200.20.30:22-10.200.16.10:41042.service: Deactivated successfully.
Jan 28 00:52:25.222969 systemd[1]: session-23.scope: Deactivated successfully.
Jan 28 00:52:25.226247 systemd-logind[1867]: Removed session 23.
Jan 28 00:52:25.304290 systemd[1]: Started sshd@21-10.200.20.30:22-10.200.16.10:41052.service - OpenSSH per-connection server daemon (10.200.16.10:41052).
Jan 28 00:52:25.768817 sshd[5880]: Accepted publickey for core from 10.200.16.10 port 41052 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:52:25.769987 sshd-session[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:52:25.773928 systemd-logind[1867]: New session 24 of user core.
Jan 28 00:52:25.778049 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 28 00:52:26.799052 sshd[5883]: Connection closed by 10.200.16.10 port 41052
Jan 28 00:52:26.799391 sshd-session[5880]: pam_unix(sshd:session): session closed for user core
Jan 28 00:52:26.803427 systemd-logind[1867]: Session 24 logged out. Waiting for processes to exit.
Jan 28 00:52:26.803543 systemd[1]: sshd@21-10.200.20.30:22-10.200.16.10:41052.service: Deactivated successfully.
Jan 28 00:52:26.807630 systemd[1]: session-24.scope: Deactivated successfully.
Jan 28 00:52:26.810679 systemd-logind[1867]: Removed session 24.
Jan 28 00:52:26.882276 systemd[1]: Started sshd@22-10.200.20.30:22-10.200.16.10:41062.service - OpenSSH per-connection server daemon (10.200.16.10:41062).
Jan 28 00:52:27.344092 sshd[5908]: Accepted publickey for core from 10.200.16.10 port 41062 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU
Jan 28 00:52:27.345255 sshd-session[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 00:52:27.350165 systemd-logind[1867]: New session 25 of user core.
Jan 28 00:52:27.357030 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 28 00:52:27.803219 sshd[5911]: Connection closed by 10.200.16.10 port 41062
Jan 28 00:52:27.803999 sshd-session[5908]: pam_unix(sshd:session): session closed for user core
Jan 28 00:52:27.808409 systemd[1]: sshd@22-10.200.20.30:22-10.200.16.10:41062.service: Deactivated successfully.
Jan 28 00:52:27.809865 systemd[1]: session-25.scope: Deactivated successfully.
Jan 28 00:52:27.813323 systemd-logind[1867]: Session 25 logged out. Waiting for processes to exit.
Jan 28 00:52:27.815070 systemd-logind[1867]: Removed session 25.
Jan 28 00:52:27.895121 systemd[1]: Started sshd@23-10.200.20.30:22-10.200.16.10:41070.service - OpenSSH per-connection server daemon (10.200.16.10:41070).
Jan 28 00:52:28.274112 kubelet[3415]: E0128 00:52:28.274069 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" podUID="098ac4f0-5200-473e-8687-26a347b0e3eb" Jan 28 00:52:28.359425 sshd[5924]: Accepted publickey for core from 10.200.16.10 port 41070 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:52:28.360575 sshd-session[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:52:28.368503 systemd-logind[1867]: New session 26 of user core. Jan 28 00:52:28.371066 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 28 00:52:28.729879 sshd[5927]: Connection closed by 10.200.16.10 port 41070 Jan 28 00:52:28.730424 sshd-session[5924]: pam_unix(sshd:session): session closed for user core Jan 28 00:52:28.733367 systemd-logind[1867]: Session 26 logged out. Waiting for processes to exit. Jan 28 00:52:28.733483 systemd[1]: sshd@23-10.200.20.30:22-10.200.16.10:41070.service: Deactivated successfully. Jan 28 00:52:28.735190 systemd[1]: session-26.scope: Deactivated successfully. Jan 28 00:52:28.738590 systemd-logind[1867]: Removed session 26. 
Jan 28 00:52:29.277831 kubelet[3415]: E0128 00:52:29.277797 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-thw89" podUID="f44a3119-d828-410a-8e9a-303c84462c56" Jan 28 00:52:29.279365 kubelet[3415]: E0128 00:52:29.279074 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-646548674d-lzmnt" podUID="542feb1a-1c08-4a08-96ca-01553cfa6389" Jan 28 00:52:33.826409 systemd[1]: Started sshd@24-10.200.20.30:22-10.200.16.10:49336.service - OpenSSH per-connection server daemon (10.200.16.10:49336). 
Jan 28 00:52:34.275530 containerd[1882]: time="2026-01-28T00:52:34.275483834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:52:34.276901 kubelet[3415]: E0128 00:52:34.276632 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814" Jan 28 00:52:34.326438 sshd[5940]: Accepted publickey for core from 10.200.16.10 port 49336 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:52:34.327371 sshd-session[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:52:34.336038 systemd-logind[1867]: New session 27 of user core. Jan 28 00:52:34.344085 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 28 00:52:34.570012 containerd[1882]: time="2026-01-28T00:52:34.569089465Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:52:34.573021 containerd[1882]: time="2026-01-28T00:52:34.572970514Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:52:34.573189 containerd[1882]: time="2026-01-28T00:52:34.573055388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 00:52:34.573376 kubelet[3415]: E0128 00:52:34.573304 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:52:34.573376 kubelet[3415]: E0128 00:52:34.573356 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:52:34.573803 kubelet[3415]: E0128 00:52:34.573643 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:79d9755e231c4919900bab3892802a38,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-th6hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcf746948-lbd7r_calico-system(2b821f34-27f6-484c-9dd8-726df28b75d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 00:52:34.574087 containerd[1882]: time="2026-01-28T00:52:34.574063524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:52:34.751028 
sshd[5945]: Connection closed by 10.200.16.10 port 49336 Jan 28 00:52:34.752121 sshd-session[5940]: pam_unix(sshd:session): session closed for user core Jan 28 00:52:34.756896 systemd-logind[1867]: Session 27 logged out. Waiting for processes to exit. Jan 28 00:52:34.757547 systemd[1]: sshd@24-10.200.20.30:22-10.200.16.10:49336.service: Deactivated successfully. Jan 28 00:52:34.762179 systemd[1]: session-27.scope: Deactivated successfully. Jan 28 00:52:34.765983 systemd-logind[1867]: Removed session 27. Jan 28 00:52:34.829943 containerd[1882]: time="2026-01-28T00:52:34.829305815Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:52:34.835635 containerd[1882]: time="2026-01-28T00:52:34.835408605Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:52:34.835635 containerd[1882]: time="2026-01-28T00:52:34.835461047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:52:34.836179 kubelet[3415]: E0128 00:52:34.835991 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:52:34.836179 kubelet[3415]: E0128 00:52:34.836061 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:52:34.836444 kubelet[3415]: E0128 00:52:34.836253 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pzxj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b468596cf-lg47w_calico-apiserver(77965b9f-fcdf-4986-8206-fcf9912f3435): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:52:34.837141 containerd[1882]: time="2026-01-28T00:52:34.837106994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 00:52:34.837777 kubelet[3415]: E0128 00:52:34.837751 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" podUID="77965b9f-fcdf-4986-8206-fcf9912f3435" Jan 28 00:52:35.092243 containerd[1882]: 
time="2026-01-28T00:52:35.092087109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:52:35.096996 containerd[1882]: time="2026-01-28T00:52:35.096758318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:52:35.096996 containerd[1882]: time="2026-01-28T00:52:35.096814280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 00:52:35.099083 kubelet[3415]: E0128 00:52:35.099018 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:52:35.099176 kubelet[3415]: E0128 00:52:35.099090 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:52:35.099616 kubelet[3415]: E0128 00:52:35.099379 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-th6hl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5fcf746948-lbd7r_calico-system(2b821f34-27f6-484c-9dd8-726df28b75d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 00:52:35.100886 kubelet[3415]: E0128 00:52:35.100840 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf746948-lbd7r" podUID="2b821f34-27f6-484c-9dd8-726df28b75d8" Jan 28 00:52:39.833450 systemd[1]: Started sshd@25-10.200.20.30:22-10.200.16.10:38664.service - OpenSSH per-connection server daemon (10.200.16.10:38664). Jan 28 00:52:40.289814 sshd[5982]: Accepted publickey for core from 10.200.16.10 port 38664 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:52:40.291123 sshd-session[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:52:40.294780 systemd-logind[1867]: New session 28 of user core. Jan 28 00:52:40.303091 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 28 00:52:40.666381 sshd[5985]: Connection closed by 10.200.16.10 port 38664 Jan 28 00:52:40.667082 sshd-session[5982]: pam_unix(sshd:session): session closed for user core Jan 28 00:52:40.671830 systemd-logind[1867]: Session 28 logged out. Waiting for processes to exit. 
Jan 28 00:52:40.672495 systemd[1]: sshd@25-10.200.20.30:22-10.200.16.10:38664.service: Deactivated successfully. Jan 28 00:52:40.677682 systemd[1]: session-28.scope: Deactivated successfully. Jan 28 00:52:40.682817 systemd-logind[1867]: Removed session 28. Jan 28 00:52:42.277216 containerd[1882]: time="2026-01-28T00:52:42.277097632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 00:52:42.517893 containerd[1882]: time="2026-01-28T00:52:42.517838507Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:52:42.523976 containerd[1882]: time="2026-01-28T00:52:42.523896157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 00:52:42.524145 containerd[1882]: time="2026-01-28T00:52:42.523930246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 00:52:42.524547 kubelet[3415]: E0128 00:52:42.524293 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:52:42.524547 kubelet[3415]: E0128 00:52:42.524348 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:52:42.525876 kubelet[3415]: E0128 
00:52:42.525538 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t8r8t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-thw89_calico-system(f44a3119-d828-410a-8e9a-303c84462c56): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 00:52:42.527134 kubelet[3415]: E0128 00:52:42.527095 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-thw89" podUID="f44a3119-d828-410a-8e9a-303c84462c56" Jan 28 00:52:43.275704 containerd[1882]: time="2026-01-28T00:52:43.274732225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:52:43.550708 containerd[1882]: time="2026-01-28T00:52:43.550565654Z" level=info msg="fetch failed after status: 404 
Not Found" host=ghcr.io Jan 28 00:52:43.557104 containerd[1882]: time="2026-01-28T00:52:43.557048307Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:52:43.557190 containerd[1882]: time="2026-01-28T00:52:43.557154807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 00:52:43.557389 kubelet[3415]: E0128 00:52:43.557344 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:52:43.557631 kubelet[3415]: E0128 00:52:43.557398 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:52:43.557631 kubelet[3415]: E0128 00:52:43.557534 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tlsdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6b468596cf-28ns6_calico-apiserver(098ac4f0-5200-473e-8687-26a347b0e3eb): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:52:43.558946 kubelet[3415]: E0128 00:52:43.558825 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" podUID="098ac4f0-5200-473e-8687-26a347b0e3eb" Jan 28 00:52:44.274481 containerd[1882]: time="2026-01-28T00:52:44.274409233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 00:52:44.549987 containerd[1882]: time="2026-01-28T00:52:44.549680604Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:52:44.553698 containerd[1882]: time="2026-01-28T00:52:44.553610460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 00:52:44.553698 containerd[1882]: time="2026-01-28T00:52:44.553651381Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 00:52:44.555039 kubelet[3415]: E0128 00:52:44.554648 3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:52:44.555039 kubelet[3415]: E0128 00:52:44.554710 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:52:44.555039 kubelet[3415]: E0128 00:52:44.554814 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p4wx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},
},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-646548674d-lzmnt_calico-system(542feb1a-1c08-4a08-96ca-01553cfa6389): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 00:52:44.556346 kubelet[3415]: E0128 00:52:44.556314 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-646548674d-lzmnt" podUID="542feb1a-1c08-4a08-96ca-01553cfa6389" Jan 28 00:52:45.760127 systemd[1]: Started sshd@26-10.200.20.30:22-10.200.16.10:38678.service - OpenSSH per-connection server daemon (10.200.16.10:38678). Jan 28 00:52:46.265960 sshd[6005]: Accepted publickey for core from 10.200.16.10 port 38678 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:52:46.267463 sshd-session[6005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:52:46.275046 systemd-logind[1867]: New session 29 of user core. Jan 28 00:52:46.277228 kubelet[3415]: E0128 00:52:46.277148 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" podUID="77965b9f-fcdf-4986-8206-fcf9912f3435" Jan 28 00:52:46.278075 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 28 00:52:46.661951 sshd[6008]: Connection closed by 10.200.16.10 port 38678 Jan 28 00:52:46.662641 sshd-session[6005]: pam_unix(sshd:session): session closed for user core Jan 28 00:52:46.668994 systemd[1]: sshd@26-10.200.20.30:22-10.200.16.10:38678.service: Deactivated successfully. Jan 28 00:52:46.672671 systemd[1]: session-29.scope: Deactivated successfully. Jan 28 00:52:46.673871 systemd-logind[1867]: Session 29 logged out. Waiting for processes to exit. Jan 28 00:52:46.675988 systemd-logind[1867]: Removed session 29. 
Jan 28 00:52:47.280202 kubelet[3415]: E0128 00:52:47.280154 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf746948-lbd7r" podUID="2b821f34-27f6-484c-9dd8-726df28b75d8" Jan 28 00:52:49.277343 containerd[1882]: time="2026-01-28T00:52:49.277276063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 00:52:49.515786 containerd[1882]: time="2026-01-28T00:52:49.515586002Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:52:49.519763 containerd[1882]: time="2026-01-28T00:52:49.519679097Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 00:52:49.519763 containerd[1882]: time="2026-01-28T00:52:49.519723987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 00:52:49.520079 kubelet[3415]: E0128 00:52:49.520028 3415 log.go:32] "PullImage 
from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:52:49.520412 kubelet[3415]: E0128 00:52:49.520087 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:52:49.520412 kubelet[3415]: E0128 00:52:49.520190 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t2vd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminati
onMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzfbp_calico-system(bcaf25ee-c8ae-4368-867f-6ea868477814): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 00:52:49.522152 containerd[1882]: time="2026-01-28T00:52:49.522125870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 00:52:49.804400 containerd[1882]: time="2026-01-28T00:52:49.804312885Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:52:49.808986 containerd[1882]: time="2026-01-28T00:52:49.808936749Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 00:52:49.809171 containerd[1882]: time="2026-01-28T00:52:49.809023096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 00:52:49.809217 kubelet[3415]: E0128 00:52:49.809148 
3415 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:52:49.809217 kubelet[3415]: E0128 00:52:49.809191 3415 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:52:49.809322 kubelet[3415]: E0128 00:52:49.809297 3415 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t2vd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzfbp_calico-system(bcaf25ee-c8ae-4368-867f-6ea868477814): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 00:52:49.810630 kubelet[3415]: E0128 00:52:49.810600 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814" Jan 28 00:52:51.751321 systemd[1]: Started sshd@27-10.200.20.30:22-10.200.16.10:52246.service - OpenSSH per-connection server daemon (10.200.16.10:52246). Jan 28 00:52:52.208987 sshd[6022]: Accepted publickey for core from 10.200.16.10 port 52246 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:52:52.210190 sshd-session[6022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:52:52.216833 systemd-logind[1867]: New session 30 of user core. Jan 28 00:52:52.220129 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 28 00:52:52.585191 sshd[6025]: Connection closed by 10.200.16.10 port 52246 Jan 28 00:52:52.586744 sshd-session[6022]: pam_unix(sshd:session): session closed for user core Jan 28 00:52:52.589818 systemd[1]: sshd@27-10.200.20.30:22-10.200.16.10:52246.service: Deactivated successfully. Jan 28 00:52:52.591789 systemd[1]: session-30.scope: Deactivated successfully. 
Jan 28 00:52:52.594498 systemd-logind[1867]: Session 30 logged out. Waiting for processes to exit. Jan 28 00:52:52.595881 systemd-logind[1867]: Removed session 30. Jan 28 00:52:54.274986 kubelet[3415]: E0128 00:52:54.274852 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" podUID="098ac4f0-5200-473e-8687-26a347b0e3eb" Jan 28 00:52:55.276969 kubelet[3415]: E0128 00:52:55.276925 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-646548674d-lzmnt" podUID="542feb1a-1c08-4a08-96ca-01553cfa6389" Jan 28 00:52:57.275289 kubelet[3415]: E0128 00:52:57.275221 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found\"" pod="calico-system/goldmane-666569f655-thw89" podUID="f44a3119-d828-410a-8e9a-303c84462c56" Jan 28 00:52:57.668705 systemd[1]: Started sshd@28-10.200.20.30:22-10.200.16.10:52248.service - OpenSSH per-connection server daemon (10.200.16.10:52248). Jan 28 00:52:58.126938 sshd[6050]: Accepted publickey for core from 10.200.16.10 port 52248 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:52:58.127790 sshd-session[6050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:52:58.131852 systemd-logind[1867]: New session 31 of user core. Jan 28 00:52:58.141306 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 28 00:52:58.504936 sshd[6053]: Connection closed by 10.200.16.10 port 52248 Jan 28 00:52:58.505493 sshd-session[6050]: pam_unix(sshd:session): session closed for user core Jan 28 00:52:58.509371 systemd[1]: sshd@28-10.200.20.30:22-10.200.16.10:52248.service: Deactivated successfully. Jan 28 00:52:58.512475 systemd[1]: session-31.scope: Deactivated successfully. Jan 28 00:52:58.514455 systemd-logind[1867]: Session 31 logged out. Waiting for processes to exit. Jan 28 00:52:58.516240 systemd-logind[1867]: Removed session 31. 
Jan 28 00:52:59.277705 kubelet[3415]: E0128 00:52:59.277193 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" podUID="77965b9f-fcdf-4986-8206-fcf9912f3435" Jan 28 00:53:01.277456 kubelet[3415]: E0128 00:53:01.277054 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf746948-lbd7r" podUID="2b821f34-27f6-484c-9dd8-726df28b75d8" Jan 28 00:53:02.275950 kubelet[3415]: E0128 00:53:02.275377 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814" Jan 28 00:53:03.594967 systemd[1]: Started sshd@29-10.200.20.30:22-10.200.16.10:43592.service - OpenSSH per-connection server daemon (10.200.16.10:43592). Jan 28 00:53:04.091420 sshd[6071]: Accepted publickey for core from 10.200.16.10 port 43592 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:53:04.092620 sshd-session[6071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:53:04.097368 systemd-logind[1867]: New session 32 of user core. Jan 28 00:53:04.105061 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 28 00:53:04.477641 sshd[6074]: Connection closed by 10.200.16.10 port 43592 Jan 28 00:53:04.478331 sshd-session[6071]: pam_unix(sshd:session): session closed for user core Jan 28 00:53:04.481995 systemd[1]: sshd@29-10.200.20.30:22-10.200.16.10:43592.service: Deactivated successfully. Jan 28 00:53:04.483846 systemd[1]: session-32.scope: Deactivated successfully. Jan 28 00:53:04.484632 systemd-logind[1867]: Session 32 logged out. Waiting for processes to exit. Jan 28 00:53:04.485731 systemd-logind[1867]: Removed session 32. 
Jan 28 00:53:06.275044 kubelet[3415]: E0128 00:53:06.274850 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-28ns6" podUID="098ac4f0-5200-473e-8687-26a347b0e3eb" Jan 28 00:53:08.275653 kubelet[3415]: E0128 00:53:08.275319 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-646548674d-lzmnt" podUID="542feb1a-1c08-4a08-96ca-01553cfa6389" Jan 28 00:53:09.569142 systemd[1]: Started sshd@30-10.200.20.30:22-10.200.16.10:51624.service - OpenSSH per-connection server daemon (10.200.16.10:51624). Jan 28 00:53:10.067400 sshd[6115]: Accepted publickey for core from 10.200.16.10 port 51624 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:53:10.068799 sshd-session[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:53:10.075515 systemd-logind[1867]: New session 33 of user core. Jan 28 00:53:10.080099 systemd[1]: Started session-33.scope - Session 33 of User core. 
Jan 28 00:53:10.454660 sshd[6118]: Connection closed by 10.200.16.10 port 51624 Jan 28 00:53:10.455364 sshd-session[6115]: pam_unix(sshd:session): session closed for user core Jan 28 00:53:10.459203 systemd-logind[1867]: Session 33 logged out. Waiting for processes to exit. Jan 28 00:53:10.459830 systemd[1]: sshd@30-10.200.20.30:22-10.200.16.10:51624.service: Deactivated successfully. Jan 28 00:53:10.462546 systemd[1]: session-33.scope: Deactivated successfully. Jan 28 00:53:10.465799 systemd-logind[1867]: Removed session 33. Jan 28 00:53:11.276418 kubelet[3415]: E0128 00:53:11.276325 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-thw89" podUID="f44a3119-d828-410a-8e9a-303c84462c56" Jan 28 00:53:12.274456 kubelet[3415]: E0128 00:53:12.274401 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b468596cf-lg47w" podUID="77965b9f-fcdf-4986-8206-fcf9912f3435" Jan 28 00:53:12.276017 kubelet[3415]: E0128 00:53:12.275983 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5fcf746948-lbd7r" podUID="2b821f34-27f6-484c-9dd8-726df28b75d8" Jan 28 00:53:15.275936 kubelet[3415]: E0128 00:53:15.275869 3415 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzfbp" podUID="bcaf25ee-c8ae-4368-867f-6ea868477814" Jan 28 00:53:15.538516 systemd[1]: Started 
sshd@31-10.200.20.30:22-10.200.16.10:51630.service - OpenSSH per-connection server daemon (10.200.16.10:51630). Jan 28 00:53:16.002651 sshd[6129]: Accepted publickey for core from 10.200.16.10 port 51630 ssh2: RSA SHA256:28WgsPGsk+sBElg7uwN9F5Iud3JMcThNsd0JCAQeNzU Jan 28 00:53:16.005328 sshd-session[6129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:53:16.011609 systemd-logind[1867]: New session 34 of user core. Jan 28 00:53:16.019087 systemd[1]: Started session-34.scope - Session 34 of User core. Jan 28 00:53:16.406653 sshd[6132]: Connection closed by 10.200.16.10 port 51630 Jan 28 00:53:16.407521 sshd-session[6129]: pam_unix(sshd:session): session closed for user core Jan 28 00:53:16.412928 systemd-logind[1867]: Session 34 logged out. Waiting for processes to exit. Jan 28 00:53:16.414607 systemd[1]: sshd@31-10.200.20.30:22-10.200.16.10:51630.service: Deactivated successfully. Jan 28 00:53:16.418466 systemd[1]: session-34.scope: Deactivated successfully. Jan 28 00:53:16.420706 systemd-logind[1867]: Removed session 34.