Dec 16 12:28:32.117118 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Dec 16 12:28:32.117135 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 16 12:28:32.117141 kernel: KASLR enabled
Dec 16 12:28:32.117145 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Dec 16 12:28:32.117149 kernel: printk: legacy bootconsole [pl11] enabled
Dec 16 12:28:32.117153 kernel: efi: EFI v2.7 by EDK II
Dec 16 12:28:32.117158 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db7d598
Dec 16 12:28:32.117162 kernel: random: crng init done
Dec 16 12:28:32.117167 kernel: secureboot: Secure boot disabled
Dec 16 12:28:32.117171 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:28:32.117175 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Dec 16 12:28:32.117179 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:32.117183 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:32.117187 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Dec 16 12:28:32.117193 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:32.117197 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:32.117202 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:32.117206 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:32.117210 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:32.117215 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:32.117219 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Dec 16 12:28:32.117223 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:32.117227 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Dec 16 12:28:32.117231 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 16 12:28:32.117236 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Dec 16 12:28:32.117240 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Dec 16 12:28:32.117244 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Dec 16 12:28:32.117248 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Dec 16 12:28:32.117252 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Dec 16 12:28:32.117256 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Dec 16 12:28:32.117261 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Dec 16 12:28:32.117266 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Dec 16 12:28:32.117270 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Dec 16 12:28:32.117274 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Dec 16 12:28:32.117278 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Dec 16 12:28:32.117282 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Dec 16 12:28:32.117286 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Dec 16 12:28:32.117290 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Dec 16 12:28:32.117295 kernel: Zone ranges:
Dec 16 12:28:32.117299 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Dec 16 12:28:32.117306 kernel: DMA32 empty
Dec 16 12:28:32.117310 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Dec 16 12:28:32.117315 kernel: Device empty
Dec 16 12:28:32.117319 kernel: Movable zone start for each node
Dec 16 12:28:32.117323 kernel: Early memory node ranges
Dec 16 12:28:32.117328 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Dec 16 12:28:32.117333 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Dec 16 12:28:32.117337 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Dec 16 12:28:32.117342 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Dec 16 12:28:32.117346 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Dec 16 12:28:32.117350 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Dec 16 12:28:32.117355 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Dec 16 12:28:32.117359 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Dec 16 12:28:32.117363 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Dec 16 12:28:32.117368 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Dec 16 12:28:32.117372 kernel: psci: probing for conduit method from ACPI.
Dec 16 12:28:32.117377 kernel: psci: PSCIv1.3 detected in firmware.
Dec 16 12:28:32.117381 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 16 12:28:32.117386 kernel: psci: MIGRATE_INFO_TYPE not supported.
Dec 16 12:28:32.117390 kernel: psci: SMC Calling Convention v1.4
Dec 16 12:28:32.117403 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Dec 16 12:28:32.117414 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Dec 16 12:28:32.117418 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 16 12:28:32.117423 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 16 12:28:32.117427 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 16 12:28:32.117434 kernel: Detected PIPT I-cache on CPU0
Dec 16 12:28:32.117438 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Dec 16 12:28:32.117443 kernel: CPU features: detected: GIC system register CPU interface
Dec 16 12:28:32.117447 kernel: CPU features: detected: Spectre-v4
Dec 16 12:28:32.117452 kernel: CPU features: detected: Spectre-BHB
Dec 16 12:28:32.117457 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 16 12:28:32.117462 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 16 12:28:32.117466 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Dec 16 12:28:32.117478 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 16 12:28:32.117492 kernel: alternatives: applying boot alternatives
Dec 16 12:28:32.117498 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:28:32.117503 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 12:28:32.117507 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 12:28:32.117511 kernel: Fallback order for Node 0: 0
Dec 16 12:28:32.117516 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Dec 16 12:28:32.117523 kernel: Policy zone: Normal
Dec 16 12:28:32.117527 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:28:32.117531 kernel: software IO TLB: area num 2.
Dec 16 12:28:32.117536 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB)
Dec 16 12:28:32.117540 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 12:28:32.117544 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:28:32.117550 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:28:32.117554 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 12:28:32.117559 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:28:32.117563 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 12:28:32.117568 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 12:28:32.117572 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 12:28:32.117577 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:28:32.117582 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:28:32.117586 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 16 12:28:32.117590 kernel: GICv3: 960 SPIs implemented
Dec 16 12:28:32.117595 kernel: GICv3: 0 Extended SPIs implemented
Dec 16 12:28:32.117599 kernel: Root IRQ handler: gic_handle_irq
Dec 16 12:28:32.117603 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Dec 16 12:28:32.117608 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Dec 16 12:28:32.117612 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Dec 16 12:28:32.117616 kernel: ITS: No ITS available, not enabling LPIs
Dec 16 12:28:32.117621 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:28:32.117626 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Dec 16 12:28:32.117631 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 12:28:32.117635 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Dec 16 12:28:32.117640 kernel: Console: colour dummy device 80x25
Dec 16 12:28:32.117645 kernel: printk: legacy console [tty1] enabled
Dec 16 12:28:32.117649 kernel: ACPI: Core revision 20240827
Dec 16 12:28:32.117654 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Dec 16 12:28:32.117658 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:28:32.117663 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:28:32.117667 kernel: landlock: Up and running.
Dec 16 12:28:32.117673 kernel: SELinux: Initializing.
Dec 16 12:28:32.117677 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:28:32.117682 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:28:32.117686 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Dec 16 12:28:32.117691 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0
Dec 16 12:28:32.117699 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Dec 16 12:28:32.117704 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:28:32.117709 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:28:32.117714 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:28:32.117719 kernel: Remapping and enabling EFI services.
Dec 16 12:28:32.117723 kernel: smp: Bringing up secondary CPUs ...
Dec 16 12:28:32.117728 kernel: Detected PIPT I-cache on CPU1
Dec 16 12:28:32.117734 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Dec 16 12:28:32.117739 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Dec 16 12:28:32.117743 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 12:28:32.117748 kernel: SMP: Total of 2 processors activated.
Dec 16 12:28:32.117753 kernel: CPU: All CPU(s) started at EL1
Dec 16 12:28:32.117758 kernel: CPU features: detected: 32-bit EL0 Support
Dec 16 12:28:32.117763 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Dec 16 12:28:32.117768 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 16 12:28:32.117773 kernel: CPU features: detected: Common not Private translations
Dec 16 12:28:32.117777 kernel: CPU features: detected: CRC32 instructions
Dec 16 12:28:32.117782 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Dec 16 12:28:32.117787 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 16 12:28:32.117792 kernel: CPU features: detected: LSE atomic instructions
Dec 16 12:28:32.117796 kernel: CPU features: detected: Privileged Access Never
Dec 16 12:28:32.117802 kernel: CPU features: detected: Speculation barrier (SB)
Dec 16 12:28:32.117807 kernel: CPU features: detected: TLB range maintenance instructions
Dec 16 12:28:32.117812 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 16 12:28:32.117816 kernel: CPU features: detected: Scalable Vector Extension
Dec 16 12:28:32.117821 kernel: alternatives: applying system-wide alternatives
Dec 16 12:28:32.117826 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Dec 16 12:28:32.117831 kernel: SVE: maximum available vector length 16 bytes per vector
Dec 16 12:28:32.117835 kernel: SVE: default vector length 16 bytes per vector
Dec 16 12:28:32.117840 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved)
Dec 16 12:28:32.117846 kernel: devtmpfs: initialized
Dec 16 12:28:32.117851 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:28:32.117856 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 12:28:32.117860 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 16 12:28:32.117865 kernel: 0 pages in range for non-PLT usage
Dec 16 12:28:32.117870 kernel: 508400 pages in range for PLT usage
Dec 16 12:28:32.117875 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:28:32.117879 kernel: SMBIOS 3.1.0 present.
Dec 16 12:28:32.117885 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Dec 16 12:28:32.117890 kernel: DMI: Memory slots populated: 2/2
Dec 16 12:28:32.117894 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:28:32.117899 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 16 12:28:32.117904 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 16 12:28:32.117909 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 16 12:28:32.117913 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:28:32.117918 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Dec 16 12:28:32.117923 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:28:32.117929 kernel: cpuidle: using governor menu
Dec 16 12:28:32.117933 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 16 12:28:32.117938 kernel: ASID allocator initialised with 32768 entries
Dec 16 12:28:32.117943 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:28:32.117948 kernel: Serial: AMBA PL011 UART driver
Dec 16 12:28:32.117952 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:28:32.117957 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:28:32.117962 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 16 12:28:32.117966 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 16 12:28:32.117972 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:28:32.117977 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:28:32.117981 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 16 12:28:32.117986 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 16 12:28:32.117991 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:28:32.117996 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:28:32.118000 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:28:32.118005 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:28:32.118010 kernel: ACPI: Interpreter enabled
Dec 16 12:28:32.118015 kernel: ACPI: Using GIC for interrupt routing
Dec 16 12:28:32.118020 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Dec 16 12:28:32.118025 kernel: printk: legacy console [ttyAMA0] enabled
Dec 16 12:28:32.118030 kernel: printk: legacy bootconsole [pl11] disabled
Dec 16 12:28:32.118035 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Dec 16 12:28:32.118039 kernel: ACPI: CPU0 has been hot-added
Dec 16 12:28:32.118044 kernel: ACPI: CPU1 has been hot-added
Dec 16 12:28:32.118049 kernel: iommu: Default domain type: Translated
Dec 16 12:28:32.118054 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 16 12:28:32.118059 kernel: efivars: Registered efivars operations
Dec 16 12:28:32.118064 kernel: vgaarb: loaded
Dec 16 12:28:32.118069 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 16 12:28:32.118073 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 12:28:32.118078 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 12:28:32.118083 kernel: pnp: PnP ACPI init
Dec 16 12:28:32.118087 kernel: pnp: PnP ACPI: found 0 devices
Dec 16 12:28:32.118092 kernel: NET: Registered PF_INET protocol family
Dec 16 12:28:32.118097 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 12:28:32.118102 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 12:28:32.118107 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 12:28:32.118112 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:28:32.118117 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 12:28:32.118122 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 12:28:32.118127 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:28:32.118131 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:28:32.118136 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 12:28:32.118141 kernel: PCI: CLS 0 bytes, default 64
Dec 16 12:28:32.118146 kernel: kvm [1]: HYP mode not available
Dec 16 12:28:32.118151 kernel: Initialise system trusted keyrings
Dec 16 12:28:32.118156 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 12:28:32.118161 kernel: Key type asymmetric registered
Dec 16 12:28:32.118165 kernel: Asymmetric key parser 'x509' registered
Dec 16 12:28:32.118170 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 16 12:28:32.118175 kernel: io scheduler mq-deadline registered
Dec 16 12:28:32.118180 kernel: io scheduler kyber registered
Dec 16 12:28:32.118185 kernel: io scheduler bfq registered
Dec 16 12:28:32.118189 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 12:28:32.118195 kernel: thunder_xcv, ver 1.0
Dec 16 12:28:32.118200 kernel: thunder_bgx, ver 1.0
Dec 16 12:28:32.118204 kernel: nicpf, ver 1.0
Dec 16 12:28:32.118209 kernel: nicvf, ver 1.0
Dec 16 12:28:32.118318 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 16 12:28:32.118367 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:28:31 UTC (1765888111)
Dec 16 12:28:32.118374 kernel: efifb: probing for efifb
Dec 16 12:28:32.118380 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 16 12:28:32.118385 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 16 12:28:32.118389 kernel: efifb: scrolling: redraw
Dec 16 12:28:32.118394 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 12:28:32.118399 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 12:28:32.118404 kernel: fb0: EFI VGA frame buffer device
Dec 16 12:28:32.118408 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Dec 16 12:28:32.118413 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 12:28:32.118418 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 16 12:28:32.118424 kernel: watchdog: NMI not fully supported
Dec 16 12:28:32.118428 kernel: watchdog: Hard watchdog permanently disabled
Dec 16 12:28:32.118433 kernel: NET: Registered PF_INET6 protocol family
Dec 16 12:28:32.118438 kernel: Segment Routing with IPv6
Dec 16 12:28:32.118443 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 12:28:32.118447 kernel: NET: Registered PF_PACKET protocol family
Dec 16 12:28:32.118452 kernel: Key type dns_resolver registered
Dec 16 12:28:32.118457 kernel: registered taskstats version 1
Dec 16 12:28:32.118461 kernel: Loading compiled-in X.509 certificates
Dec 16 12:28:32.118466 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 16 12:28:32.118518 kernel: Demotion targets for Node 0: null
Dec 16 12:28:32.118524 kernel: Key type .fscrypt registered
Dec 16 12:28:32.118528 kernel: Key type fscrypt-provisioning registered
Dec 16 12:28:32.118533 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 12:28:32.118538 kernel: ima: Allocated hash algorithm: sha1
Dec 16 12:28:32.118543 kernel: ima: No architecture policies found
Dec 16 12:28:32.118548 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 16 12:28:32.118552 kernel: clk: Disabling unused clocks
Dec 16 12:28:32.118557 kernel: PM: genpd: Disabling unused power domains
Dec 16 12:28:32.118563 kernel: Warning: unable to open an initial console.
Dec 16 12:28:32.118568 kernel: Freeing unused kernel memory: 39552K
Dec 16 12:28:32.118573 kernel: Run /init as init process
Dec 16 12:28:32.118578 kernel: with arguments:
Dec 16 12:28:32.118582 kernel: /init
Dec 16 12:28:32.118587 kernel: with environment:
Dec 16 12:28:32.118591 kernel: HOME=/
Dec 16 12:28:32.118596 kernel: TERM=linux
Dec 16 12:28:32.118602 systemd[1]: Successfully made /usr/ read-only.
Dec 16 12:28:32.118610 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:28:32.118616 systemd[1]: Detected virtualization microsoft.
Dec 16 12:28:32.118621 systemd[1]: Detected architecture arm64.
Dec 16 12:28:32.118626 systemd[1]: Running in initrd.
Dec 16 12:28:32.118631 systemd[1]: No hostname configured, using default hostname.
Dec 16 12:28:32.118636 systemd[1]: Hostname set to .
Dec 16 12:28:32.118641 systemd[1]: Initializing machine ID from random generator.
Dec 16 12:28:32.118647 systemd[1]: Queued start job for default target initrd.target.
Dec 16 12:28:32.118652 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:28:32.118658 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:28:32.118663 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 12:28:32.118669 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:28:32.118674 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 12:28:32.118680 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 12:28:32.118687 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 12:28:32.118692 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 12:28:32.118697 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:28:32.118702 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:28:32.118708 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:28:32.118713 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:28:32.118718 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:28:32.118723 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:28:32.118729 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:28:32.118735 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:28:32.118740 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 12:28:32.118745 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 12:28:32.118750 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:28:32.118755 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:28:32.118760 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:28:32.118766 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:28:32.118772 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 12:28:32.118777 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:28:32.118782 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 12:28:32.118787 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 12:28:32.118792 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 12:28:32.118798 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:28:32.118803 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:28:32.118822 systemd-journald[225]: Collecting audit messages is disabled.
Dec 16 12:28:32.118836 systemd-journald[225]: Journal started
Dec 16 12:28:32.118850 systemd-journald[225]: Runtime Journal (/run/log/journal/a33013e24f784892b38ba9e5a8cac239) is 8M, max 78.3M, 70.3M free.
Dec 16 12:28:32.121709 systemd-modules-load[227]: Inserted module 'overlay'
Dec 16 12:28:32.134725 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:28:32.148484 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 12:28:32.148514 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:28:32.159811 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 12:28:32.172431 kernel: Bridge firewalling registered
Dec 16 12:28:32.160661 systemd-modules-load[227]: Inserted module 'br_netfilter'
Dec 16 12:28:32.168517 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:28:32.178122 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 12:28:32.182251 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:28:32.195646 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:28:32.211596 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:28:32.234594 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:28:32.241693 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:28:32.258795 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:28:32.264019 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:28:32.271642 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 12:28:32.277107 systemd-tmpfiles[247]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 12:28:32.296608 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:28:32.304030 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:28:32.317768 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:28:32.343845 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:28:32.357066 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:28:32.363398 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 12:28:32.392630 systemd-resolved[256]: Positive Trust Anchors:
Dec 16 12:28:32.392644 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:28:32.392665 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:28:32.394376 systemd-resolved[256]: Defaulting to hostname 'linux'.
Dec 16 12:28:32.395036 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:28:32.408210 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:28:32.455784 dracut-cmdline[268]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:28:32.536499 kernel: SCSI subsystem initialized
Dec 16 12:28:32.541482 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 12:28:32.549491 kernel: iscsi: registered transport (tcp)
Dec 16 12:28:32.561907 kernel: iscsi: registered transport (qla4xxx)
Dec 16 12:28:32.561918 kernel: QLogic iSCSI HBA Driver
Dec 16 12:28:32.575973 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:28:32.594894 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:28:32.607445 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:28:32.649392 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:28:32.654417 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 12:28:32.717489 kernel: raid6: neonx8 gen() 18574 MB/s
Dec 16 12:28:32.736481 kernel: raid6: neonx4 gen() 18569 MB/s
Dec 16 12:28:32.755479 kernel: raid6: neonx2 gen() 17085 MB/s
Dec 16 12:28:32.775479 kernel: raid6: neonx1 gen() 14994 MB/s
Dec 16 12:28:32.794478 kernel: raid6: int64x8 gen() 10529 MB/s
Dec 16 12:28:32.813478 kernel: raid6: int64x4 gen() 10605 MB/s
Dec 16 12:28:32.833478 kernel: raid6: int64x2 gen() 8982 MB/s
Dec 16 12:28:32.854709 kernel: raid6: int64x1 gen() 7020 MB/s
Dec 16 12:28:32.854780 kernel: raid6: using algorithm neonx8 gen() 18574 MB/s
Dec 16 12:28:32.877383 kernel: raid6: .... xor() 14908 MB/s, rmw enabled
Dec 16 12:28:32.877391 kernel: raid6: using neon recovery algorithm
Dec 16 12:28:32.887054 kernel: xor: measuring software checksum speed
Dec 16 12:28:32.887066 kernel: 8regs : 28647 MB/sec
Dec 16 12:28:32.889777 kernel: 32regs : 28822 MB/sec
Dec 16 12:28:32.892401 kernel: arm64_neon : 37566 MB/sec
Dec 16 12:28:32.895315 kernel: xor: using function: arm64_neon (37566 MB/sec)
Dec 16 12:28:32.933491 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 12:28:32.939129 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:28:32.949021 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:28:32.975635 systemd-udevd[475]: Using default interface naming scheme 'v255'.
Dec 16 12:28:32.980156 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:28:32.993627 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 12:28:33.027906 dracut-pre-trigger[490]: rd.md=0: removing MD RAID activation Dec 16 12:28:33.052052 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:28:33.058417 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:28:33.104444 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:28:33.118218 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 16 12:28:33.183616 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:28:33.185838 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:28:33.203895 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:28:33.220893 kernel: hv_vmbus: Vmbus version:5.3 Dec 16 12:28:33.220911 kernel: hv_vmbus: registering driver hid_hyperv Dec 16 12:28:33.217681 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:28:33.246159 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 16 12:28:33.246177 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 16 12:28:33.246303 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 16 12:28:33.231388 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 12:28:33.257399 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 16 12:28:33.247365 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 16 12:28:33.298944 kernel: hv_vmbus: registering driver hv_storvsc Dec 16 12:28:33.298961 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 16 12:28:33.298968 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 16 12:28:33.298975 kernel: scsi host0: storvsc_host_t Dec 16 12:28:33.299110 kernel: hv_vmbus: registering driver hv_netvsc Dec 16 12:28:33.299117 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 16 12:28:33.247436 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:28:33.319145 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Dec 16 12:28:33.319340 kernel: scsi host1: storvsc_host_t Dec 16 12:28:33.283267 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:28:33.332512 kernel: PTP clock support registered Dec 16 12:28:33.342980 kernel: hv_utils: Registering HyperV Utility Driver Dec 16 12:28:33.343013 kernel: hv_vmbus: registering driver hv_utils Dec 16 12:28:33.600274 kernel: hv_utils: Heartbeat IC version 3.0 Dec 16 12:28:33.600307 kernel: hv_utils: TimeSync IC version 4.0 Dec 16 12:28:33.600314 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 16 12:28:33.352409 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:28:33.617600 kernel: hv_utils: Shutdown IC version 3.2 Dec 16 12:28:33.617619 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 16 12:28:33.617755 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 16 12:28:33.617821 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 16 12:28:33.617882 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 16 12:28:33.604290 systemd-resolved[256]: Clock change detected. Flushing caches. 
Dec 16 12:28:33.643919 kernel: hv_netvsc 000d3ac3-688e-000d-3ac3-688e000d3ac3 eth0: VF slot 1 added Dec 16 12:28:33.644049 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#253 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 16 12:28:33.644110 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 16 12:28:33.656923 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 12:28:33.656952 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 16 12:28:33.668430 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 16 12:28:33.668576 kernel: hv_vmbus: registering driver hv_pci Dec 16 12:28:33.668585 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 12:28:33.668593 kernel: hv_pci 8ed44019-6e27-4e01-90bb-b70cbc0e3ee6: PCI VMBus probing: Using version 0x10004 Dec 16 12:28:33.674317 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 16 12:28:33.685458 kernel: hv_pci 8ed44019-6e27-4e01-90bb-b70cbc0e3ee6: PCI host bridge to bus 6e27:00 Dec 16 12:28:33.685679 kernel: pci_bus 6e27:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 16 12:28:33.685777 kernel: pci_bus 6e27:00: No busn resource found for root bus, will use [bus 00-ff] Dec 16 12:28:33.702752 kernel: pci 6e27:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Dec 16 12:28:33.702821 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#254 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 12:28:33.702946 kernel: pci 6e27:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 16 12:28:33.712915 kernel: pci 6e27:00:02.0: enabling Extended Tags Dec 16 12:28:33.725574 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#230 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 12:28:33.725723 kernel: pci 6e27:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6e27:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Dec 16 12:28:33.747772 kernel: pci_bus 6e27:00: busn_res: 
[bus 00-ff] end is updated to 00 Dec 16 12:28:33.747932 kernel: pci 6e27:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Dec 16 12:28:33.805528 kernel: mlx5_core 6e27:00:02.0: enabling device (0000 -> 0002) Dec 16 12:28:33.813586 kernel: mlx5_core 6e27:00:02.0: PTM is not supported by PCIe Dec 16 12:28:33.813684 kernel: mlx5_core 6e27:00:02.0: firmware version: 16.30.5006 Dec 16 12:28:34.000542 kernel: hv_netvsc 000d3ac3-688e-000d-3ac3-688e000d3ac3 eth0: VF registering: eth1 Dec 16 12:28:34.000732 kernel: mlx5_core 6e27:00:02.0 eth1: joined to eth0 Dec 16 12:28:34.006546 kernel: mlx5_core 6e27:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 16 12:28:34.017618 kernel: mlx5_core 6e27:00:02.0 enP28199s1: renamed from eth1 Dec 16 12:28:34.072592 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 16 12:28:34.191816 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 16 12:28:34.208650 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 16 12:28:34.232510 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 16 12:28:34.245311 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Dec 16 12:28:34.251262 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 12:28:34.264668 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:28:34.273952 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:28:34.284101 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:28:34.294761 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 12:28:34.324266 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
Dec 16 12:28:34.342621 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 16 12:28:34.348483 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:28:34.361193 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 12:28:35.371585 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#222 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 16 12:28:35.385611 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 12:28:35.387563 disk-uuid[660]: The operation has completed successfully. Dec 16 12:28:35.457146 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 12:28:35.457244 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 12:28:35.482697 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 12:28:35.503876 sh[825]: Success Dec 16 12:28:35.536352 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 12:28:35.536392 kernel: device-mapper: uevent: version 1.0.3 Dec 16 12:28:35.543693 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 12:28:35.551570 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 16 12:28:35.786058 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:28:35.795902 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 12:28:35.803842 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 16 12:28:35.833950 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (843) Dec 16 12:28:35.833987 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248 Dec 16 12:28:35.838861 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:28:36.076617 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 12:28:36.076709 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 12:28:36.104371 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 12:28:36.108714 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:28:36.117244 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 12:28:36.117947 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 12:28:36.141217 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 12:28:36.173788 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (866) Dec 16 12:28:36.184273 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:28:36.184311 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:28:36.208678 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:28:36.208728 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:28:36.217587 kernel: BTRFS info (device sda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:28:36.218154 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 12:28:36.227964 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 16 12:28:36.264655 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:28:36.278343 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:28:36.310154 systemd-networkd[1012]: lo: Link UP Dec 16 12:28:36.310163 systemd-networkd[1012]: lo: Gained carrier Dec 16 12:28:36.310924 systemd-networkd[1012]: Enumeration completed Dec 16 12:28:36.312790 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:28:36.316066 systemd-networkd[1012]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:28:36.316069 systemd-networkd[1012]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:28:36.320861 systemd[1]: Reached target network.target - Network. Dec 16 12:28:36.409584 kernel: mlx5_core 6e27:00:02.0 enP28199s1: Link up Dec 16 12:28:36.442159 systemd-networkd[1012]: enP28199s1: Link UP Dec 16 12:28:36.445830 kernel: hv_netvsc 000d3ac3-688e-000d-3ac3-688e000d3ac3 eth0: Data path switched to VF: enP28199s1 Dec 16 12:28:36.442219 systemd-networkd[1012]: eth0: Link UP Dec 16 12:28:36.442308 systemd-networkd[1012]: eth0: Gained carrier Dec 16 12:28:36.442321 systemd-networkd[1012]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:28:36.461933 systemd-networkd[1012]: enP28199s1: Gained carrier Dec 16 12:28:36.471592 systemd-networkd[1012]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:28:37.230397 ignition[961]: Ignition 2.22.0 Dec 16 12:28:37.230414 ignition[961]: Stage: fetch-offline Dec 16 12:28:37.234841 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Dec 16 12:28:37.230507 ignition[961]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:37.243975 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 16 12:28:37.230513 ignition[961]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:37.230602 ignition[961]: parsed url from cmdline: "" Dec 16 12:28:37.230605 ignition[961]: no config URL provided Dec 16 12:28:37.230608 ignition[961]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:28:37.230614 ignition[961]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:28:37.230618 ignition[961]: failed to fetch config: resource requires networking Dec 16 12:28:37.230901 ignition[961]: Ignition finished successfully Dec 16 12:28:37.273695 ignition[1026]: Ignition 2.22.0 Dec 16 12:28:37.273700 ignition[1026]: Stage: fetch Dec 16 12:28:37.273893 ignition[1026]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:37.273900 ignition[1026]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:37.273968 ignition[1026]: parsed url from cmdline: "" Dec 16 12:28:37.273970 ignition[1026]: no config URL provided Dec 16 12:28:37.273974 ignition[1026]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:28:37.273979 ignition[1026]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:28:37.273993 ignition[1026]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 16 12:28:37.351806 ignition[1026]: GET result: OK Dec 16 12:28:37.351872 ignition[1026]: config has been read from IMDS userdata Dec 16 12:28:37.351890 ignition[1026]: parsing config with SHA512: 275e2254424ae38191e70d620fc297620a950bfbaf19dfee93f5b0fdae155937821644c2b3a89106c05b08bc0e3e72fba98c7855dc9df9d9023a13deac4c57cd Dec 16 12:28:37.357081 unknown[1026]: fetched base config from "system" Dec 16 12:28:37.357282 ignition[1026]: fetch: fetch complete Dec 16 12:28:37.357086 unknown[1026]: fetched base config from "system" 
Dec 16 12:28:37.357286 ignition[1026]: fetch: fetch passed Dec 16 12:28:37.357089 unknown[1026]: fetched user config from "azure" Dec 16 12:28:37.357319 ignition[1026]: Ignition finished successfully Dec 16 12:28:37.359250 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 12:28:37.367045 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 12:28:37.402838 ignition[1033]: Ignition 2.22.0 Dec 16 12:28:37.405348 ignition[1033]: Stage: kargs Dec 16 12:28:37.405532 ignition[1033]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:37.405539 ignition[1033]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:37.409485 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 12:28:37.406081 ignition[1033]: kargs: kargs passed Dec 16 12:28:37.414854 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 12:28:37.406119 ignition[1033]: Ignition finished successfully Dec 16 12:28:37.448506 ignition[1039]: Ignition 2.22.0 Dec 16 12:28:37.448523 ignition[1039]: Stage: disks Dec 16 12:28:37.448702 ignition[1039]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:37.453780 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 12:28:37.448709 ignition[1039]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:37.459179 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 12:28:37.449181 ignition[1039]: disks: disks passed Dec 16 12:28:37.463752 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 12:28:37.449219 ignition[1039]: Ignition finished successfully Dec 16 12:28:37.472290 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:28:37.480242 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:28:37.488547 systemd[1]: Reached target basic.target - Basic System. 
Dec 16 12:28:37.498063 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 12:28:37.804405 systemd-fsck[1047]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Dec 16 12:28:37.813606 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 12:28:37.820524 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 12:28:38.040597 kernel: EXT4-fs (sda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none. Dec 16 12:28:38.041143 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 12:28:38.048220 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 12:28:38.067479 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:28:38.075860 systemd-networkd[1012]: eth0: Gained IPv6LL Dec 16 12:28:38.085115 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 12:28:38.093870 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 12:28:38.104674 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 12:28:38.104707 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:28:38.110388 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 12:28:38.124114 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 16 12:28:38.150680 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1061) Dec 16 12:28:38.161283 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:28:38.161309 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:28:38.171430 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:28:38.171478 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:28:38.173530 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 12:28:38.854366 coreos-metadata[1063]: Dec 16 12:28:38.854 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 12:28:38.862693 coreos-metadata[1063]: Dec 16 12:28:38.862 INFO Fetch successful Dec 16 12:28:38.866794 coreos-metadata[1063]: Dec 16 12:28:38.864 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 16 12:28:38.875516 coreos-metadata[1063]: Dec 16 12:28:38.871 INFO Fetch successful Dec 16 12:28:38.875516 coreos-metadata[1063]: Dec 16 12:28:38.871 INFO wrote hostname ci-4459.2.2-a-719f16aeb7 to /sysroot/etc/hostname Dec 16 12:28:38.876059 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 12:28:39.043107 initrd-setup-root[1091]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 12:28:39.078555 initrd-setup-root[1098]: cut: /sysroot/etc/group: No such file or directory Dec 16 12:28:39.097451 initrd-setup-root[1105]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 12:28:39.103689 initrd-setup-root[1112]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 12:28:39.861027 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 12:28:39.866991 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 12:28:39.890059 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Dec 16 12:28:39.902926 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 12:28:39.913207 kernel: BTRFS info (device sda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:28:39.931815 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 12:28:39.941178 ignition[1180]: INFO : Ignition 2.22.0 Dec 16 12:28:39.941178 ignition[1180]: INFO : Stage: mount Dec 16 12:28:39.941178 ignition[1180]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:39.941178 ignition[1180]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:39.941178 ignition[1180]: INFO : mount: mount passed Dec 16 12:28:39.941178 ignition[1180]: INFO : Ignition finished successfully Dec 16 12:28:39.943845 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 12:28:39.950439 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 12:28:39.982670 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:28:40.019576 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1192) Dec 16 12:28:40.030602 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:28:40.030646 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:28:40.040647 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:28:40.040672 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:28:40.042320 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 12:28:40.071114 ignition[1210]: INFO : Ignition 2.22.0 Dec 16 12:28:40.071114 ignition[1210]: INFO : Stage: files Dec 16 12:28:40.077398 ignition[1210]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:40.077398 ignition[1210]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:40.077398 ignition[1210]: DEBUG : files: compiled without relabeling support, skipping Dec 16 12:28:40.091802 ignition[1210]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 12:28:40.091802 ignition[1210]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 12:28:40.247488 ignition[1210]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 12:28:40.253105 ignition[1210]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 12:28:40.253105 ignition[1210]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 12:28:40.247868 unknown[1210]: wrote ssh authorized keys file for user: core Dec 16 12:28:40.327993 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:28:40.336485 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 16 12:28:40.524904 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 12:28:40.707095 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:28:40.715018 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 12:28:40.715018 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Dec 16 12:28:40.715018 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:28:40.715018 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:28:40.715018 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:28:40.715018 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:28:40.715018 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:28:40.715018 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:28:40.774590 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:28:40.774590 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:28:40.774590 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:28:40.774590 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:28:40.774590 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:28:40.774590 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Dec 16 12:28:41.169970 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 12:28:41.384533 ignition[1210]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 16 12:28:41.384533 ignition[1210]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 12:28:41.416607 ignition[1210]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:28:41.431567 ignition[1210]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:28:41.431567 ignition[1210]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 12:28:41.447402 ignition[1210]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 12:28:41.447402 ignition[1210]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 12:28:41.447402 ignition[1210]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:28:41.447402 ignition[1210]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:28:41.447402 ignition[1210]: INFO : files: files passed Dec 16 12:28:41.447402 ignition[1210]: INFO : Ignition finished successfully Dec 16 12:28:41.441958 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 12:28:41.452942 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 12:28:41.477026 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Dec 16 12:28:41.488814 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 12:28:41.497934 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 12:28:41.537583 initrd-setup-root-after-ignition[1239]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:28:41.537583 initrd-setup-root-after-ignition[1239]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:28:41.553784 initrd-setup-root-after-ignition[1243]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:28:41.551614 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:28:41.559909 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 12:28:41.572417 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 12:28:41.618220 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 12:28:41.618326 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 12:28:41.629051 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 12:28:41.638943 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 12:28:41.648307 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 12:28:41.648956 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 12:28:41.686104 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:28:41.692993 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:28:41.713678 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:28:41.719438 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Dec 16 12:28:41.729816 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:28:41.738882 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:28:41.738973 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:28:41.752433 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:28:41.757655 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:28:41.767415 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:28:41.777400 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:28:41.787347 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:28:41.798002 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:28:41.808686 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:28:41.819547 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:28:41.830433 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:28:41.839422 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:28:41.850458 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:28:41.858771 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:28:41.858889 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:28:41.871714 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:28:41.881529 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:28:41.891791 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:28:41.891855 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:28:41.902456 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Dec 16 12:28:41.902564 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:28:41.960122 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 12:28:41.960224 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:28:41.966128 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:28:41.966202 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:28:41.976927 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 16 12:28:41.977000 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 12:28:42.050632 ignition[1263]: INFO : Ignition 2.22.0 Dec 16 12:28:42.050632 ignition[1263]: INFO : Stage: umount Dec 16 12:28:42.050632 ignition[1263]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:42.050632 ignition[1263]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:42.050632 ignition[1263]: INFO : umount: umount passed Dec 16 12:28:42.050632 ignition[1263]: INFO : Ignition finished successfully Dec 16 12:28:41.987675 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:28:42.003254 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:28:42.003368 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:28:42.022724 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:28:42.029802 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:28:42.034865 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:28:42.046415 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:28:42.046496 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:28:42.056642 systemd[1]: ignition-mount.service: Deactivated successfully. 
Dec 16 12:28:42.058005 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 16 12:28:42.066227 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 16 12:28:42.066317 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 16 12:28:42.076028 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 16 12:28:42.076079 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 16 12:28:42.093058 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 16 12:28:42.093105 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 16 12:28:42.105853 systemd[1]: Stopped target network.target - Network.
Dec 16 12:28:42.113920 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 16 12:28:42.113973 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 16 12:28:42.123598 systemd[1]: Stopped target paths.target - Path Units.
Dec 16 12:28:42.132127 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 16 12:28:42.137478 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:28:42.143204 systemd[1]: Stopped target slices.target - Slice Units.
Dec 16 12:28:42.151161 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 16 12:28:42.161365 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 16 12:28:42.161413 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:28:42.170935 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 16 12:28:42.170967 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:28:42.180173 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 16 12:28:42.180223 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 16 12:28:42.189780 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 16 12:28:42.189807 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 16 12:28:42.199036 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 16 12:28:42.208418 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 16 12:28:42.218442 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 16 12:28:42.218940 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 16 12:28:42.219007 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 16 12:28:42.436591 kernel: hv_netvsc 000d3ac3-688e-000d-3ac3-688e000d3ac3 eth0: Data path switched from VF: enP28199s1
Dec 16 12:28:42.229022 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 16 12:28:42.229150 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 16 12:28:42.245139 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Dec 16 12:28:42.245309 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 16 12:28:42.245401 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 16 12:28:42.259808 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Dec 16 12:28:42.261065 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Dec 16 12:28:42.270685 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 16 12:28:42.270720 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:28:42.281838 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 16 12:28:42.296687 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 16 12:28:42.296745 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 16 12:28:42.307754 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 16 12:28:42.307798 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:28:42.323634 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 16 12:28:42.323672 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:28:42.328329 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 16 12:28:42.328358 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:28:42.341029 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:28:42.351048 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 16 12:28:42.351096 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 16 12:28:42.379153 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 16 12:28:42.379319 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:28:42.390809 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 16 12:28:42.390844 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:28:42.400528 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 16 12:28:42.400550 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:28:42.410710 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 16 12:28:42.410748 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:28:42.430954 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 16 12:28:42.431008 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:28:42.445925 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 16 12:28:42.445979 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:28:42.464368 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 16 12:28:42.474814 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Dec 16 12:28:42.474880 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:28:42.490868 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 16 12:28:42.490907 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:28:42.497012 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:28:42.497052 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:28:42.515236 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Dec 16 12:28:42.515278 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 16 12:28:42.515307 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 12:28:42.515543 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 16 12:28:42.515644 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 16 12:28:42.524296 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 16 12:28:42.524355 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 16 12:28:42.719704 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 16 12:28:42.719812 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 16 12:28:42.727882 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 16 12:28:42.735894 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 16 12:28:42.735945 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 16 12:28:42.745475 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 16 12:28:42.771070 systemd[1]: Switching root.
Dec 16 12:28:42.896270 systemd-journald[225]: Journal stopped
Dec 16 12:28:46.525128 systemd-journald[225]: Received SIGTERM from PID 1 (systemd).
Dec 16 12:28:46.525147 kernel: SELinux: policy capability network_peer_controls=1
Dec 16 12:28:46.525155 kernel: SELinux: policy capability open_perms=1
Dec 16 12:28:46.525161 kernel: SELinux: policy capability extended_socket_class=1
Dec 16 12:28:46.525167 kernel: SELinux: policy capability always_check_network=0
Dec 16 12:28:46.525172 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 16 12:28:46.525178 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 16 12:28:46.525184 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 16 12:28:46.525189 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 16 12:28:46.525194 kernel: SELinux: policy capability userspace_initial_context=0
Dec 16 12:28:46.525199 kernel: audit: type=1403 audit(1765888123.547:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 16 12:28:46.525206 systemd[1]: Successfully loaded SELinux policy in 157.839ms.
Dec 16 12:28:46.525213 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.412ms.
Dec 16 12:28:46.525220 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:28:46.525226 systemd[1]: Detected virtualization microsoft.
Dec 16 12:28:46.525233 systemd[1]: Detected architecture arm64.
Dec 16 12:28:46.525239 systemd[1]: Detected first boot.
Dec 16 12:28:46.525245 systemd[1]: Hostname set to .
Dec 16 12:28:46.525251 systemd[1]: Initializing machine ID from random generator.
Dec 16 12:28:46.525257 zram_generator::config[1307]: No configuration found.
Dec 16 12:28:46.525264 kernel: NET: Registered PF_VSOCK protocol family
Dec 16 12:28:46.525270 systemd[1]: Populated /etc with preset unit settings.
Dec 16 12:28:46.525276 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Dec 16 12:28:46.525283 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 16 12:28:46.525289 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 16 12:28:46.525295 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 16 12:28:46.525301 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 16 12:28:46.525307 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 16 12:28:46.525313 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 16 12:28:46.525319 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 16 12:28:46.525326 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 16 12:28:46.525332 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 16 12:28:46.525339 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 16 12:28:46.525345 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 16 12:28:46.525351 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:28:46.525357 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:28:46.525363 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 16 12:28:46.525369 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 16 12:28:46.525377 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 16 12:28:46.525383 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:28:46.525391 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 16 12:28:46.525398 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:28:46.525404 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:28:46.525410 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 16 12:28:46.525416 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 16 12:28:46.525422 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 16 12:28:46.525429 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 16 12:28:46.525436 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 16 12:28:46.525442 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 16 12:28:46.525448 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:28:46.525454 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:28:46.525460 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 16 12:28:46.525466 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 16 12:28:46.525474 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Dec 16 12:28:46.525480 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:28:46.525486 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:28:46.525492 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:28:46.525499 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 16 12:28:46.525505 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 16 12:28:46.525512 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 16 12:28:46.525518 systemd[1]: Mounting media.mount - External Media Directory...
Dec 16 12:28:46.525525 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 16 12:28:46.525532 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 16 12:28:46.525538 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 16 12:28:46.525544 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 16 12:28:46.525550 systemd[1]: Reached target machines.target - Containers.
Dec 16 12:28:46.525568 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 16 12:28:46.525576 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:28:46.525582 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:28:46.525588 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 16 12:28:46.525595 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:28:46.525601 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 12:28:46.525607 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:28:46.525613 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 16 12:28:46.525619 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:28:46.525627 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 16 12:28:46.525633 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 16 12:28:46.525640 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 16 12:28:46.525646 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 16 12:28:46.525652 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 16 12:28:46.525659 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:28:46.525665 kernel: fuse: init (API version 7.41)
Dec 16 12:28:46.525671 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:28:46.525678 kernel: loop: module loaded
Dec 16 12:28:46.525684 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:28:46.525691 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:28:46.525709 systemd-journald[1404]: Collecting audit messages is disabled.
Dec 16 12:28:46.525723 kernel: ACPI: bus type drm_connector registered
Dec 16 12:28:46.525730 systemd-journald[1404]: Journal started
Dec 16 12:28:46.525744 systemd-journald[1404]: Runtime Journal (/run/log/journal/a3b19f53ca9a4bf3af05b5b6db1e72a5) is 8M, max 78.3M, 70.3M free.
Dec 16 12:28:45.776826 systemd[1]: Queued start job for default target multi-user.target.
Dec 16 12:28:45.782998 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 16 12:28:45.783311 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 16 12:28:45.783610 systemd[1]: systemd-journald.service: Consumed 2.637s CPU time.
Dec 16 12:28:46.533097 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 16 12:28:46.551303 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Dec 16 12:28:46.569207 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:28:46.576343 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 16 12:28:46.576364 systemd[1]: Stopped verity-setup.service.
Dec 16 12:28:46.590738 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:28:46.591368 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 16 12:28:46.595841 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 16 12:28:46.600772 systemd[1]: Mounted media.mount - External Media Directory.
Dec 16 12:28:46.605054 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 16 12:28:46.609851 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 16 12:28:46.614347 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 16 12:28:46.618545 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 16 12:28:46.626045 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:28:46.631238 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 16 12:28:46.632599 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 16 12:28:46.638073 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:28:46.638192 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:28:46.642963 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:28:46.643097 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:28:46.647902 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:28:46.649606 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:28:46.655106 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 16 12:28:46.655223 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 16 12:28:46.660015 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:28:46.660122 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:28:46.665074 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:28:46.670019 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:28:46.675972 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 16 12:28:46.681944 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Dec 16 12:28:46.688582 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:28:46.702244 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:28:46.708533 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 16 12:28:46.718644 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 16 12:28:46.723889 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 16 12:28:46.723916 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 16 12:28:46.730027 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Dec 16 12:28:46.736433 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 16 12:28:46.741675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:28:46.750643 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 16 12:28:46.756688 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 16 12:28:46.762236 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:28:46.769679 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 16 12:28:46.774994 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:28:46.775721 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:28:46.781120 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 16 12:28:46.788186 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 16 12:28:46.794492 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 16 12:28:46.800070 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 16 12:28:46.804700 systemd-journald[1404]: Time spent on flushing to /var/log/journal/a3b19f53ca9a4bf3af05b5b6db1e72a5 is 24.532ms for 934 entries.
Dec 16 12:28:46.804700 systemd-journald[1404]: System Journal (/var/log/journal/a3b19f53ca9a4bf3af05b5b6db1e72a5) is 8M, max 2.6G, 2.6G free.
Dec 16 12:28:46.862615 systemd-journald[1404]: Received client request to flush runtime journal.
Dec 16 12:28:46.862664 kernel: loop0: detected capacity change from 0 to 27936
Dec 16 12:28:46.821803 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 16 12:28:46.831021 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 16 12:28:46.840438 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Dec 16 12:28:46.864281 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 16 12:28:46.889658 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:28:46.952032 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 16 12:28:46.953145 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Dec 16 12:28:46.976001 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 16 12:28:46.984707 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:28:47.076119 systemd-tmpfiles[1460]: ACLs are not supported, ignoring.
Dec 16 12:28:47.076132 systemd-tmpfiles[1460]: ACLs are not supported, ignoring.
Dec 16 12:28:47.078962 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:28:47.210587 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 16 12:28:47.257599 kernel: loop1: detected capacity change from 0 to 200800
Dec 16 12:28:47.375592 kernel: loop2: detected capacity change from 0 to 119840
Dec 16 12:28:47.422358 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 16 12:28:47.428935 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:28:47.454317 systemd-udevd[1467]: Using default interface naming scheme 'v255'.
Dec 16 12:28:47.613993 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:28:47.624735 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 16 12:28:47.690697 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 16 12:28:47.699971 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 16 12:28:47.740620 kernel: loop3: detected capacity change from 0 to 100632
Dec 16 12:28:47.781622 kernel: mousedev: PS/2 mouse device common for all mice
Dec 16 12:28:47.781718 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#281 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 16 12:28:47.782306 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 16 12:28:47.806807 kernel: hv_vmbus: registering driver hv_balloon
Dec 16 12:28:47.806876 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Dec 16 12:28:47.810574 kernel: hv_balloon: Memory hot add disabled on ARM64
Dec 16 12:28:47.815571 kernel: hv_vmbus: registering driver hyperv_fb
Dec 16 12:28:47.815624 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 16 12:28:47.819275 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 16 12:28:47.830144 kernel: Console: switching to colour dummy device 80x25
Dec 16 12:28:47.837395 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 12:28:47.908762 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:28:47.921812 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:28:47.921957 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:28:47.930969 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:28:47.970580 kernel: MACsec IEEE 802.1AE
Dec 16 12:28:48.006830 systemd-networkd[1485]: lo: Link UP
Dec 16 12:28:48.007100 systemd-networkd[1485]: lo: Gained carrier
Dec 16 12:28:48.008108 systemd-networkd[1485]: Enumeration completed
Dec 16 12:28:48.008707 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 16 12:28:48.008987 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:28:48.009065 systemd-networkd[1485]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:28:48.019191 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Dec 16 12:28:48.035186 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 16 12:28:48.048280 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Dec 16 12:28:48.054684 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 16 12:28:48.063650 kernel: mlx5_core 6e27:00:02.0 enP28199s1: Link up
Dec 16 12:28:48.085643 kernel: hv_netvsc 000d3ac3-688e-000d-3ac3-688e000d3ac3 eth0: Data path switched to VF: enP28199s1
Dec 16 12:28:48.086007 systemd-networkd[1485]: enP28199s1: Link UP
Dec 16 12:28:48.086177 systemd-networkd[1485]: eth0: Link UP
Dec 16 12:28:48.086180 systemd-networkd[1485]: eth0: Gained carrier
Dec 16 12:28:48.086200 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:28:48.089757 systemd-networkd[1485]: enP28199s1: Gained carrier
Dec 16 12:28:48.097621 systemd-networkd[1485]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16
Dec 16 12:28:48.101614 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 16 12:28:48.120074 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Dec 16 12:28:48.130591 kernel: loop4: detected capacity change from 0 to 27936
Dec 16 12:28:48.145024 kernel: loop5: detected capacity change from 0 to 200800
Dec 16 12:28:48.165577 kernel: loop6: detected capacity change from 0 to 119840
Dec 16 12:28:48.178601 kernel: loop7: detected capacity change from 0 to 100632
Dec 16 12:28:48.186233 (sd-merge)[1609]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Dec 16 12:28:48.186666 (sd-merge)[1609]: Merged extensions into '/usr'.
Dec 16 12:28:48.195863 systemd[1]: Reload requested from client PID 1446 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 16 12:28:48.195881 systemd[1]: Reloading...
Dec 16 12:28:48.233701 zram_generator::config[1636]: No configuration found.
Dec 16 12:28:48.452693 systemd[1]: Reloading finished in 256 ms.
Dec 16 12:28:48.482016 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:28:48.487363 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 16 12:28:48.504529 systemd[1]: Starting ensure-sysext.service...
Dec 16 12:28:48.510679 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:28:48.524664 systemd[1]: Reload requested from client PID 1699 ('systemctl') (unit ensure-sysext.service)...
Dec 16 12:28:48.524677 systemd[1]: Reloading...
Dec 16 12:28:48.553032 systemd-tmpfiles[1700]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Dec 16 12:28:48.553384 systemd-tmpfiles[1700]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Dec 16 12:28:48.553718 systemd-tmpfiles[1700]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 16 12:28:48.554899 systemd-tmpfiles[1700]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 16 12:28:48.555461 systemd-tmpfiles[1700]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 16 12:28:48.555734 systemd-tmpfiles[1700]: ACLs are not supported, ignoring.
Dec 16 12:28:48.555861 systemd-tmpfiles[1700]: ACLs are not supported, ignoring.
Dec 16 12:28:48.579137 systemd-tmpfiles[1700]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:28:48.580028 systemd-tmpfiles[1700]: Skipping /boot
Dec 16 12:28:48.588613 zram_generator::config[1731]: No configuration found.
Dec 16 12:28:48.589530 systemd-tmpfiles[1700]: Detected autofs mount point /boot during canonicalization of boot.
Dec 16 12:28:48.589652 systemd-tmpfiles[1700]: Skipping /boot
Dec 16 12:28:48.740783 systemd[1]: Reloading finished in 215 ms.
Dec 16 12:28:48.756670 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:28:48.773699 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:28:48.787686 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 16 12:28:48.793274 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 12:28:48.801230 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:28:48.806714 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 12:28:48.818658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:28:48.820819 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:28:48.830536 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 12:28:48.841936 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:28:48.851637 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:28:48.857171 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:28:48.857329 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:28:48.857511 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 12:28:48.866903 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:28:48.867133 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:28:48.874845 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:28:48.874979 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:28:48.880014 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:28:48.880150 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:28:48.886083 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:28:48.886213 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:28:48.895463 systemd[1]: Finished ensure-sysext.service.
Dec 16 12:28:48.902950 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:28:48.903060 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:28:48.904433 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 12:28:48.915164 systemd-resolved[1791]: Positive Trust Anchors:
Dec 16 12:28:48.915176 systemd-resolved[1791]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:28:48.915197 systemd-resolved[1791]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:28:48.919072 systemd-resolved[1791]: Using system hostname 'ci-4459.2.2-a-719f16aeb7'.
Dec 16 12:28:48.920313 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 12:28:48.926261 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:28:48.931239 systemd[1]: Reached target network.target - Network.
Dec 16 12:28:48.935587 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:28:49.020109 augenrules[1824]: No rules
Dec 16 12:28:49.021138 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:28:49.021357 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:28:49.625249 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 12:28:49.631043 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 12:28:49.787715 systemd-networkd[1485]: eth0: Gained IPv6LL
Dec 16 12:28:49.790637 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 12:28:49.796457 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 12:28:52.914149 ldconfig[1441]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 12:28:53.215313 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 12:28:53.222016 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 12:28:53.233843 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 12:28:53.239353 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:28:53.244377 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 12:28:53.249941 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 12:28:53.255525 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 12:28:53.260362 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 12:28:53.266365 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 12:28:53.273341 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 12:28:53.273372 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:28:53.278091 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:28:53.567672 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 12:28:53.574218 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 12:28:53.579903 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 12:28:53.585839 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 12:28:53.591304 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 12:28:53.597555 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 12:28:53.602915 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 12:28:53.608446 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 12:28:53.613389 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:28:53.617767 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:28:53.621728 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 12:28:53.621753 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 12:28:53.623990 systemd[1]: Starting chronyd.service - NTP client/server...
Dec 16 12:28:53.634664 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 12:28:53.645706 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 12:28:53.651753 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 12:28:53.661732 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 12:28:53.675660 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 12:28:53.681492 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 12:28:53.686379 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 12:28:53.689724 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Dec 16 12:28:53.694005 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Dec 16 12:28:53.700184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:28:53.705578 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 12:28:53.710592 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 12:28:53.717666 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 12:28:53.723306 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 12:28:53.729938 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 12:28:53.737681 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 12:28:53.742967 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 12:28:53.743615 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 12:28:53.745714 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 12:28:53.759248 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 12:28:53.876382 chronyd[1837]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Dec 16 12:28:53.877672 KVP[1847]: KVP starting; pid is:1847
Dec 16 12:28:53.881590 KVP[1847]: KVP LIC Version: 3.1
Dec 16 12:28:53.883573 kernel: hv_utils: KVP IC version 4.0
Dec 16 12:28:53.967672 jq[1845]: false
Dec 16 12:28:53.967761 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 12:28:53.975634 extend-filesystems[1846]: Found /dev/sda6
Dec 16 12:28:53.975291 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 12:28:53.981667 jq[1863]: true
Dec 16 12:28:53.975456 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 12:28:53.987927 jq[1872]: true
Dec 16 12:28:54.002320 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 12:28:54.003319 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 12:28:54.061576 chronyd[1837]: Timezone right/UTC failed leap second check, ignoring
Dec 16 12:28:54.062046 chronyd[1837]: Loaded seccomp filter (level 2)
Dec 16 12:28:54.062192 systemd[1]: Started chronyd.service - NTP client/server.
Dec 16 12:28:54.311972 extend-filesystems[1846]: Found /dev/sda9
Dec 16 12:28:54.316619 extend-filesystems[1846]: Checking size of /dev/sda9
Dec 16 12:28:54.334276 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:28:54.341880 (kubelet)[1903]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:28:54.476019 systemd-logind[1858]: New seat seat0.
Dec 16 12:28:54.477615 systemd-logind[1858]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 16 12:28:54.478004 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 12:28:54.483185 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 12:28:54.523833 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 12:28:54.524015 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 12:28:54.529358 tar[1866]: linux-arm64/LICENSE
Dec 16 12:28:54.530708 tar[1866]: linux-arm64/helm
Dec 16 12:28:54.531861 (ntainerd)[1921]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 16 12:28:55.102322 kubelet[1903]: E1216 12:28:54.679154 1903 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:28:54.576218 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 12:28:55.102588 extend-filesystems[1846]: Old size kept for /dev/sda9
Dec 16 12:28:55.149600 update_engine[1861]: I20251216 12:28:54.575721 1861 main.cc:92] Flatcar Update Engine starting
Dec 16 12:28:54.576395 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 12:28:54.681155 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:28:54.681259 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:28:54.681516 systemd[1]: kubelet.service: Consumed 496ms CPU time, 247.3M memory peak.
Dec 16 12:28:55.163986 dbus-daemon[1840]: [system] SELinux support is enabled
Dec 16 12:28:55.164339 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 12:28:55.169589 update_engine[1861]: I20251216 12:28:55.169526 1861 update_check_scheduler.cc:74] Next update check in 9m46s
Dec 16 12:28:55.174163 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 12:28:55.175612 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 12:28:55.176717 dbus-daemon[1840]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 16 12:28:55.183815 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 12:28:55.183832 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 12:28:55.191863 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 12:28:55.201774 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 12:28:55.611792 coreos-metadata[1839]: Dec 16 12:28:55.566 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 16 12:28:55.611792 coreos-metadata[1839]: Dec 16 12:28:55.570 INFO Fetch successful
Dec 16 12:28:55.611792 coreos-metadata[1839]: Dec 16 12:28:55.570 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Dec 16 12:28:55.611792 coreos-metadata[1839]: Dec 16 12:28:55.574 INFO Fetch successful
Dec 16 12:28:55.611792 coreos-metadata[1839]: Dec 16 12:28:55.574 INFO Fetching http://168.63.129.16/machine/fc513e5d-4389-4b7e-a052-fc746d1725df/c3c94a1c%2Db96f%2D42dc%2D9811%2Dcab1616b4a48.%5Fci%2D4459.2.2%2Da%2D719f16aeb7?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Dec 16 12:28:55.611792 coreos-metadata[1839]: Dec 16 12:28:55.577 INFO Fetch successful
Dec 16 12:28:55.611792 coreos-metadata[1839]: Dec 16 12:28:55.577 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Dec 16 12:28:55.611792 coreos-metadata[1839]: Dec 16 12:28:55.586 INFO Fetch successful
Dec 16 12:28:55.603600 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 16 12:28:55.609226 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 12:28:55.964633 tar[1866]: linux-arm64/README.md
Dec 16 12:28:55.980166 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 12:28:56.465712 locksmithd[1987]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 12:28:56.605994 bash[1892]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 12:28:56.607574 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 12:28:56.614010 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 16 12:28:56.701501 sshd_keygen[1862]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 16 12:28:56.716954 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 12:28:56.722549 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 12:28:56.733711 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Dec 16 12:28:56.740173 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 12:28:56.745740 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 12:28:56.754798 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 16 12:28:56.761683 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Dec 16 12:28:56.769774 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 16 12:28:56.776705 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 16 12:28:56.781886 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Dec 16 12:28:56.786618 systemd[1]: Reached target getty.target - Login Prompts.
Dec 16 12:28:56.975141 containerd[1921]: time="2025-12-16T12:28:56Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 12:28:56.976525 containerd[1921]: time="2025-12-16T12:28:56.976491480Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 16 12:28:56.983578 containerd[1921]: time="2025-12-16T12:28:56.983472584Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.64µs"
Dec 16 12:28:56.983578 containerd[1921]: time="2025-12-16T12:28:56.983500256Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 12:28:56.983578 containerd[1921]: time="2025-12-16T12:28:56.983514624Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 12:28:56.983812 containerd[1921]: time="2025-12-16T12:28:56.983789448Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 12:28:56.983880 containerd[1921]: time="2025-12-16T12:28:56.983867296Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 12:28:56.983944 containerd[1921]: time="2025-12-16T12:28:56.983931424Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 12:28:56.984045 containerd[1921]: time="2025-12-16T12:28:56.984029864Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 12:28:56.984092 containerd[1921]: time="2025-12-16T12:28:56.984079216Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 12:28:56.984328 containerd[1921]: time="2025-12-16T12:28:56.984304048Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 12:28:56.984397 containerd[1921]: time="2025-12-16T12:28:56.984383712Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 12:28:56.984441 containerd[1921]: time="2025-12-16T12:28:56.984429936Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 12:28:56.984476 containerd[1921]: time="2025-12-16T12:28:56.984466832Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 12:28:56.984637 containerd[1921]: time="2025-12-16T12:28:56.984618632Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 12:28:56.984928 containerd[1921]: time="2025-12-16T12:28:56.984904048Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 12:28:56.985017 containerd[1921]: time="2025-12-16T12:28:56.985004232Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 12:28:56.985060 containerd[1921]: time="2025-12-16T12:28:56.985048080Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 12:28:56.985127 containerd[1921]: time="2025-12-16T12:28:56.985114664Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 12:28:56.985324 containerd[1921]: time="2025-12-16T12:28:56.985306120Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 12:28:56.985453 containerd[1921]: time="2025-12-16T12:28:56.985439144Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 12:28:57.554240 containerd[1921]: time="2025-12-16T12:28:57.554167800Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 16 12:28:57.554410 containerd[1921]: time="2025-12-16T12:28:57.554262040Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 16 12:28:57.554410 containerd[1921]: time="2025-12-16T12:28:57.554277744Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 16 12:28:57.554410 containerd[1921]: time="2025-12-16T12:28:57.554287968Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 16 12:28:57.554410 containerd[1921]: time="2025-12-16T12:28:57.554296480Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 16 12:28:57.554410 containerd[1921]: time="2025-12-16T12:28:57.554302904Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 16 12:28:57.554410 containerd[1921]: time="2025-12-16T12:28:57.554314952Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 16 12:28:57.554410 containerd[1921]: time="2025-12-16T12:28:57.554322472Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 16 12:28:57.554410 containerd[1921]: time="2025-12-16T12:28:57.554330184Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 16 12:28:57.554410 containerd[1921]: time="2025-12-16T12:28:57.554336528Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 16 12:28:57.554410 containerd[1921]: time="2025-12-16T12:28:57.554342864Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 16 12:28:57.554410 containerd[1921]: time="2025-12-16T12:28:57.554351640Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 16 12:28:57.554646 containerd[1921]: time="2025-12-16T12:28:57.554500336Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 16 12:28:57.554646 containerd[1921]: time="2025-12-16T12:28:57.554519728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 16 12:28:57.554646 containerd[1921]: time="2025-12-16T12:28:57.554532336Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 16 12:28:57.554646 containerd[1921]: time="2025-12-16T12:28:57.554543032Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 16 12:28:57.554646 containerd[1921]: time="2025-12-16T12:28:57.554549728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 16 12:28:57.554646 containerd[1921]: time="2025-12-16T12:28:57.554571896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 16 12:28:57.554646 containerd[1921]: time="2025-12-16T12:28:57.554580400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 16 12:28:57.554646 containerd[1921]: time="2025-12-16T12:28:57.554586936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 16 12:28:57.554646 containerd[1921]: time="2025-12-16T12:28:57.554593920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 16 12:28:57.554646 containerd[1921]: time="2025-12-16T12:28:57.554600160Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 16 12:28:57.554646 containerd[1921]: time="2025-12-16T12:28:57.554606440Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 16 12:28:57.554906 containerd[1921]: time="2025-12-16T12:28:57.554677768Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 16 12:28:57.554906 containerd[1921]: time="2025-12-16T12:28:57.554695528Z" level=info msg="Start snapshots syncer"
Dec 16 12:28:57.554906 containerd[1921]: time="2025-12-16T12:28:57.554715064Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 16 12:28:57.555065 containerd[1921]: time="2025-12-16T12:28:57.555013112Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 16 12:28:57.555133 containerd[1921]: time="2025-12-16T12:28:57.555063928Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 16 12:28:57.555147 containerd[1921]: time="2025-12-16T12:28:57.555139312Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 16 12:28:57.555358 containerd[1921]: time="2025-12-16T12:28:57.555250016Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 16 12:28:57.555358 containerd[1921]: time="2025-12-16T12:28:57.555281248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 16 12:28:57.555358 containerd[1921]: time="2025-12-16T12:28:57.555293368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 16 12:28:57.555358 containerd[1921]: time="2025-12-16T12:28:57.555305088Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 16 12:28:57.555358 containerd[1921]: time="2025-12-16T12:28:57.555317192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 16 12:28:57.555358 containerd[1921]: time="2025-12-16T12:28:57.555329408Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 16 12:28:57.555358 containerd[1921]: time="2025-12-16T12:28:57.555338936Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 16 12:28:57.555358 containerd[1921]: time="2025-12-16T12:28:57.555362432Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 16 12:28:57.555487 containerd[1921]: time="2025-12-16T12:28:57.555374504Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 16 12:28:57.555487 containerd[1921]: time="2025-12-16T12:28:57.555385128Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 16 12:28:57.555487 containerd[1921]: time="2025-12-16T12:28:57.555411944Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 12:28:57.555487 containerd[1921]: time="2025-12-16T12:28:57.555426504Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 12:28:57.555487 containerd[1921]: time="2025-12-16T12:28:57.555434272Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 12:28:57.555487 containerd[1921]: time="2025-12-16T12:28:57.555442440Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 12:28:57.555487 containerd[1921]: time="2025-12-16T12:28:57.555447512Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 16 12:28:57.555487 containerd[1921]: time="2025-12-16T12:28:57.555455456Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 16 12:28:57.555487 containerd[1921]: time="2025-12-16T12:28:57.555465272Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 16 12:28:57.555487 containerd[1921]: time="2025-12-16T12:28:57.555479584Z" level=info msg="runtime interface created"
Dec 16 12:28:57.555487 containerd[1921]: time="2025-12-16T12:28:57.555485248Z" level=info msg="created NRI interface"
Dec 16 12:28:57.555635 containerd[1921]: time="2025-12-16T12:28:57.555491584Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 16 12:28:57.555635 containerd[1921]: time="2025-12-16T12:28:57.555503072Z" level=info msg="Connect containerd service"
Dec 16 12:28:57.555635 containerd[1921]: time="2025-12-16T12:28:57.555521304Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 16 12:28:57.556550 containerd[1921]: time="2025-12-16T12:28:57.556517016Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 12:28:58.885541 containerd[1921]: time="2025-12-16T12:28:58.880661480Z" level=info msg="Start subscribing containerd event"
Dec 16 12:28:58.885541 containerd[1921]: time="2025-12-16T12:28:58.880742488Z" level=info msg="Start recovering state"
Dec 16 12:28:58.885541 containerd[1921]: time="2025-12-16T12:28:58.880847056Z" level=info msg="Start event monitor"
Dec 16 12:28:58.885541 containerd[1921]: time="2025-12-16T12:28:58.880860024Z" level=info msg="Start cni network conf syncer for default"
Dec 16 12:28:58.885541 containerd[1921]: time="2025-12-16T12:28:58.880871992Z" level=info msg="Start streaming server"
Dec 16 12:28:58.885541 containerd[1921]: time="2025-12-16T12:28:58.880881040Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 16 12:28:58.885541 containerd[1921]: time="2025-12-16T12:28:58.880887208Z" level=info msg="runtime interface starting up..."
Dec 16 12:28:58.885541 containerd[1921]: time="2025-12-16T12:28:58.880890848Z" level=info msg="starting plugins..."
Dec 16 12:28:58.885541 containerd[1921]: time="2025-12-16T12:28:58.880893736Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 16 12:28:58.885541 containerd[1921]: time="2025-12-16T12:28:58.880937776Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 16 12:28:58.885541 containerd[1921]: time="2025-12-16T12:28:58.880903288Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Dec 16 12:28:58.885541 containerd[1921]: time="2025-12-16T12:28:58.881043728Z" level=info msg="containerd successfully booted in 1.906277s"
Dec 16 12:28:58.881207 systemd[1]: Started containerd.service - containerd container runtime.
Dec 16 12:28:58.886572 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 16 12:28:58.894624 systemd[1]: Startup finished in 1.641s (kernel) + 11.512s (initrd) + 15.503s (userspace) = 28.658s.
Dec 16 12:29:02.557725 login[2027]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Dec 16 12:29:03.099525 login[2028]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:29:03.110189 systemd-logind[1858]: New session 2 of user core.
Dec 16 12:29:03.111624 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 16 12:29:03.112745 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 16 12:29:03.130952 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 16 12:29:03.132709 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 16 12:29:03.143126 (systemd)[2051]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 16 12:29:03.144895 systemd-logind[1858]: New session c1 of user core.
Dec 16 12:29:03.382898 systemd[2051]: Queued start job for default target default.target.
Dec 16 12:29:03.392658 systemd[2051]: Created slice app.slice - User Application Slice.
Dec 16 12:29:03.392684 systemd[2051]: Reached target paths.target - Paths.
Dec 16 12:29:03.392714 systemd[2051]: Reached target timers.target - Timers.
Dec 16 12:29:03.393677 systemd[2051]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 16 12:29:03.401411 systemd[2051]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 16 12:29:03.401454 systemd[2051]: Reached target sockets.target - Sockets.
Dec 16 12:29:03.401485 systemd[2051]: Reached target basic.target - Basic System.
Dec 16 12:29:03.401506 systemd[2051]: Reached target default.target - Main User Target.
Dec 16 12:29:03.401527 systemd[2051]: Startup finished in 252ms.
Dec 16 12:29:03.401788 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 16 12:29:03.403766 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 16 12:29:03.558117 login[2027]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:29:03.562329 systemd-logind[1858]: New session 1 of user core.
Dec 16 12:29:03.570710 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 16 12:29:04.775608 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 16 12:29:04.776820 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:29:06.345566 waagent[2024]: 2025-12-16T12:29:06.345480Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Dec 16 12:29:06.349712 waagent[2024]: 2025-12-16T12:29:06.349672Z INFO Daemon Daemon OS: flatcar 4459.2.2
Dec 16 12:29:06.352967 waagent[2024]: 2025-12-16T12:29:06.352937Z INFO Daemon Daemon Python: 3.11.13
Dec 16 12:29:06.356469 waagent[2024]: 2025-12-16T12:29:06.356429Z INFO Daemon Daemon Run daemon
Dec 16 12:29:06.359551 waagent[2024]: 2025-12-16T12:29:06.359516Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.2'
Dec 16 12:29:06.366236 waagent[2024]: 2025-12-16T12:29:06.366196Z INFO Daemon Daemon Using waagent for provisioning
Dec 16 12:29:06.370129 waagent[2024]: 2025-12-16T12:29:06.370091Z INFO Daemon Daemon Activate resource disk
Dec 16 12:29:06.373616 waagent[2024]: 2025-12-16T12:29:06.373583Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Dec 16 12:29:06.381525 waagent[2024]: 2025-12-16T12:29:06.381489Z INFO Daemon Daemon Found device: None
Dec 16 12:29:06.384775 waagent[2024]: 2025-12-16T12:29:06.384745Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Dec 16 12:29:06.390969 waagent[2024]: 2025-12-16T12:29:06.390939Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Dec 16 12:29:06.398918 waagent[2024]: 2025-12-16T12:29:06.398876Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Dec 16 12:29:06.403058 waagent[2024]: 2025-12-16T12:29:06.403029Z INFO Daemon Daemon Running default provisioning handler
Dec 16 12:29:06.412230 waagent[2024]: 2025-12-16T12:29:06.412176Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Dec 16 12:29:06.422031 waagent[2024]: 2025-12-16T12:29:06.421989Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Dec 16 12:29:06.428965 waagent[2024]: 2025-12-16T12:29:06.428936Z INFO Daemon Daemon cloud-init is enabled: False
Dec 16 12:29:06.432561 waagent[2024]: 2025-12-16T12:29:06.432529Z INFO Daemon Daemon Copying ovf-env.xml
Dec 16 12:29:07.215845 waagent[2024]: 2025-12-16T12:29:07.215765Z INFO Daemon Daemon Successfully mounted dvd
Dec 16 12:29:07.323387 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Dec 16 12:29:07.325827 waagent[2024]: 2025-12-16T12:29:07.325763Z INFO Daemon Daemon Detect protocol endpoint
Dec 16 12:29:07.329593 waagent[2024]: 2025-12-16T12:29:07.329544Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Dec 16 12:29:07.334011 waagent[2024]: 2025-12-16T12:29:07.333978Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Dec 16 12:29:07.339130 waagent[2024]: 2025-12-16T12:29:07.339103Z INFO Daemon Daemon Test for route to 168.63.129.16
Dec 16 12:29:07.343221 waagent[2024]: 2025-12-16T12:29:07.343184Z INFO Daemon Daemon Route to 168.63.129.16 exists
Dec 16 12:29:07.347088 waagent[2024]: 2025-12-16T12:29:07.347060Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Dec 16 12:29:07.468364 waagent[2024]: 2025-12-16T12:29:07.468252Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Dec 16 12:29:07.473391 waagent[2024]: 2025-12-16T12:29:07.473367Z INFO Daemon Daemon Wire protocol version:2012-11-30
Dec 16 12:29:07.477154 waagent[2024]: 2025-12-16T12:29:07.477128Z INFO Daemon Daemon Server preferred version:2015-04-05
Dec 16 12:29:07.693601 waagent[2024]: 2025-12-16T12:29:07.693014Z INFO Daemon Daemon Initializing goal state during protocol detection
Dec 16 12:29:07.698031 waagent[2024]: 2025-12-16T12:29:07.697985Z INFO Daemon Daemon Forcing an update of the goal state.
Dec 16 12:29:07.705598 waagent[2024]: 2025-12-16T12:29:07.705542Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Dec 16 12:29:07.720888 waagent[2024]: 2025-12-16T12:29:07.720817Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177
Dec 16 12:29:07.725414 waagent[2024]: 2025-12-16T12:29:07.725379Z INFO Daemon
Dec 16 12:29:07.727623 waagent[2024]: 2025-12-16T12:29:07.727590Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: f481026d-13b6-4f72-8f8f-4d603d9c6c3c eTag: 15387492131085250536 source: Fabric]
Dec 16 12:29:07.736138 waagent[2024]: 2025-12-16T12:29:07.736105Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Dec 16 12:29:07.741090 waagent[2024]: 2025-12-16T12:29:07.741051Z INFO Daemon
Dec 16 12:29:07.743192 waagent[2024]: 2025-12-16T12:29:07.743163Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Dec 16 12:29:07.751828 waagent[2024]: 2025-12-16T12:29:07.751797Z INFO Daemon Daemon Downloading artifacts profile blob
Dec 16 12:29:07.807525 waagent[2024]: 2025-12-16T12:29:07.807466Z INFO Daemon Downloaded certificate {'thumbprint': 'A21A605E0634B2F513F0C30ADB5CA2673EF17791', 'hasPrivateKey': True}
Dec 16 12:29:07.814911 waagent[2024]: 2025-12-16T12:29:07.814872Z INFO Daemon Fetch goal state completed
Dec 16 12:29:07.824736 waagent[2024]: 2025-12-16T12:29:07.824687Z INFO Daemon Daemon Starting provisioning
Dec 16 12:29:07.828553 waagent[2024]: 2025-12-16T12:29:07.828518Z INFO Daemon Daemon Handle ovf-env.xml.
Dec 16 12:29:07.832197 waagent[2024]: 2025-12-16T12:29:07.832169Z INFO Daemon Daemon Set hostname [ci-4459.2.2-a-719f16aeb7]
Dec 16 12:29:08.509699 waagent[2024]: 2025-12-16T12:29:08.509572Z INFO Daemon Daemon Publish hostname [ci-4459.2.2-a-719f16aeb7]
Dec 16 12:29:09.306044 waagent[2024]: 2025-12-16T12:29:09.305960Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Dec 16 12:29:09.311074 waagent[2024]: 2025-12-16T12:29:09.311020Z INFO Daemon Daemon Primary interface is [eth0]
Dec 16 12:29:09.321071 systemd-networkd[1485]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 16 12:29:09.321078 systemd-networkd[1485]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 16 12:29:09.321109 systemd-networkd[1485]: eth0: DHCP lease lost
Dec 16 12:29:09.321999 waagent[2024]: 2025-12-16T12:29:09.321955Z INFO Daemon Daemon Create user account if not exists
Dec 16 12:29:09.326155 waagent[2024]: 2025-12-16T12:29:09.326118Z INFO Daemon Daemon User core already exists, skip useradd
Dec 16 12:29:09.330415 waagent[2024]: 2025-12-16T12:29:09.330370Z INFO Daemon Daemon Configure sudoer
Dec 16 12:29:09.344589 systemd-networkd[1485]: eth0: DHCPv4 address 10.200.20.4/24, gateway 10.200.20.1 acquired from 168.63.129.16
Dec 16 12:29:11.233718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:29:11.236336 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:29:11.251485 waagent[2024]: 2025-12-16T12:29:11.251405Z INFO Daemon Daemon Configure sshd
Dec 16 12:29:11.269664 kubelet[2107]: E1216 12:29:11.269616 2107 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:29:11.272241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:29:11.272341 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:29:11.272815 systemd[1]: kubelet.service: Consumed 113ms CPU time, 106.6M memory peak.
Dec 16 12:29:11.411711 waagent[2024]: 2025-12-16T12:29:11.411639Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Dec 16 12:29:11.421513 waagent[2024]: 2025-12-16T12:29:11.421463Z INFO Daemon Daemon Deploy ssh public key.
Dec 16 12:29:11.515223 waagent[2024]: 2025-12-16T12:29:11.515111Z INFO Daemon Daemon Provisioning complete
Dec 16 12:29:11.528736 waagent[2024]: 2025-12-16T12:29:11.528696Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Dec 16 12:29:11.533358 waagent[2024]: 2025-12-16T12:29:11.533323Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Dec 16 12:29:11.540902 waagent[2024]: 2025-12-16T12:29:11.540868Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
Dec 16 12:29:11.644491 waagent[2117]: 2025-12-16T12:29:11.644423Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
Dec 16 12:29:11.645615 waagent[2117]: 2025-12-16T12:29:11.644922Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.2
Dec 16 12:29:11.645615 waagent[2117]: 2025-12-16T12:29:11.644981Z INFO ExtHandler ExtHandler Python: 3.11.13
Dec 16 12:29:11.645615 waagent[2117]: 2025-12-16T12:29:11.645017Z INFO ExtHandler ExtHandler CPU Arch: aarch64
Dec 16 12:29:11.684765 waagent[2117]: 2025-12-16T12:29:11.684695Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
Dec 16 12:29:11.684908 waagent[2117]: 2025-12-16T12:29:11.684879Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 16 12:29:11.684948 waagent[2117]: 2025-12-16T12:29:11.684930Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 16 12:29:11.690858 waagent[2117]: 2025-12-16T12:29:11.690810Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Dec 16 12:29:11.696101 waagent[2117]: 2025-12-16T12:29:11.696071Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177
Dec 16 12:29:11.696479 waagent[2117]: 2025-12-16T12:29:11.696444Z INFO ExtHandler
Dec 16 12:29:11.696534 waagent[2117]: 2025-12-16T12:29:11.696516Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1c9c8ea9-d2ff-4613-8b47-e2e7b8aba689 eTag: 15387492131085250536 source: Fabric]
Dec 16 12:29:11.696794 waagent[2117]: 2025-12-16T12:29:11.696764Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Dec 16 12:29:11.697194 waagent[2117]: 2025-12-16T12:29:11.697163Z INFO ExtHandler
Dec 16 12:29:11.697235 waagent[2117]: 2025-12-16T12:29:11.697218Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Dec 16 12:29:11.700912 waagent[2117]: 2025-12-16T12:29:11.700879Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Dec 16 12:29:11.752731 waagent[2117]: 2025-12-16T12:29:11.752663Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A21A605E0634B2F513F0C30ADB5CA2673EF17791', 'hasPrivateKey': True}
Dec 16 12:29:11.753121 waagent[2117]: 2025-12-16T12:29:11.753084Z INFO ExtHandler Fetch goal state completed
Dec 16 12:29:11.766509 waagent[2117]: 2025-12-16T12:29:11.766413Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025)
Dec 16 12:29:11.769841 waagent[2117]: 2025-12-16T12:29:11.769791Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2117
Dec 16 12:29:11.769941 waagent[2117]: 2025-12-16T12:29:11.769915Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Dec 16 12:29:11.770175 waagent[2117]: 2025-12-16T12:29:11.770148Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
Dec 16 12:29:11.771292 waagent[2117]: 2025-12-16T12:29:11.771254Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk']
Dec 16 12:29:11.771662 waagent[2117]: 2025-12-16T12:29:11.771627Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Dec 16 12:29:11.771794 waagent[2117]: 2025-12-16T12:29:11.771769Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Dec 16 12:29:11.772217 waagent[2117]: 2025-12-16T12:29:11.772184Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Dec 16 12:29:11.803324 waagent[2117]: 2025-12-16T12:29:11.803286Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Dec 16 12:29:11.803489 waagent[2117]: 2025-12-16T12:29:11.803459Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Dec 16 12:29:11.808034 waagent[2117]: 2025-12-16T12:29:11.808008Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Dec 16 12:29:11.812896 systemd[1]: Reload requested from client PID 2132 ('systemctl') (unit waagent.service)...
Dec 16 12:29:11.812910 systemd[1]: Reloading...
Dec 16 12:29:11.869698 zram_generator::config[2171]: No configuration found.
Dec 16 12:29:12.031354 systemd[1]: Reloading finished in 218 ms.
Dec 16 12:29:12.049251 waagent[2117]: 2025-12-16T12:29:12.048608Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Dec 16 12:29:12.049251 waagent[2117]: 2025-12-16T12:29:12.048744Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Dec 16 12:29:12.219765 waagent[2117]: 2025-12-16T12:29:12.219698Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Dec 16 12:29:12.220196 waagent[2117]: 2025-12-16T12:29:12.220158Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Dec 16 12:29:12.220988 waagent[2117]: 2025-12-16T12:29:12.220946Z INFO ExtHandler ExtHandler Starting env monitor service.
Dec 16 12:29:12.221095 waagent[2117]: 2025-12-16T12:29:12.221056Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 16 12:29:12.221158 waagent[2117]: 2025-12-16T12:29:12.221136Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 16 12:29:12.221325 waagent[2117]: 2025-12-16T12:29:12.221296Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Dec 16 12:29:12.221712 waagent[2117]: 2025-12-16T12:29:12.221672Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Dec 16 12:29:12.221712 waagent[2117]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Dec 16 12:29:12.221712 waagent[2117]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Dec 16 12:29:12.221712 waagent[2117]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Dec 16 12:29:12.221712 waagent[2117]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Dec 16 12:29:12.221712 waagent[2117]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 16 12:29:12.221712 waagent[2117]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Dec 16 12:29:12.222118 waagent[2117]: 2025-12-16T12:29:12.222047Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Dec 16 12:29:12.222274 waagent[2117]: 2025-12-16T12:29:12.222156Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 16 12:29:12.222274 waagent[2117]: 2025-12-16T12:29:12.222220Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 16 12:29:12.222324 waagent[2117]: 2025-12-16T12:29:12.222306Z INFO EnvHandler ExtHandler Configure routes
Dec 16 12:29:12.222363 waagent[2117]: 2025-12-16T12:29:12.222343Z INFO EnvHandler ExtHandler Gateway:None
Dec 16 12:29:12.222388 waagent[2117]: 2025-12-16T12:29:12.222375Z INFO EnvHandler ExtHandler Routes:None
Dec 16 12:29:12.222833 waagent[2117]: 2025-12-16T12:29:12.222731Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Dec 16 12:29:12.222833 waagent[2117]: 2025-12-16T12:29:12.222773Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Dec 16 12:29:12.223268 waagent[2117]: 2025-12-16T12:29:12.223236Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Dec 16 12:29:12.223387 waagent[2117]: 2025-12-16T12:29:12.223343Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Dec 16 12:29:12.223511 waagent[2117]: 2025-12-16T12:29:12.223486Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Dec 16 12:29:12.232594 waagent[2117]: 2025-12-16T12:29:12.231449Z INFO ExtHandler ExtHandler
Dec 16 12:29:12.232594 waagent[2117]: 2025-12-16T12:29:12.231512Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 3823a864-b2db-4e34-afd6-2e3d01de8392 correlation 7e3d5ebc-33b1-4cdd-a9e4-312c5d2ca8f1 created: 2025-12-16T12:28:01.687325Z]
Dec 16 12:29:12.232594 waagent[2117]: 2025-12-16T12:29:12.231789Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Dec 16 12:29:12.232594 waagent[2117]: 2025-12-16T12:29:12.232185Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Dec 16 12:29:12.254891 waagent[2117]: 2025-12-16T12:29:12.254841Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Dec 16 12:29:12.254891 waagent[2117]: Try `iptables -h' or 'iptables --help' for more information.)
Dec 16 12:29:12.255179 waagent[2117]: 2025-12-16T12:29:12.255145Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F2C95122-87F4-49BD-83C3-A51EE1F1C3FE;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Dec 16 12:29:12.286545 waagent[2117]: 2025-12-16T12:29:12.286173Z INFO MonitorHandler ExtHandler Network interfaces:
Dec 16 12:29:12.286545 waagent[2117]: Executing ['ip', '-a', '-o', 'link']:
Dec 16 12:29:12.286545 waagent[2117]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Dec 16 12:29:12.286545 waagent[2117]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c3:68:8e brd ff:ff:ff:ff:ff:ff
Dec 16 12:29:12.286545 waagent[2117]: 3: enP28199s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:c3:68:8e brd ff:ff:ff:ff:ff:ff\ altname enP28199p0s2
Dec 16 12:29:12.286545 waagent[2117]: Executing ['ip', '-4', '-a', '-o', 'address']:
Dec 16 12:29:12.286545 waagent[2117]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Dec 16 12:29:12.286545 waagent[2117]: 2: eth0 inet 10.200.20.4/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Dec 16 12:29:12.286545 waagent[2117]: Executing ['ip', '-6', '-a', '-o', 'address']:
Dec 16 12:29:12.286545 waagent[2117]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Dec 16 12:29:12.286545 waagent[2117]: 2: eth0 inet6 fe80::20d:3aff:fec3:688e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Dec 16 12:29:12.312608 waagent[2117]: 2025-12-16T12:29:12.312538Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Dec 16 12:29:12.312608 waagent[2117]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 16 12:29:12.312608 waagent[2117]: pkts bytes target prot opt in out source destination
Dec 16 12:29:12.312608 waagent[2117]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Dec 16 12:29:12.312608 waagent[2117]: pkts bytes target prot opt in out source destination
Dec 16 12:29:12.312608 waagent[2117]: Chain OUTPUT (policy ACCEPT 1 packets, 52 bytes)
Dec 16 12:29:12.312608 waagent[2117]: pkts bytes target prot opt in out source destination
Dec 16 12:29:12.312608 waagent[2117]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Dec 16 12:29:12.312608 waagent[2117]: 4 416 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Dec 16 12:29:12.312608 waagent[2117]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Dec 16 12:29:12.315486 waagent[2117]: 2025-12-16T12:29:12.315438Z INFO EnvHandler ExtHandler Current Firewall rules:
Dec 16 12:29:12.315486 waagent[2117]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Dec 16 12:29:12.315486 waagent[2117]: pkts bytes target prot opt in out source destination
Dec 16 12:29:12.315486 waagent[2117]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Dec 16 12:29:12.315486 waagent[2117]: pkts bytes target prot opt in out source destination
Dec 16 12:29:12.315486 waagent[2117]: Chain OUTPUT (policy ACCEPT 1 packets, 52 bytes)
Dec 16 12:29:12.315486 waagent[2117]: pkts bytes target prot opt in out source destination
Dec 16 12:29:12.315486 waagent[2117]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Dec 16 12:29:12.315486 waagent[2117]: 11 1356 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Dec 16 12:29:12.315486 waagent[2117]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Dec 16 12:29:12.315695 waagent[2117]: 2025-12-16T12:29:12.315667Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Dec 16 12:29:17.858954 chronyd[1837]: Selected source PHC0
Dec 16 12:29:17.888005 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 16 12:29:17.889198 systemd[1]: Started sshd@0-10.200.20.4:22-10.200.16.10:36626.service - OpenSSH per-connection server daemon (10.200.16.10:36626).
Dec 16 12:29:18.469009 sshd[2262]: Accepted publickey for core from 10.200.16.10 port 36626 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:29:18.470090 sshd-session[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:29:18.473773 systemd-logind[1858]: New session 3 of user core.
Dec 16 12:29:18.489682 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 16 12:29:18.908192 systemd[1]: Started sshd@1-10.200.20.4:22-10.200.16.10:36640.service - OpenSSH per-connection server daemon (10.200.16.10:36640).
Dec 16 12:29:19.395873 sshd[2268]: Accepted publickey for core from 10.200.16.10 port 36640 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:29:19.396915 sshd-session[2268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:29:19.400465 systemd-logind[1858]: New session 4 of user core.
Dec 16 12:29:19.408857 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 16 12:29:19.745825 sshd[2271]: Connection closed by 10.200.16.10 port 36640
Dec 16 12:29:19.746332 sshd-session[2268]: pam_unix(sshd:session): session closed for user core
Dec 16 12:29:19.749486 systemd[1]: sshd@1-10.200.20.4:22-10.200.16.10:36640.service: Deactivated successfully.
Dec 16 12:29:19.750892 systemd[1]: session-4.scope: Deactivated successfully.
Dec 16 12:29:19.751508 systemd-logind[1858]: Session 4 logged out. Waiting for processes to exit.
Dec 16 12:29:19.752983 systemd-logind[1858]: Removed session 4.
Dec 16 12:29:19.826076 systemd[1]: Started sshd@2-10.200.20.4:22-10.200.16.10:36648.service - OpenSSH per-connection server daemon (10.200.16.10:36648).
Dec 16 12:29:20.274017 sshd[2277]: Accepted publickey for core from 10.200.16.10 port 36648 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:29:20.275077 sshd-session[2277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:29:20.278826 systemd-logind[1858]: New session 5 of user core.
Dec 16 12:29:20.286680 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 16 12:29:20.601615 sshd[2280]: Connection closed by 10.200.16.10 port 36648
Dec 16 12:29:20.602209 sshd-session[2277]: pam_unix(sshd:session): session closed for user core
Dec 16 12:29:20.605252 systemd[1]: sshd@2-10.200.20.4:22-10.200.16.10:36648.service: Deactivated successfully.
Dec 16 12:29:20.606789 systemd[1]: session-5.scope: Deactivated successfully.
Dec 16 12:29:20.607427 systemd-logind[1858]: Session 5 logged out. Waiting for processes to exit.
Dec 16 12:29:20.609767 systemd-logind[1858]: Removed session 5.
Dec 16 12:29:20.687257 systemd[1]: Started sshd@3-10.200.20.4:22-10.200.16.10:44266.service - OpenSSH per-connection server daemon (10.200.16.10:44266).
Dec 16 12:29:21.139209 sshd[2286]: Accepted publickey for core from 10.200.16.10 port 44266 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:29:21.140242 sshd-session[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:29:21.143967 systemd-logind[1858]: New session 6 of user core.
Dec 16 12:29:21.151692 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 16 12:29:21.275735 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 16 12:29:21.277014 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:29:21.471298 sshd[2289]: Connection closed by 10.200.16.10 port 44266
Dec 16 12:29:21.471132 sshd-session[2286]: pam_unix(sshd:session): session closed for user core
Dec 16 12:29:21.474061 systemd[1]: sshd@3-10.200.20.4:22-10.200.16.10:44266.service: Deactivated successfully.
Dec 16 12:29:21.476014 systemd[1]: session-6.scope: Deactivated successfully.
Dec 16 12:29:21.477240 systemd-logind[1858]: Session 6 logged out. Waiting for processes to exit.
Dec 16 12:29:21.479177 systemd-logind[1858]: Removed session 6.
Dec 16 12:29:21.560003 systemd[1]: Started sshd@4-10.200.20.4:22-10.200.16.10:44270.service - OpenSSH per-connection server daemon (10.200.16.10:44270).
Dec 16 12:29:21.642394 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:29:21.645026 (kubelet)[2306]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:29:21.671072 kubelet[2306]: E1216 12:29:21.671016 2306 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:29:21.673174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:29:21.673283 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:29:21.673746 systemd[1]: kubelet.service: Consumed 104ms CPU time, 106.8M memory peak.
Dec 16 12:29:22.042393 sshd[2298]: Accepted publickey for core from 10.200.16.10 port 44270 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:29:22.043464 sshd-session[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:29:22.047329 systemd-logind[1858]: New session 7 of user core.
Dec 16 12:29:22.065975 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 16 12:29:22.424598 sudo[2313]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 16 12:29:22.424821 sudo[2313]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:29:22.451880 sudo[2313]: pam_unix(sudo:session): session closed for user root
Dec 16 12:29:22.528773 sshd[2312]: Connection closed by 10.200.16.10 port 44270
Dec 16 12:29:22.529673 sshd-session[2298]: pam_unix(sshd:session): session closed for user core
Dec 16 12:29:22.533093 systemd-logind[1858]: Session 7 logged out. Waiting for processes to exit.
Dec 16 12:29:22.533554 systemd[1]: sshd@4-10.200.20.4:22-10.200.16.10:44270.service: Deactivated successfully.
Dec 16 12:29:22.535261 systemd[1]: session-7.scope: Deactivated successfully.
Dec 16 12:29:22.537180 systemd-logind[1858]: Removed session 7.
Dec 16 12:29:22.623770 systemd[1]: Started sshd@5-10.200.20.4:22-10.200.16.10:44286.service - OpenSSH per-connection server daemon (10.200.16.10:44286).
Dec 16 12:29:23.111257 sshd[2319]: Accepted publickey for core from 10.200.16.10 port 44286 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:29:23.114018 sshd-session[2319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:29:23.117693 systemd-logind[1858]: New session 8 of user core.
Dec 16 12:29:23.127863 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 16 12:29:23.385285 sudo[2324]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 16 12:29:23.385677 sudo[2324]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:29:23.392549 sudo[2324]: pam_unix(sudo:session): session closed for user root
Dec 16 12:29:23.396162 sudo[2323]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 16 12:29:23.396367 sudo[2323]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:29:23.404342 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 16 12:29:23.436405 augenrules[2346]: No rules
Dec 16 12:29:23.437591 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:29:23.437766 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:29:23.438701 sudo[2323]: pam_unix(sudo:session): session closed for user root
Dec 16 12:29:23.515377 sshd[2322]: Connection closed by 10.200.16.10 port 44286
Dec 16 12:29:23.515900 sshd-session[2319]: pam_unix(sshd:session): session closed for user core
Dec 16 12:29:23.519836 systemd[1]: sshd@5-10.200.20.4:22-10.200.16.10:44286.service: Deactivated successfully.
Dec 16 12:29:23.521330 systemd[1]: session-8.scope: Deactivated successfully.
Dec 16 12:29:23.521945 systemd-logind[1858]: Session 8 logged out. Waiting for processes to exit.
Dec 16 12:29:23.523258 systemd-logind[1858]: Removed session 8.
Dec 16 12:29:23.595965 systemd[1]: Started sshd@6-10.200.20.4:22-10.200.16.10:44290.service - OpenSSH per-connection server daemon (10.200.16.10:44290).
Dec 16 12:29:24.045836 sshd[2355]: Accepted publickey for core from 10.200.16.10 port 44290 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:29:24.046892 sshd-session[2355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:29:24.050347 systemd-logind[1858]: New session 9 of user core.
Dec 16 12:29:24.060687 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 16 12:29:24.299944 sudo[2359]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 16 12:29:24.300151 sudo[2359]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 16 12:29:25.461435 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 16 12:29:25.475841 (dockerd)[2377]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 16 12:29:26.250589 dockerd[2377]: time="2025-12-16T12:29:26.250174580Z" level=info msg="Starting up"
Dec 16 12:29:26.252129 dockerd[2377]: time="2025-12-16T12:29:26.252100988Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Dec 16 12:29:26.259884 dockerd[2377]: time="2025-12-16T12:29:26.259843443Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Dec 16 12:29:26.382170 dockerd[2377]: time="2025-12-16T12:29:26.382092510Z" level=info msg="Loading containers: start."
Dec 16 12:29:26.408577 kernel: Initializing XFRM netlink socket
Dec 16 12:29:26.724700 systemd-networkd[1485]: docker0: Link UP
Dec 16 12:29:26.743319 dockerd[2377]: time="2025-12-16T12:29:26.743230874Z" level=info msg="Loading containers: done."
Dec 16 12:29:26.753869 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck131106089-merged.mount: Deactivated successfully.
Dec 16 12:29:26.764345 dockerd[2377]: time="2025-12-16T12:29:26.764071976Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 16 12:29:26.764345 dockerd[2377]: time="2025-12-16T12:29:26.764138046Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Dec 16 12:29:26.764345 dockerd[2377]: time="2025-12-16T12:29:26.764205988Z" level=info msg="Initializing buildkit"
Dec 16 12:29:26.813458 dockerd[2377]: time="2025-12-16T12:29:26.813419837Z" level=info msg="Completed buildkit initialization"
Dec 16 12:29:26.818467 dockerd[2377]: time="2025-12-16T12:29:26.818435006Z" level=info msg="Daemon has completed initialization"
Dec 16 12:29:26.818649 dockerd[2377]: time="2025-12-16T12:29:26.818610611Z" level=info msg="API listen on /run/docker.sock"
Dec 16 12:29:26.818749 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 16 12:29:27.420319 containerd[1921]: time="2025-12-16T12:29:27.420272634Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Dec 16 12:29:28.192133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2819538498.mount: Deactivated successfully.
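The dockerd messages above use a logfmt-style layout (`time="..." level=info msg="..."` plus optional bare `key=value` fields such as `storage-driver=overlay2`). A small Python sketch of a parser for such lines, useful when post-processing this journal; the regex is an assumption that covers the shapes seen here, not Docker's own format definition:

```python
import re

# One key=value field: value is either a double-quoted string
# (with backslash escapes) or a bare run of non-space characters.
FIELD = re.compile(r'(\w[\w-]*)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_logfmt(line):
    """Split a dockerd-style log line into a dict of fields."""
    fields = {}
    for key, value in FIELD.findall(line):
        if value.startswith('"') and value.endswith('"'):
            value = value[1:-1].replace('\\"', '"')
        fields[key] = value
    return fields
```

For example, feeding it the "Daemon has completed initialization" line above yields `level == "info"` and the message text as `msg`.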
Dec 16 12:29:29.238599 containerd[1921]: time="2025-12-16T12:29:29.238098331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:29.242093 containerd[1921]: time="2025-12-16T12:29:29.242067207Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571040"
Dec 16 12:29:29.247713 containerd[1921]: time="2025-12-16T12:29:29.247688019Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:29.252722 containerd[1921]: time="2025-12-16T12:29:29.252107306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:29.252722 containerd[1921]: time="2025-12-16T12:29:29.252586014Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 1.832267034s"
Dec 16 12:29:29.252722 containerd[1921]: time="2025-12-16T12:29:29.252612878Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\""
Dec 16 12:29:29.253164 containerd[1921]: time="2025-12-16T12:29:29.253147059Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Dec 16 12:29:30.371709 containerd[1921]: time="2025-12-16T12:29:30.371643224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:30.375236 containerd[1921]: time="2025-12-16T12:29:30.375052488Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135477"
Dec 16 12:29:30.378218 containerd[1921]: time="2025-12-16T12:29:30.378191289Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:30.382606 containerd[1921]: time="2025-12-16T12:29:30.382580000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:30.383125 containerd[1921]: time="2025-12-16T12:29:30.383100924Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.12982303s"
Dec 16 12:29:30.383220 containerd[1921]: time="2025-12-16T12:29:30.383206687Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\""
Dec 16 12:29:30.383700 containerd[1921]: time="2025-12-16T12:29:30.383673786Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Dec 16 12:29:31.264955 containerd[1921]: time="2025-12-16T12:29:31.264898614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:31.268469 containerd[1921]: time="2025-12-16T12:29:31.268436305Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191716"
Dec 16 12:29:31.271973 containerd[1921]: time="2025-12-16T12:29:31.271930395Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:31.277023 containerd[1921]: time="2025-12-16T12:29:31.276568495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:31.277187 containerd[1921]: time="2025-12-16T12:29:31.277165645Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 893.461939ms"
Dec 16 12:29:31.277265 containerd[1921]: time="2025-12-16T12:29:31.277250871Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\""
Dec 16 12:29:31.277762 containerd[1921]: time="2025-12-16T12:29:31.277734939Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Dec 16 12:29:31.775588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 16 12:29:31.776782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:29:31.876656 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:29:31.882871 (kubelet)[2657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:29:31.914016 kubelet[2657]: E1216 12:29:31.913969 2657 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:29:31.916540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:29:31.916900 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:29:31.918640 systemd[1]: kubelet.service: Consumed 106ms CPU time, 106.9M memory peak.
Dec 16 12:29:32.704713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount536205652.mount: Deactivated successfully.
Dec 16 12:29:33.530639 containerd[1921]: time="2025-12-16T12:29:33.530587311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:33.533830 containerd[1921]: time="2025-12-16T12:29:33.533802763Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805253"
Dec 16 12:29:33.536850 containerd[1921]: time="2025-12-16T12:29:33.536803937Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:33.542593 containerd[1921]: time="2025-12-16T12:29:33.542184863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:33.542702 containerd[1921]: time="2025-12-16T12:29:33.542679682Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 2.264793596s"
Dec 16 12:29:33.542763 containerd[1921]: time="2025-12-16T12:29:33.542750036Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\""
Dec 16 12:29:33.543287 containerd[1921]: time="2025-12-16T12:29:33.543258680Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Dec 16 12:29:34.301938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3030624184.mount: Deactivated successfully.
Dec 16 12:29:35.275128 containerd[1921]: time="2025-12-16T12:29:35.275067789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:35.278349 containerd[1921]: time="2025-12-16T12:29:35.278321871Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406"
Dec 16 12:29:35.281921 containerd[1921]: time="2025-12-16T12:29:35.281879163Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:35.286342 containerd[1921]: time="2025-12-16T12:29:35.286295374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:35.287033 containerd[1921]: time="2025-12-16T12:29:35.286920603Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.743629466s"
Dec 16 12:29:35.287033 containerd[1921]: time="2025-12-16T12:29:35.286949148Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Dec 16 12:29:35.287548 containerd[1921]: time="2025-12-16T12:29:35.287521352Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Dec 16 12:29:35.831591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount761695467.mount: Deactivated successfully.
Dec 16 12:29:35.852762 containerd[1921]: time="2025-12-16T12:29:35.852711277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:35.856019 containerd[1921]: time="2025-12-16T12:29:35.855859555Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Dec 16 12:29:35.859309 containerd[1921]: time="2025-12-16T12:29:35.859284611Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:35.864266 containerd[1921]: time="2025-12-16T12:29:35.863639291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:35.864950 containerd[1921]: time="2025-12-16T12:29:35.864915967Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 577.36311ms"
Dec 16 12:29:35.865050 containerd[1921]: time="2025-12-16T12:29:35.865034811Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Dec 16 12:29:35.867054 containerd[1921]: time="2025-12-16T12:29:35.867014425Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Dec 16 12:29:35.919580 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Dec 16 12:29:36.501178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651812846.mount: Deactivated successfully.
Dec 16 12:29:38.982074 containerd[1921]: time="2025-12-16T12:29:38.982013881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:38.986493 containerd[1921]: time="2025-12-16T12:29:38.986046646Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062987"
Dec 16 12:29:38.989833 containerd[1921]: time="2025-12-16T12:29:38.989797441Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:39.712702 containerd[1921]: time="2025-12-16T12:29:39.712621140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:29:39.714621 containerd[1921]: time="2025-12-16T12:29:39.713130910Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.845931679s"
Dec 16 12:29:39.714621 containerd[1921]: time="2025-12-16T12:29:39.713389879Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\""
Dec 16 12:29:40.250522 update_engine[1861]: I20251216 12:29:40.250437 1861 update_attempter.cc:509] Updating boot flags...
Dec 16 12:29:42.026130 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 16 12:29:42.030727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:29:42.227762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:29:42.232788 (kubelet)[2928]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:29:42.259546 kubelet[2928]: E1216 12:29:42.259505 2928 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 16 12:29:42.261436 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 16 12:29:42.261571 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 16 12:29:42.262061 systemd[1]: kubelet.service: Consumed 102ms CPU time, 106.1M memory peak.
Dec 16 12:29:42.577809 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:29:42.578086 systemd[1]: kubelet.service: Consumed 102ms CPU time, 106.1M memory peak.
Dec 16 12:29:42.580759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:29:42.600877 systemd[1]: Reload requested from client PID 2942 ('systemctl') (unit session-9.scope)...
Dec 16 12:29:42.600888 systemd[1]: Reloading...
Dec 16 12:29:42.695763 zram_generator::config[2989]: No configuration found.
Dec 16 12:29:42.851411 systemd[1]: Reloading finished in 250 ms.
Dec 16 12:29:42.907381 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 16 12:29:42.907439 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 16 12:29:42.908598 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:29:42.908639 systemd[1]: kubelet.service: Consumed 75ms CPU time, 95.2M memory peak.
Dec 16 12:29:42.911715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:29:43.135509 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:29:43.140941 (kubelet)[3056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 16 12:29:43.213439 kubelet[3056]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 16 12:29:43.213918 kubelet[3056]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 16 12:29:43.215548 kubelet[3056]: I1216 12:29:43.214479 3056 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 16 12:29:43.391930 kubelet[3056]: I1216 12:29:43.391684 3056 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Dec 16 12:29:43.393529 kubelet[3056]: I1216 12:29:43.393509 3056 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 16 12:29:43.394871 kubelet[3056]: I1216 12:29:43.394852 3056 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Dec 16 12:29:43.394957 kubelet[3056]: I1216 12:29:43.394947 3056 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 16 12:29:43.395224 kubelet[3056]: I1216 12:29:43.395205 3056 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 16 12:29:43.669133 kubelet[3056]: E1216 12:29:43.668898 3056 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 16 12:29:43.670336 kubelet[3056]: I1216 12:29:43.670302 3056 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 16 12:29:43.673396 kubelet[3056]: I1216 12:29:43.673380 3056 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 16 12:29:43.676053 kubelet[3056]: I1216 12:29:43.676036 3056 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Dec 16 12:29:43.676306 kubelet[3056]: I1216 12:29:43.676286 3056 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 16 12:29:43.676498 kubelet[3056]: I1216 12:29:43.676377 3056 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-719f16aeb7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 16 12:29:43.676659 kubelet[3056]: I1216 12:29:43.676646 3056 topology_manager.go:138] "Creating topology manager with none policy"
Dec 16 12:29:43.676709 kubelet[3056]: I1216 12:29:43.676702 3056 container_manager_linux.go:306] "Creating device plugin manager"
Dec 16 12:29:43.676848 kubelet[3056]: I1216 12:29:43.676837 3056 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Dec 16 12:29:43.682908 kubelet[3056]: I1216 12:29:43.682887 3056 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:29:43.684023 kubelet[3056]: I1216 12:29:43.684008 3056 kubelet.go:475] "Attempting to sync node with API server"
Dec 16 12:29:43.684509 kubelet[3056]: I1216 12:29:43.684496 3056 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 16 12:29:43.684703 kubelet[3056]: E1216 12:29:43.684456 3056 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-719f16aeb7&limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 16 12:29:43.685203 kubelet[3056]: I1216 12:29:43.685189 3056 kubelet.go:387] "Adding apiserver pod source"
Dec 16 12:29:43.686642 kubelet[3056]: I1216 12:29:43.686618 3056 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 16 12:29:43.687435 kubelet[3056]: E1216 12:29:43.687406 3056 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 16 12:29:43.687640 kubelet[3056]: I1216 12:29:43.687620 3056 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 16 12:29:43.687993 kubelet[3056]: I1216 12:29:43.687977 3056 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 16 12:29:43.688018 kubelet[3056]: I1216 12:29:43.688000 3056 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Dec 16 12:29:43.688035 kubelet[3056]: W1216 12:29:43.688031 3056 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 16 12:29:43.690269 kubelet[3056]: I1216 12:29:43.690246 3056 server.go:1262] "Started kubelet"
Dec 16 12:29:43.690701 kubelet[3056]: I1216 12:29:43.690677 3056 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 16 12:29:43.691940 kubelet[3056]: I1216 12:29:43.691921 3056 server.go:310] "Adding debug handlers to kubelet server"
Dec 16 12:29:43.693371 kubelet[3056]: I1216 12:29:43.693320 3056 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 16 12:29:43.693432 kubelet[3056]: I1216 12:29:43.693388 3056 server_v1.go:49] "podresources" method="list" useActivePods=true
Dec 16 12:29:43.694288 kubelet[3056]: I1216 12:29:43.693644 3056 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 16 12:29:43.694588 kubelet[3056]: E1216 12:29:43.693752 3056 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.4:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.4:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-a-719f16aeb7.1881b1f26dea3a7c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-a-719f16aeb7,UID:ci-4459.2.2-a-719f16aeb7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-a-719f16aeb7,},FirstTimestamp:2025-12-16 12:29:43.690214012 +0000 UTC m=+0.546749022,LastTimestamp:2025-12-16 12:29:43.690214012 +0000 UTC m=+0.546749022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-a-719f16aeb7,}"
Dec 16 12:29:43.697581 kubelet[3056]: I1216 12:29:43.696502 3056 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 16 12:29:43.697581 kubelet[3056]: I1216 12:29:43.696804 3056 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 16 12:29:43.698350 kubelet[3056]: E1216 12:29:43.698332 3056 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-719f16aeb7\" not found"
Dec 16 12:29:43.698436 kubelet[3056]: I1216 12:29:43.698428 3056 volume_manager.go:313] "Starting Kubelet Volume Manager"
Dec 16 12:29:43.698655 kubelet[3056]: I1216 12:29:43.698640 3056 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 16 12:29:43.698769 kubelet[3056]: I1216 12:29:43.698760 3056 reconciler.go:29] "Reconciler: start to sync state"
Dec 16 12:29:43.699084 kubelet[3056]: E1216 12:29:43.699061 3056 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 16 12:29:43.700285 kubelet[3056]: I1216 12:29:43.700268 3056 factory.go:223] Registration of the containerd container factory successfully
Dec 16 12:29:43.700369 kubelet[3056]: I1216 12:29:43.700361 3056 factory.go:223] Registration of the systemd container factory successfully
Dec 16 12:29:43.700468 kubelet[3056]: I1216 12:29:43.700453 3056 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 16 12:29:43.723087 kubelet[3056]: E1216 12:29:43.723045 3056 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-719f16aeb7?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="200ms"
Dec 16 12:29:43.724547 kubelet[3056]: E1216 12:29:43.724519 3056 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 16 12:29:43.731052 kubelet[3056]: I1216 12:29:43.731032 3056 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 16 12:29:43.731052 kubelet[3056]: I1216 12:29:43.731044 3056 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 16 12:29:43.731052 kubelet[3056]: I1216 12:29:43.731059 3056 state_mem.go:36] "Initialized new in-memory state store"
Dec 16 12:29:43.735146 kubelet[3056]: I1216 12:29:43.734600 3056 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Dec 16 12:29:43.736188 kubelet[3056]: I1216 12:29:43.736172 3056 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Dec 16 12:29:43.736282 kubelet[3056]: I1216 12:29:43.736273 3056 status_manager.go:244] "Starting to sync pod status with apiserver"
Dec 16 12:29:43.736352 kubelet[3056]: I1216 12:29:43.736345 3056 kubelet.go:2427] "Starting kubelet main sync loop"
Dec 16 12:29:43.736450 kubelet[3056]: E1216 12:29:43.736437 3056 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 16 12:29:43.737674 kubelet[3056]: I1216 12:29:43.737659 3056 policy_none.go:49] "None policy: Start"
Dec 16 12:29:43.738290 kubelet[3056]: I1216 12:29:43.738273 3056 memory_manager.go:187] "Starting memorymanager" policy="None"
Dec 16 12:29:43.738444 kubelet[3056]: I1216 12:29:43.738433 3056 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Dec 16 12:29:43.739155 kubelet[3056]: E1216 12:29:43.739130 3056 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 16 12:29:43.743826 kubelet[3056]: I1216 12:29:43.743809 3056 policy_none.go:47] "Start"
Dec 16 12:29:43.747650 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 16 12:29:43.761406 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 16 12:29:43.764378 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 16 12:29:43.775264 kubelet[3056]: E1216 12:29:43.775242 3056 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:29:43.775543 kubelet[3056]: I1216 12:29:43.775523 3056 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:29:43.775662 kubelet[3056]: I1216 12:29:43.775631 3056 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:29:43.776465 kubelet[3056]: I1216 12:29:43.776418 3056 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:29:43.777187 kubelet[3056]: E1216 12:29:43.777121 3056 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:29:43.777187 kubelet[3056]: E1216 12:29:43.777154 3056 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-a-719f16aeb7\" not found" Dec 16 12:29:43.850929 systemd[1]: Created slice kubepods-burstable-podbdc1ef30c75fa5ccabd9c83da47f9489.slice - libcontainer container kubepods-burstable-podbdc1ef30c75fa5ccabd9c83da47f9489.slice. Dec 16 12:29:43.856163 kubelet[3056]: E1216 12:29:43.856123 3056 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-719f16aeb7\" not found" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:43.859694 systemd[1]: Created slice kubepods-burstable-pod84ec079b9e3efee9656ee1c015350147.slice - libcontainer container kubepods-burstable-pod84ec079b9e3efee9656ee1c015350147.slice. 
Dec 16 12:29:43.861635 kubelet[3056]: E1216 12:29:43.861612 3056 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-719f16aeb7\" not found" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:43.871779 systemd[1]: Created slice kubepods-burstable-podabaffc9b3ef0745c4ba081fc7cb78110.slice - libcontainer container kubepods-burstable-podabaffc9b3ef0745c4ba081fc7cb78110.slice. Dec 16 12:29:43.873338 kubelet[3056]: E1216 12:29:43.873317 3056 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-719f16aeb7\" not found" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:43.877523 kubelet[3056]: I1216 12:29:43.877499 3056 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:43.877860 kubelet[3056]: E1216 12:29:43.877837 3056 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:43.900122 kubelet[3056]: I1216 12:29:43.900053 3056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84ec079b9e3efee9656ee1c015350147-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-a-719f16aeb7\" (UID: \"84ec079b9e3efee9656ee1c015350147\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:43.900122 kubelet[3056]: I1216 12:29:43.900082 3056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bdc1ef30c75fa5ccabd9c83da47f9489-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-719f16aeb7\" (UID: \"bdc1ef30c75fa5ccabd9c83da47f9489\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-719f16aeb7" Dec 16 
12:29:43.900122 kubelet[3056]: I1216 12:29:43.900094 3056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bdc1ef30c75fa5ccabd9c83da47f9489-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-719f16aeb7\" (UID: \"bdc1ef30c75fa5ccabd9c83da47f9489\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:43.900122 kubelet[3056]: I1216 12:29:43.900103 3056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/abaffc9b3ef0745c4ba081fc7cb78110-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-719f16aeb7\" (UID: \"abaffc9b3ef0745c4ba081fc7cb78110\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:43.900431 kubelet[3056]: I1216 12:29:43.900334 3056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bdc1ef30c75fa5ccabd9c83da47f9489-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-719f16aeb7\" (UID: \"bdc1ef30c75fa5ccabd9c83da47f9489\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:43.900431 kubelet[3056]: I1216 12:29:43.900379 3056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84ec079b9e3efee9656ee1c015350147-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-719f16aeb7\" (UID: \"84ec079b9e3efee9656ee1c015350147\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:43.900431 kubelet[3056]: I1216 12:29:43.900390 3056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84ec079b9e3efee9656ee1c015350147-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4459.2.2-a-719f16aeb7\" (UID: \"84ec079b9e3efee9656ee1c015350147\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:43.900431 kubelet[3056]: I1216 12:29:43.900400 3056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84ec079b9e3efee9656ee1c015350147-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-719f16aeb7\" (UID: \"84ec079b9e3efee9656ee1c015350147\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:43.900431 kubelet[3056]: I1216 12:29:43.900414 3056 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84ec079b9e3efee9656ee1c015350147-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-719f16aeb7\" (UID: \"84ec079b9e3efee9656ee1c015350147\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:43.924551 kubelet[3056]: E1216 12:29:43.923546 3056 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-719f16aeb7?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="400ms" Dec 16 12:29:44.079789 kubelet[3056]: I1216 12:29:44.079541 3056 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:44.080064 kubelet[3056]: E1216 12:29:44.080044 3056 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:44.163676 containerd[1921]: time="2025-12-16T12:29:44.163121116Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-719f16aeb7,Uid:bdc1ef30c75fa5ccabd9c83da47f9489,Namespace:kube-system,Attempt:0,}" Dec 16 12:29:44.167963 containerd[1921]: time="2025-12-16T12:29:44.167759949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-719f16aeb7,Uid:84ec079b9e3efee9656ee1c015350147,Namespace:kube-system,Attempt:0,}" Dec 16 12:29:44.179710 containerd[1921]: time="2025-12-16T12:29:44.179628079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-719f16aeb7,Uid:abaffc9b3ef0745c4ba081fc7cb78110,Namespace:kube-system,Attempt:0,}" Dec 16 12:29:44.324679 kubelet[3056]: E1216 12:29:44.324627 3056 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-719f16aeb7?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="800ms" Dec 16 12:29:44.481910 kubelet[3056]: I1216 12:29:44.481775 3056 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:44.482644 kubelet[3056]: E1216 12:29:44.482615 3056 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:44.506354 kubelet[3056]: E1216 12:29:44.506321 3056 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:29:44.622156 kubelet[3056]: E1216 12:29:44.622114 3056 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.200.20.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:29:44.868525 kubelet[3056]: E1216 12:29:44.868460 3056 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-719f16aeb7&limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:29:45.058163 kubelet[3056]: E1216 12:29:45.058110 3056 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.4:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:29:45.125436 kubelet[3056]: E1216 12:29:45.125321 3056 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-719f16aeb7?timeout=10s\": dial tcp 10.200.20.4:6443: connect: connection refused" interval="1.6s" Dec 16 12:29:45.284320 kubelet[3056]: I1216 12:29:45.284282 3056 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:45.284670 kubelet[3056]: E1216 12:29:45.284634 3056 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.4:6443/api/v1/nodes\": dial tcp 10.200.20.4:6443: connect: connection refused" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:45.420018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3525317128.mount: Deactivated successfully. 
Dec 16 12:29:45.443738 containerd[1921]: time="2025-12-16T12:29:45.443686595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:29:45.451883 containerd[1921]: time="2025-12-16T12:29:45.451840034Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Dec 16 12:29:45.462614 containerd[1921]: time="2025-12-16T12:29:45.462107259Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:29:45.465251 containerd[1921]: time="2025-12-16T12:29:45.465212028Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:29:45.469568 containerd[1921]: time="2025-12-16T12:29:45.469535827Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:29:45.477667 containerd[1921]: time="2025-12-16T12:29:45.477635417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:29:45.478113 containerd[1921]: time="2025-12-16T12:29:45.478085367Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.310302338s" Dec 16 12:29:45.480576 containerd[1921]: 
time="2025-12-16T12:29:45.480364894Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:29:45.487366 containerd[1921]: time="2025-12-16T12:29:45.487339120Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 16 12:29:45.494790 containerd[1921]: time="2025-12-16T12:29:45.494762496Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.31132833s" Dec 16 12:29:45.515217 containerd[1921]: time="2025-12-16T12:29:45.514963624Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.338908793s" Dec 16 12:29:45.531216 containerd[1921]: time="2025-12-16T12:29:45.531157074Z" level=info msg="connecting to shim c3f967f0c8ca5ee9c337e7e5ef04e038470d7ffb12f168233b4c700ba2c7537f" address="unix:///run/containerd/s/6268c691809e9e00dd100faa53de9b341e75d9cd5a8a620930408680909bf806" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:45.552711 systemd[1]: Started cri-containerd-c3f967f0c8ca5ee9c337e7e5ef04e038470d7ffb12f168233b4c700ba2c7537f.scope - libcontainer container c3f967f0c8ca5ee9c337e7e5ef04e038470d7ffb12f168233b4c700ba2c7537f. 
Dec 16 12:29:45.569283 containerd[1921]: time="2025-12-16T12:29:45.569102724Z" level=info msg="connecting to shim e53ec0e3fb52f68b1d26cf701cee7493e15f46bc530f0a9d0f016046cf9c5bfa" address="unix:///run/containerd/s/5b2dc14fb594a52a1fd9df3802d87b76c109fb355f1cf95832ae467327a6b595" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:45.594008 containerd[1921]: time="2025-12-16T12:29:45.593820865Z" level=info msg="connecting to shim 09461c245a80620ed96fd2a8bbcfb24a518c1ccabc9acb76a01445460bceddea" address="unix:///run/containerd/s/f6de98bb3fd0656339402b8b5fec087c72c7bc256c0fabab37bbe19400c29478" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:45.594799 systemd[1]: Started cri-containerd-e53ec0e3fb52f68b1d26cf701cee7493e15f46bc530f0a9d0f016046cf9c5bfa.scope - libcontainer container e53ec0e3fb52f68b1d26cf701cee7493e15f46bc530f0a9d0f016046cf9c5bfa. Dec 16 12:29:45.613735 containerd[1921]: time="2025-12-16T12:29:45.613671253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-719f16aeb7,Uid:bdc1ef30c75fa5ccabd9c83da47f9489,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3f967f0c8ca5ee9c337e7e5ef04e038470d7ffb12f168233b4c700ba2c7537f\"" Dec 16 12:29:45.625751 systemd[1]: Started cri-containerd-09461c245a80620ed96fd2a8bbcfb24a518c1ccabc9acb76a01445460bceddea.scope - libcontainer container 09461c245a80620ed96fd2a8bbcfb24a518c1ccabc9acb76a01445460bceddea. 
Dec 16 12:29:45.626924 containerd[1921]: time="2025-12-16T12:29:45.626880338Z" level=info msg="CreateContainer within sandbox \"c3f967f0c8ca5ee9c337e7e5ef04e038470d7ffb12f168233b4c700ba2c7537f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 12:29:45.649461 containerd[1921]: time="2025-12-16T12:29:45.649317936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-719f16aeb7,Uid:abaffc9b3ef0745c4ba081fc7cb78110,Namespace:kube-system,Attempt:0,} returns sandbox id \"e53ec0e3fb52f68b1d26cf701cee7493e15f46bc530f0a9d0f016046cf9c5bfa\"" Dec 16 12:29:45.660851 containerd[1921]: time="2025-12-16T12:29:45.660206300Z" level=info msg="CreateContainer within sandbox \"e53ec0e3fb52f68b1d26cf701cee7493e15f46bc530f0a9d0f016046cf9c5bfa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:29:45.660851 containerd[1921]: time="2025-12-16T12:29:45.660487397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-719f16aeb7,Uid:84ec079b9e3efee9656ee1c015350147,Namespace:kube-system,Attempt:0,} returns sandbox id \"09461c245a80620ed96fd2a8bbcfb24a518c1ccabc9acb76a01445460bceddea\"" Dec 16 12:29:45.664919 containerd[1921]: time="2025-12-16T12:29:45.664895423Z" level=info msg="Container 19511bbac943b54e8a58a911f4354b3e7010e4edf2e9538f0e3932b9da75e786: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:45.669172 containerd[1921]: time="2025-12-16T12:29:45.669149092Z" level=info msg="CreateContainer within sandbox \"09461c245a80620ed96fd2a8bbcfb24a518c1ccabc9acb76a01445460bceddea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:29:45.684541 containerd[1921]: time="2025-12-16T12:29:45.684447930Z" level=info msg="CreateContainer within sandbox \"c3f967f0c8ca5ee9c337e7e5ef04e038470d7ffb12f168233b4c700ba2c7537f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"19511bbac943b54e8a58a911f4354b3e7010e4edf2e9538f0e3932b9da75e786\"" Dec 16 12:29:45.685170 containerd[1921]: time="2025-12-16T12:29:45.685144000Z" level=info msg="StartContainer for \"19511bbac943b54e8a58a911f4354b3e7010e4edf2e9538f0e3932b9da75e786\"" Dec 16 12:29:45.686135 containerd[1921]: time="2025-12-16T12:29:45.686106718Z" level=info msg="connecting to shim 19511bbac943b54e8a58a911f4354b3e7010e4edf2e9538f0e3932b9da75e786" address="unix:///run/containerd/s/6268c691809e9e00dd100faa53de9b341e75d9cd5a8a620930408680909bf806" protocol=ttrpc version=3 Dec 16 12:29:45.698969 containerd[1921]: time="2025-12-16T12:29:45.698934239Z" level=info msg="Container 56bdc5b80ffee8da67a70717539644745706722407862020831769f2a8e28ff8: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:45.700711 systemd[1]: Started cri-containerd-19511bbac943b54e8a58a911f4354b3e7010e4edf2e9538f0e3932b9da75e786.scope - libcontainer container 19511bbac943b54e8a58a911f4354b3e7010e4edf2e9538f0e3932b9da75e786. Dec 16 12:29:45.717193 containerd[1921]: time="2025-12-16T12:29:45.717161753Z" level=info msg="Container dd5d0c1b1a09696409b19e26455bcbfdf9e26c09aed7cf85514bda66770f2a0b: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:45.725411 containerd[1921]: time="2025-12-16T12:29:45.725376641Z" level=info msg="CreateContainer within sandbox \"e53ec0e3fb52f68b1d26cf701cee7493e15f46bc530f0a9d0f016046cf9c5bfa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"56bdc5b80ffee8da67a70717539644745706722407862020831769f2a8e28ff8\"" Dec 16 12:29:45.726749 containerd[1921]: time="2025-12-16T12:29:45.726023206Z" level=info msg="StartContainer for \"56bdc5b80ffee8da67a70717539644745706722407862020831769f2a8e28ff8\"" Dec 16 12:29:45.726929 containerd[1921]: time="2025-12-16T12:29:45.726882176Z" level=info msg="connecting to shim 56bdc5b80ffee8da67a70717539644745706722407862020831769f2a8e28ff8" 
address="unix:///run/containerd/s/5b2dc14fb594a52a1fd9df3802d87b76c109fb355f1cf95832ae467327a6b595" protocol=ttrpc version=3 Dec 16 12:29:45.734544 containerd[1921]: time="2025-12-16T12:29:45.734509063Z" level=info msg="CreateContainer within sandbox \"09461c245a80620ed96fd2a8bbcfb24a518c1ccabc9acb76a01445460bceddea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dd5d0c1b1a09696409b19e26455bcbfdf9e26c09aed7cf85514bda66770f2a0b\"" Dec 16 12:29:45.735397 containerd[1921]: time="2025-12-16T12:29:45.735358305Z" level=info msg="StartContainer for \"dd5d0c1b1a09696409b19e26455bcbfdf9e26c09aed7cf85514bda66770f2a0b\"" Dec 16 12:29:45.736235 containerd[1921]: time="2025-12-16T12:29:45.736211100Z" level=info msg="connecting to shim dd5d0c1b1a09696409b19e26455bcbfdf9e26c09aed7cf85514bda66770f2a0b" address="unix:///run/containerd/s/f6de98bb3fd0656339402b8b5fec087c72c7bc256c0fabab37bbe19400c29478" protocol=ttrpc version=3 Dec 16 12:29:45.748884 containerd[1921]: time="2025-12-16T12:29:45.748788733Z" level=info msg="StartContainer for \"19511bbac943b54e8a58a911f4354b3e7010e4edf2e9538f0e3932b9da75e786\" returns successfully" Dec 16 12:29:45.750705 systemd[1]: Started cri-containerd-56bdc5b80ffee8da67a70717539644745706722407862020831769f2a8e28ff8.scope - libcontainer container 56bdc5b80ffee8da67a70717539644745706722407862020831769f2a8e28ff8. Dec 16 12:29:45.766335 kubelet[3056]: E1216 12:29:45.766048 3056 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-719f16aeb7\" not found" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:45.769708 systemd[1]: Started cri-containerd-dd5d0c1b1a09696409b19e26455bcbfdf9e26c09aed7cf85514bda66770f2a0b.scope - libcontainer container dd5d0c1b1a09696409b19e26455bcbfdf9e26c09aed7cf85514bda66770f2a0b. 
Dec 16 12:29:45.783465 kubelet[3056]: E1216 12:29:45.783438 3056 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.4:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:29:45.812569 containerd[1921]: time="2025-12-16T12:29:45.812525270Z" level=info msg="StartContainer for \"56bdc5b80ffee8da67a70717539644745706722407862020831769f2a8e28ff8\" returns successfully" Dec 16 12:29:45.841571 containerd[1921]: time="2025-12-16T12:29:45.841524400Z" level=info msg="StartContainer for \"dd5d0c1b1a09696409b19e26455bcbfdf9e26c09aed7cf85514bda66770f2a0b\" returns successfully" Dec 16 12:29:46.769219 kubelet[3056]: E1216 12:29:46.768998 3056 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-719f16aeb7\" not found" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:46.773184 kubelet[3056]: E1216 12:29:46.772997 3056 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-719f16aeb7\" not found" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:46.774163 kubelet[3056]: E1216 12:29:46.774148 3056 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-719f16aeb7\" not found" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:46.888026 kubelet[3056]: I1216 12:29:46.887983 3056 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:46.952299 kubelet[3056]: E1216 12:29:46.952257 3056 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.2-a-719f16aeb7\" not found" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:47.124499 kubelet[3056]: 
I1216 12:29:47.124463 3056 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:47.124499 kubelet[3056]: E1216 12:29:47.124499 3056 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4459.2.2-a-719f16aeb7\": node \"ci-4459.2.2-a-719f16aeb7\" not found" Dec 16 12:29:47.140036 kubelet[3056]: E1216 12:29:47.139991 3056 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-719f16aeb7\" not found" Dec 16 12:29:47.240951 kubelet[3056]: E1216 12:29:47.240908 3056 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-719f16aeb7\" not found" Dec 16 12:29:47.341260 kubelet[3056]: E1216 12:29:47.341225 3056 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-719f16aeb7\" not found" Dec 16 12:29:47.441912 kubelet[3056]: E1216 12:29:47.441783 3056 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-719f16aeb7\" not found" Dec 16 12:29:47.542403 kubelet[3056]: E1216 12:29:47.542364 3056 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-719f16aeb7\" not found" Dec 16 12:29:47.642947 kubelet[3056]: E1216 12:29:47.642900 3056 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-719f16aeb7\" not found" Dec 16 12:29:47.743228 kubelet[3056]: E1216 12:29:47.743100 3056 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-719f16aeb7\" not found" Dec 16 12:29:47.775048 kubelet[3056]: E1216 12:29:47.774875 3056 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-719f16aeb7\" not found" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:47.775048 kubelet[3056]: E1216 12:29:47.774976 3056 kubelet.go:3215] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-719f16aeb7\" not found" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:47.803890 kubelet[3056]: I1216 12:29:47.803634 3056 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:47.807370 kubelet[3056]: E1216 12:29:47.807348 3056 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-719f16aeb7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:47.807481 kubelet[3056]: I1216 12:29:47.807469 3056 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:47.808928 kubelet[3056]: E1216 12:29:47.808897 3056 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-a-719f16aeb7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:47.808928 kubelet[3056]: I1216 12:29:47.808920 3056 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:47.811733 kubelet[3056]: E1216 12:29:47.811705 3056 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-719f16aeb7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:48.689566 kubelet[3056]: I1216 12:29:48.689520 3056 apiserver.go:52] "Watching apiserver" Dec 16 12:29:48.699550 kubelet[3056]: I1216 12:29:48.699521 3056 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 12:29:48.774533 kubelet[3056]: I1216 12:29:48.774239 3056 kubelet.go:3219] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:48.780799 kubelet[3056]: I1216 12:29:48.780733 3056 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:29:49.151844 systemd[1]: Reload requested from client PID 3338 ('systemctl') (unit session-9.scope)... Dec 16 12:29:49.151864 systemd[1]: Reloading... Dec 16 12:29:49.247589 zram_generator::config[3385]: No configuration found. Dec 16 12:29:49.437241 systemd[1]: Reloading finished in 285 ms. Dec 16 12:29:49.465742 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:49.477993 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 12:29:49.478317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:29:49.478446 systemd[1]: kubelet.service: Consumed 478ms CPU time, 119.9M memory peak. Dec 16 12:29:49.480572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:49.577698 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:29:49.581134 (kubelet)[3449]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:29:49.608359 kubelet[3449]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:29:49.608359 kubelet[3449]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 12:29:49.608359 kubelet[3449]: I1216 12:29:49.608310 3449 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:29:49.616279 kubelet[3449]: I1216 12:29:49.615555 3449 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 16 12:29:49.616279 kubelet[3449]: I1216 12:29:49.615590 3449 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:29:49.616279 kubelet[3449]: I1216 12:29:49.615614 3449 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 16 12:29:49.616279 kubelet[3449]: I1216 12:29:49.615619 3449 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:29:49.616279 kubelet[3449]: I1216 12:29:49.615856 3449 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:29:49.618613 kubelet[3449]: I1216 12:29:49.618550 3449 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 12:29:49.620121 kubelet[3449]: I1216 12:29:49.620069 3449 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:29:49.623070 kubelet[3449]: I1216 12:29:49.623036 3449 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:29:49.625591 kubelet[3449]: I1216 12:29:49.625309 3449 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 16 12:29:49.625591 kubelet[3449]: I1216 12:29:49.625498 3449 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:29:49.625817 kubelet[3449]: I1216 12:29:49.625520 3449 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-719f16aeb7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:29:49.625922 kubelet[3449]: I1216 12:29:49.625910 3449 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 
12:29:49.625967 kubelet[3449]: I1216 12:29:49.625959 3449 container_manager_linux.go:306] "Creating device plugin manager" Dec 16 12:29:49.626030 kubelet[3449]: I1216 12:29:49.626020 3449 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 16 12:29:49.626688 kubelet[3449]: I1216 12:29:49.626668 3449 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:29:49.626974 kubelet[3449]: I1216 12:29:49.626954 3449 kubelet.go:475] "Attempting to sync node with API server" Dec 16 12:29:49.627060 kubelet[3449]: I1216 12:29:49.627050 3449 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:29:49.627115 kubelet[3449]: I1216 12:29:49.627108 3449 kubelet.go:387] "Adding apiserver pod source" Dec 16 12:29:49.627169 kubelet[3449]: I1216 12:29:49.627161 3449 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:29:49.628537 kubelet[3449]: I1216 12:29:49.628522 3449 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:29:49.629095 kubelet[3449]: I1216 12:29:49.629081 3449 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:29:49.629237 kubelet[3449]: I1216 12:29:49.629136 3449 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 16 12:29:49.631096 kubelet[3449]: I1216 12:29:49.631042 3449 server.go:1262] "Started kubelet" Dec 16 12:29:49.634654 kubelet[3449]: I1216 12:29:49.634640 3449 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:29:49.640208 kubelet[3449]: I1216 12:29:49.640169 3449 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:29:49.640912 kubelet[3449]: I1216 12:29:49.640892 3449 server.go:310] "Adding debug handlers to 
kubelet server" Dec 16 12:29:49.647595 kubelet[3449]: I1216 12:29:49.646718 3449 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:29:49.647917 kubelet[3449]: I1216 12:29:49.647893 3449 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 16 12:29:49.648839 kubelet[3449]: E1216 12:29:49.648062 3449 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-719f16aeb7\" not found" Dec 16 12:29:49.649228 kubelet[3449]: I1216 12:29:49.648912 3449 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 16 12:29:49.649319 kubelet[3449]: I1216 12:29:49.649297 3449 reconciler.go:29] "Reconciler: start to sync state" Dec 16 12:29:49.650379 kubelet[3449]: I1216 12:29:49.650340 3449 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:29:49.650446 kubelet[3449]: I1216 12:29:49.650392 3449 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 16 12:29:49.650564 kubelet[3449]: I1216 12:29:49.650531 3449 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:29:49.654327 kubelet[3449]: I1216 12:29:49.654273 3449 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:29:49.654520 kubelet[3449]: I1216 12:29:49.654348 3449 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:29:49.659870 kubelet[3449]: E1216 12:29:49.659833 3449 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:29:49.660503 kubelet[3449]: I1216 12:29:49.660437 3449 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:29:49.670575 kubelet[3449]: I1216 12:29:49.670419 3449 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 16 12:29:49.673498 kubelet[3449]: I1216 12:29:49.673444 3449 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 16 12:29:49.673498 kubelet[3449]: I1216 12:29:49.673462 3449 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 16 12:29:49.673498 kubelet[3449]: I1216 12:29:49.673481 3449 kubelet.go:2427] "Starting kubelet main sync loop" Dec 16 12:29:49.673722 kubelet[3449]: E1216 12:29:49.673516 3449 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:29:49.706900 kubelet[3449]: I1216 12:29:49.706125 3449 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:29:49.706900 kubelet[3449]: I1216 12:29:49.706141 3449 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:29:49.706900 kubelet[3449]: I1216 12:29:49.706160 3449 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:29:49.706900 kubelet[3449]: I1216 12:29:49.706257 3449 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:29:49.706900 kubelet[3449]: I1216 12:29:49.706264 3449 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:29:49.706900 kubelet[3449]: I1216 12:29:49.706276 3449 policy_none.go:49] "None policy: Start" Dec 16 12:29:49.706900 kubelet[3449]: I1216 12:29:49.706282 3449 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 16 12:29:49.706900 kubelet[3449]: I1216 12:29:49.706289 3449 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state 
checkpoint" Dec 16 12:29:49.706900 kubelet[3449]: I1216 12:29:49.706365 3449 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 16 12:29:49.706900 kubelet[3449]: I1216 12:29:49.706372 3449 policy_none.go:47] "Start" Dec 16 12:29:49.711211 kubelet[3449]: E1216 12:29:49.711181 3449 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:29:49.711610 kubelet[3449]: I1216 12:29:49.711348 3449 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:29:49.711610 kubelet[3449]: I1216 12:29:49.711362 3449 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:29:49.711610 kubelet[3449]: I1216 12:29:49.711600 3449 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:29:49.712767 kubelet[3449]: E1216 12:29:49.712691 3449 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:29:49.774213 kubelet[3449]: I1216 12:29:49.774176 3449 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:49.774641 kubelet[3449]: I1216 12:29:49.774619 3449 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:49.774905 kubelet[3449]: I1216 12:29:49.774889 3449 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:49.784062 kubelet[3449]: I1216 12:29:49.783996 3449 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:29:49.784062 kubelet[3449]: I1216 12:29:49.784044 3449 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:29:49.784152 kubelet[3449]: I1216 12:29:49.784127 3449 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:29:49.784321 kubelet[3449]: E1216 12:29:49.784291 3449 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-719f16aeb7\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:49.814525 kubelet[3449]: I1216 12:29:49.814496 3449 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:49.829891 kubelet[3449]: I1216 12:29:49.829852 3449 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:49.829969 kubelet[3449]: I1216 12:29:49.829925 3449 kubelet_node_status.go:78] "Successfully registered node" 
node="ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:49.850367 kubelet[3449]: I1216 12:29:49.850333 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bdc1ef30c75fa5ccabd9c83da47f9489-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-719f16aeb7\" (UID: \"bdc1ef30c75fa5ccabd9c83da47f9489\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:49.850367 kubelet[3449]: I1216 12:29:49.850367 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bdc1ef30c75fa5ccabd9c83da47f9489-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-719f16aeb7\" (UID: \"bdc1ef30c75fa5ccabd9c83da47f9489\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:49.850490 kubelet[3449]: I1216 12:29:49.850381 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84ec079b9e3efee9656ee1c015350147-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-719f16aeb7\" (UID: \"84ec079b9e3efee9656ee1c015350147\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:49.850490 kubelet[3449]: I1216 12:29:49.850392 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84ec079b9e3efee9656ee1c015350147-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-719f16aeb7\" (UID: \"84ec079b9e3efee9656ee1c015350147\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:49.850490 kubelet[3449]: I1216 12:29:49.850401 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/84ec079b9e3efee9656ee1c015350147-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-719f16aeb7\" (UID: \"84ec079b9e3efee9656ee1c015350147\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:49.850490 kubelet[3449]: I1216 12:29:49.850410 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/abaffc9b3ef0745c4ba081fc7cb78110-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-719f16aeb7\" (UID: \"abaffc9b3ef0745c4ba081fc7cb78110\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:49.850490 kubelet[3449]: I1216 12:29:49.850421 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bdc1ef30c75fa5ccabd9c83da47f9489-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-719f16aeb7\" (UID: \"bdc1ef30c75fa5ccabd9c83da47f9489\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:49.850588 kubelet[3449]: I1216 12:29:49.850430 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84ec079b9e3efee9656ee1c015350147-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-719f16aeb7\" (UID: \"84ec079b9e3efee9656ee1c015350147\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:49.850588 kubelet[3449]: I1216 12:29:49.850441 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84ec079b9e3efee9656ee1c015350147-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-a-719f16aeb7\" (UID: \"84ec079b9e3efee9656ee1c015350147\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:50.627819 kubelet[3449]: 
I1216 12:29:50.627544 3449 apiserver.go:52] "Watching apiserver" Dec 16 12:29:50.650257 kubelet[3449]: I1216 12:29:50.649978 3449 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 16 12:29:50.693312 kubelet[3449]: I1216 12:29:50.692726 3449 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:50.693862 kubelet[3449]: I1216 12:29:50.693822 3449 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:50.706310 kubelet[3449]: I1216 12:29:50.706278 3449 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:29:50.706574 kubelet[3449]: E1216 12:29:50.706548 3449 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-719f16aeb7\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:50.708671 kubelet[3449]: I1216 12:29:50.708648 3449 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:29:50.708747 kubelet[3449]: E1216 12:29:50.708686 3449 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-719f16aeb7\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-a-719f16aeb7" Dec 16 12:29:50.713487 kubelet[3449]: I1216 12:29:50.713367 3449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-719f16aeb7" podStartSLOduration=1.7133584210000001 podStartE2EDuration="1.713358421s" podCreationTimestamp="2025-12-16 12:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-16 12:29:50.713316796 +0000 UTC m=+1.129698228" watchObservedRunningTime="2025-12-16 12:29:50.713358421 +0000 UTC m=+1.129739853" Dec 16 12:29:50.771245 kubelet[3449]: I1216 12:29:50.771180 3449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-a-719f16aeb7" podStartSLOduration=1.771165205 podStartE2EDuration="1.771165205s" podCreationTimestamp="2025-12-16 12:29:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:29:50.731678268 +0000 UTC m=+1.148059700" watchObservedRunningTime="2025-12-16 12:29:50.771165205 +0000 UTC m=+1.187546645" Dec 16 12:29:54.156206 kubelet[3449]: I1216 12:29:54.156167 3449 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:29:54.156776 containerd[1921]: time="2025-12-16T12:29:54.156634079Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 16 12:29:54.157122 kubelet[3449]: I1216 12:29:54.156856 3449 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:29:55.076827 kubelet[3449]: I1216 12:29:55.076754 3449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-a-719f16aeb7" podStartSLOduration=7.07673469 podStartE2EDuration="7.07673469s" podCreationTimestamp="2025-12-16 12:29:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:29:50.772325228 +0000 UTC m=+1.188706660" watchObservedRunningTime="2025-12-16 12:29:55.07673469 +0000 UTC m=+5.493116130" Dec 16 12:29:55.092355 systemd[1]: Created slice kubepods-besteffort-pod5e8031c8_e624_49fa_baa8_145e72b33f07.slice - libcontainer container kubepods-besteffort-pod5e8031c8_e624_49fa_baa8_145e72b33f07.slice. Dec 16 12:29:55.178952 kubelet[3449]: I1216 12:29:55.178855 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfsnj\" (UniqueName: \"kubernetes.io/projected/5e8031c8-e624-49fa-baa8-145e72b33f07-kube-api-access-xfsnj\") pod \"kube-proxy-26x26\" (UID: \"5e8031c8-e624-49fa-baa8-145e72b33f07\") " pod="kube-system/kube-proxy-26x26" Dec 16 12:29:55.178952 kubelet[3449]: I1216 12:29:55.178891 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5e8031c8-e624-49fa-baa8-145e72b33f07-kube-proxy\") pod \"kube-proxy-26x26\" (UID: \"5e8031c8-e624-49fa-baa8-145e72b33f07\") " pod="kube-system/kube-proxy-26x26" Dec 16 12:29:55.178952 kubelet[3449]: I1216 12:29:55.178904 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e8031c8-e624-49fa-baa8-145e72b33f07-xtables-lock\") pod \"kube-proxy-26x26\" 
(UID: \"5e8031c8-e624-49fa-baa8-145e72b33f07\") " pod="kube-system/kube-proxy-26x26" Dec 16 12:29:55.178952 kubelet[3449]: I1216 12:29:55.178913 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e8031c8-e624-49fa-baa8-145e72b33f07-lib-modules\") pod \"kube-proxy-26x26\" (UID: \"5e8031c8-e624-49fa-baa8-145e72b33f07\") " pod="kube-system/kube-proxy-26x26" Dec 16 12:29:55.315359 systemd[1]: Created slice kubepods-besteffort-pod550502ab_9b46_4064_b313_b97ff086e43f.slice - libcontainer container kubepods-besteffort-pod550502ab_9b46_4064_b313_b97ff086e43f.slice. Dec 16 12:29:55.380371 kubelet[3449]: I1216 12:29:55.380232 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/550502ab-9b46-4064-b313-b97ff086e43f-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-2v6sn\" (UID: \"550502ab-9b46-4064-b313-b97ff086e43f\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-2v6sn" Dec 16 12:29:55.380371 kubelet[3449]: I1216 12:29:55.380275 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw7hr\" (UniqueName: \"kubernetes.io/projected/550502ab-9b46-4064-b313-b97ff086e43f-kube-api-access-gw7hr\") pod \"tigera-operator-65cdcdfd6d-2v6sn\" (UID: \"550502ab-9b46-4064-b313-b97ff086e43f\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-2v6sn" Dec 16 12:29:55.405170 containerd[1921]: time="2025-12-16T12:29:55.405121368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-26x26,Uid:5e8031c8-e624-49fa-baa8-145e72b33f07,Namespace:kube-system,Attempt:0,}" Dec 16 12:29:55.447971 containerd[1921]: time="2025-12-16T12:29:55.447918681Z" level=info msg="connecting to shim df3b6e5a69c00423003f67ea5dfb25d30bce032ff49917cffac041e2b39a800d" 
address="unix:///run/containerd/s/691960df3cbc2fe15be7f4d70a2935bc69fb9b63bd01eea0917d66783b34bf80" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:55.469699 systemd[1]: Started cri-containerd-df3b6e5a69c00423003f67ea5dfb25d30bce032ff49917cffac041e2b39a800d.scope - libcontainer container df3b6e5a69c00423003f67ea5dfb25d30bce032ff49917cffac041e2b39a800d. Dec 16 12:29:55.497659 containerd[1921]: time="2025-12-16T12:29:55.497619418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-26x26,Uid:5e8031c8-e624-49fa-baa8-145e72b33f07,Namespace:kube-system,Attempt:0,} returns sandbox id \"df3b6e5a69c00423003f67ea5dfb25d30bce032ff49917cffac041e2b39a800d\"" Dec 16 12:29:55.505469 containerd[1921]: time="2025-12-16T12:29:55.505426249Z" level=info msg="CreateContainer within sandbox \"df3b6e5a69c00423003f67ea5dfb25d30bce032ff49917cffac041e2b39a800d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:29:55.529660 containerd[1921]: time="2025-12-16T12:29:55.529623868Z" level=info msg="Container 48350a84190226caab93e3fb32a57ff8ae05c3222ca448ce79408249a47f5966: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:55.549183 containerd[1921]: time="2025-12-16T12:29:55.549141355Z" level=info msg="CreateContainer within sandbox \"df3b6e5a69c00423003f67ea5dfb25d30bce032ff49917cffac041e2b39a800d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"48350a84190226caab93e3fb32a57ff8ae05c3222ca448ce79408249a47f5966\"" Dec 16 12:29:55.549823 containerd[1921]: time="2025-12-16T12:29:55.549780908Z" level=info msg="StartContainer for \"48350a84190226caab93e3fb32a57ff8ae05c3222ca448ce79408249a47f5966\"" Dec 16 12:29:55.550854 containerd[1921]: time="2025-12-16T12:29:55.550808839Z" level=info msg="connecting to shim 48350a84190226caab93e3fb32a57ff8ae05c3222ca448ce79408249a47f5966" address="unix:///run/containerd/s/691960df3cbc2fe15be7f4d70a2935bc69fb9b63bd01eea0917d66783b34bf80" protocol=ttrpc version=3 Dec 16 
12:29:55.569690 systemd[1]: Started cri-containerd-48350a84190226caab93e3fb32a57ff8ae05c3222ca448ce79408249a47f5966.scope - libcontainer container 48350a84190226caab93e3fb32a57ff8ae05c3222ca448ce79408249a47f5966. Dec 16 12:29:55.624652 containerd[1921]: time="2025-12-16T12:29:55.624525974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-2v6sn,Uid:550502ab-9b46-4064-b313-b97ff086e43f,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:29:55.636028 containerd[1921]: time="2025-12-16T12:29:55.635670686Z" level=info msg="StartContainer for \"48350a84190226caab93e3fb32a57ff8ae05c3222ca448ce79408249a47f5966\" returns successfully" Dec 16 12:29:55.660296 containerd[1921]: time="2025-12-16T12:29:55.660259516Z" level=info msg="connecting to shim e0688477e86418e46501b9e8c3b656fb2d4b9dc6a64497a6ca8bbeeb09289e4d" address="unix:///run/containerd/s/7af6a7dcd697626d7054e357943ade4b7f70f6878a6297e63ebcc8de15ad05ad" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:55.679849 systemd[1]: Started cri-containerd-e0688477e86418e46501b9e8c3b656fb2d4b9dc6a64497a6ca8bbeeb09289e4d.scope - libcontainer container e0688477e86418e46501b9e8c3b656fb2d4b9dc6a64497a6ca8bbeeb09289e4d. 
Dec 16 12:29:55.724715 containerd[1921]: time="2025-12-16T12:29:55.724652355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-2v6sn,Uid:550502ab-9b46-4064-b313-b97ff086e43f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e0688477e86418e46501b9e8c3b656fb2d4b9dc6a64497a6ca8bbeeb09289e4d\"" Dec 16 12:29:55.726293 containerd[1921]: time="2025-12-16T12:29:55.726254301Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:29:56.185174 kubelet[3449]: I1216 12:29:56.185107 3449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-26x26" podStartSLOduration=1.185091718 podStartE2EDuration="1.185091718s" podCreationTimestamp="2025-12-16 12:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:29:55.716813346 +0000 UTC m=+6.133194778" watchObservedRunningTime="2025-12-16 12:29:56.185091718 +0000 UTC m=+6.601473150" Dec 16 12:29:56.298017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4096606664.mount: Deactivated successfully. Dec 16 12:29:57.166116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3939309617.mount: Deactivated successfully. 
Dec 16 12:29:57.854131 containerd[1921]: time="2025-12-16T12:29:57.854075653Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:57.857569 containerd[1921]: time="2025-12-16T12:29:57.857530720Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Dec 16 12:29:57.860986 containerd[1921]: time="2025-12-16T12:29:57.860948035Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:57.865707 containerd[1921]: time="2025-12-16T12:29:57.865659528Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:57.866000 containerd[1921]: time="2025-12-16T12:29:57.865859998Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.139577671s" Dec 16 12:29:57.866000 containerd[1921]: time="2025-12-16T12:29:57.865883478Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 16 12:29:57.875011 containerd[1921]: time="2025-12-16T12:29:57.874986640Z" level=info msg="CreateContainer within sandbox \"e0688477e86418e46501b9e8c3b656fb2d4b9dc6a64497a6ca8bbeeb09289e4d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 16 12:29:57.894018 containerd[1921]: time="2025-12-16T12:29:57.893977097Z" level=info msg="Container 
37f2bd80d8ea25ba9cc8322666e16f48b57aef1f472a7bb9c5d771371c435327: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:57.909404 containerd[1921]: time="2025-12-16T12:29:57.909370154Z" level=info msg="CreateContainer within sandbox \"e0688477e86418e46501b9e8c3b656fb2d4b9dc6a64497a6ca8bbeeb09289e4d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"37f2bd80d8ea25ba9cc8322666e16f48b57aef1f472a7bb9c5d771371c435327\"" Dec 16 12:29:57.910368 containerd[1921]: time="2025-12-16T12:29:57.910341428Z" level=info msg="StartContainer for \"37f2bd80d8ea25ba9cc8322666e16f48b57aef1f472a7bb9c5d771371c435327\"" Dec 16 12:29:57.911925 containerd[1921]: time="2025-12-16T12:29:57.911900063Z" level=info msg="connecting to shim 37f2bd80d8ea25ba9cc8322666e16f48b57aef1f472a7bb9c5d771371c435327" address="unix:///run/containerd/s/7af6a7dcd697626d7054e357943ade4b7f70f6878a6297e63ebcc8de15ad05ad" protocol=ttrpc version=3 Dec 16 12:29:57.932708 systemd[1]: Started cri-containerd-37f2bd80d8ea25ba9cc8322666e16f48b57aef1f472a7bb9c5d771371c435327.scope - libcontainer container 37f2bd80d8ea25ba9cc8322666e16f48b57aef1f472a7bb9c5d771371c435327. 
Dec 16 12:29:57.958161 containerd[1921]: time="2025-12-16T12:29:57.958129578Z" level=info msg="StartContainer for \"37f2bd80d8ea25ba9cc8322666e16f48b57aef1f472a7bb9c5d771371c435327\" returns successfully" Dec 16 12:29:58.718914 kubelet[3449]: I1216 12:29:58.718715 3449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-2v6sn" podStartSLOduration=1.577925467 podStartE2EDuration="3.718697331s" podCreationTimestamp="2025-12-16 12:29:55 +0000 UTC" firstStartedPulling="2025-12-16 12:29:55.725875347 +0000 UTC m=+6.142256779" lastFinishedPulling="2025-12-16 12:29:57.866647211 +0000 UTC m=+8.283028643" observedRunningTime="2025-12-16 12:29:58.718348273 +0000 UTC m=+9.134729713" watchObservedRunningTime="2025-12-16 12:29:58.718697331 +0000 UTC m=+9.135078771" Dec 16 12:30:03.204223 sudo[2359]: pam_unix(sudo:session): session closed for user root Dec 16 12:30:03.282665 sshd[2358]: Connection closed by 10.200.16.10 port 44290 Dec 16 12:30:03.283766 sshd-session[2355]: pam_unix(sshd:session): session closed for user core Dec 16 12:30:03.286708 systemd[1]: sshd@6-10.200.20.4:22-10.200.16.10:44290.service: Deactivated successfully. Dec 16 12:30:03.289512 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 12:30:03.291812 systemd[1]: session-9.scope: Consumed 3.843s CPU time, 221.1M memory peak. Dec 16 12:30:03.296031 systemd-logind[1858]: Session 9 logged out. Waiting for processes to exit. Dec 16 12:30:03.299803 systemd-logind[1858]: Removed session 9. Dec 16 12:30:11.216364 systemd[1]: Created slice kubepods-besteffort-pod89f07f6a_45ae_420c_9b29_d494fbb61acf.slice - libcontainer container kubepods-besteffort-pod89f07f6a_45ae_420c_9b29_d494fbb61acf.slice. 
Dec 16 12:30:11.270104 kubelet[3449]: I1216 12:30:11.270005 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/89f07f6a-45ae-420c-9b29-d494fbb61acf-typha-certs\") pod \"calico-typha-f77b5fdf9-7bl57\" (UID: \"89f07f6a-45ae-420c-9b29-d494fbb61acf\") " pod="calico-system/calico-typha-f77b5fdf9-7bl57" Dec 16 12:30:11.270104 kubelet[3449]: I1216 12:30:11.270049 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xvpm\" (UniqueName: \"kubernetes.io/projected/89f07f6a-45ae-420c-9b29-d494fbb61acf-kube-api-access-7xvpm\") pod \"calico-typha-f77b5fdf9-7bl57\" (UID: \"89f07f6a-45ae-420c-9b29-d494fbb61acf\") " pod="calico-system/calico-typha-f77b5fdf9-7bl57" Dec 16 12:30:11.270104 kubelet[3449]: I1216 12:30:11.270089 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89f07f6a-45ae-420c-9b29-d494fbb61acf-tigera-ca-bundle\") pod \"calico-typha-f77b5fdf9-7bl57\" (UID: \"89f07f6a-45ae-420c-9b29-d494fbb61acf\") " pod="calico-system/calico-typha-f77b5fdf9-7bl57" Dec 16 12:30:11.389866 systemd[1]: Created slice kubepods-besteffort-podbbd55a1c_545c_4470_808a_79cec992eefd.slice - libcontainer container kubepods-besteffort-podbbd55a1c_545c_4470_808a_79cec992eefd.slice. 
Dec 16 12:30:11.471814 kubelet[3449]: I1216 12:30:11.471676 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbd55a1c-545c-4470-808a-79cec992eefd-tigera-ca-bundle\") pod \"calico-node-t46c2\" (UID: \"bbd55a1c-545c-4470-808a-79cec992eefd\") " pod="calico-system/calico-node-t46c2" Dec 16 12:30:11.471814 kubelet[3449]: I1216 12:30:11.471720 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bbd55a1c-545c-4470-808a-79cec992eefd-var-lib-calico\") pod \"calico-node-t46c2\" (UID: \"bbd55a1c-545c-4470-808a-79cec992eefd\") " pod="calico-system/calico-node-t46c2" Dec 16 12:30:11.471814 kubelet[3449]: I1216 12:30:11.471732 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bbd55a1c-545c-4470-808a-79cec992eefd-cni-net-dir\") pod \"calico-node-t46c2\" (UID: \"bbd55a1c-545c-4470-808a-79cec992eefd\") " pod="calico-system/calico-node-t46c2" Dec 16 12:30:11.471814 kubelet[3449]: I1216 12:30:11.471740 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbd55a1c-545c-4470-808a-79cec992eefd-lib-modules\") pod \"calico-node-t46c2\" (UID: \"bbd55a1c-545c-4470-808a-79cec992eefd\") " pod="calico-system/calico-node-t46c2" Dec 16 12:30:11.471814 kubelet[3449]: I1216 12:30:11.471750 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bbd55a1c-545c-4470-808a-79cec992eefd-node-certs\") pod \"calico-node-t46c2\" (UID: \"bbd55a1c-545c-4470-808a-79cec992eefd\") " pod="calico-system/calico-node-t46c2" Dec 16 12:30:11.472007 kubelet[3449]: I1216 12:30:11.471760 3449 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bbd55a1c-545c-4470-808a-79cec992eefd-flexvol-driver-host\") pod \"calico-node-t46c2\" (UID: \"bbd55a1c-545c-4470-808a-79cec992eefd\") " pod="calico-system/calico-node-t46c2" Dec 16 12:30:11.472007 kubelet[3449]: I1216 12:30:11.471773 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bbd55a1c-545c-4470-808a-79cec992eefd-cni-bin-dir\") pod \"calico-node-t46c2\" (UID: \"bbd55a1c-545c-4470-808a-79cec992eefd\") " pod="calico-system/calico-node-t46c2" Dec 16 12:30:11.472007 kubelet[3449]: I1216 12:30:11.471783 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bbd55a1c-545c-4470-808a-79cec992eefd-policysync\") pod \"calico-node-t46c2\" (UID: \"bbd55a1c-545c-4470-808a-79cec992eefd\") " pod="calico-system/calico-node-t46c2" Dec 16 12:30:11.472007 kubelet[3449]: I1216 12:30:11.471797 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bbd55a1c-545c-4470-808a-79cec992eefd-var-run-calico\") pod \"calico-node-t46c2\" (UID: \"bbd55a1c-545c-4470-808a-79cec992eefd\") " pod="calico-system/calico-node-t46c2" Dec 16 12:30:11.472007 kubelet[3449]: I1216 12:30:11.471808 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqngv\" (UniqueName: \"kubernetes.io/projected/bbd55a1c-545c-4470-808a-79cec992eefd-kube-api-access-fqngv\") pod \"calico-node-t46c2\" (UID: \"bbd55a1c-545c-4470-808a-79cec992eefd\") " pod="calico-system/calico-node-t46c2" Dec 16 12:30:11.472084 kubelet[3449]: I1216 12:30:11.471819 3449 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bbd55a1c-545c-4470-808a-79cec992eefd-cni-log-dir\") pod \"calico-node-t46c2\" (UID: \"bbd55a1c-545c-4470-808a-79cec992eefd\") " pod="calico-system/calico-node-t46c2" Dec 16 12:30:11.472084 kubelet[3449]: I1216 12:30:11.471828 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbd55a1c-545c-4470-808a-79cec992eefd-xtables-lock\") pod \"calico-node-t46c2\" (UID: \"bbd55a1c-545c-4470-808a-79cec992eefd\") " pod="calico-system/calico-node-t46c2" Dec 16 12:30:11.525267 containerd[1921]: time="2025-12-16T12:30:11.525223750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f77b5fdf9-7bl57,Uid:89f07f6a-45ae-420c-9b29-d494fbb61acf,Namespace:calico-system,Attempt:0,}" Dec 16 12:30:11.569349 containerd[1921]: time="2025-12-16T12:30:11.569283892Z" level=info msg="connecting to shim 7a5a10a7dc024c640daa6536b75c76aa57e58310f778d16f2fc5afab6ed7bb6b" address="unix:///run/containerd/s/563f98db2dd7503f5fdaa993861bd1fcbec29a356e077499e553663d1b078025" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:11.590040 kubelet[3449]: E1216 12:30:11.590001 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.590040 kubelet[3449]: W1216 12:30:11.590026 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.590040 kubelet[3449]: E1216 12:30:11.590051 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.593808 kubelet[3449]: E1216 12:30:11.593719 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.593808 kubelet[3449]: W1216 12:30:11.593743 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.593808 kubelet[3449]: E1216 12:30:11.593761 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.609880 systemd[1]: Started cri-containerd-7a5a10a7dc024c640daa6536b75c76aa57e58310f778d16f2fc5afab6ed7bb6b.scope - libcontainer container 7a5a10a7dc024c640daa6536b75c76aa57e58310f778d16f2fc5afab6ed7bb6b. Dec 16 12:30:11.650110 containerd[1921]: time="2025-12-16T12:30:11.650067765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f77b5fdf9-7bl57,Uid:89f07f6a-45ae-420c-9b29-d494fbb61acf,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a5a10a7dc024c640daa6536b75c76aa57e58310f778d16f2fc5afab6ed7bb6b\"" Dec 16 12:30:11.653442 containerd[1921]: time="2025-12-16T12:30:11.653207917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 16 12:30:11.690952 kubelet[3449]: E1216 12:30:11.690909 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fbpsm" podUID="8dc2a4ce-1cc0-4206-a57c-f0513b577cd6" Dec 16 12:30:11.705647 containerd[1921]: time="2025-12-16T12:30:11.704223039Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-node-t46c2,Uid:bbd55a1c-545c-4470-808a-79cec992eefd,Namespace:calico-system,Attempt:0,}" Dec 16 12:30:11.754385 containerd[1921]: time="2025-12-16T12:30:11.753938779Z" level=info msg="connecting to shim 9a0bcda73e9f1dd506cdf63aceb0aad7bc12603d98bf73697ebcdd71b86eee52" address="unix:///run/containerd/s/37ebb3a46bb01d6b1323750f564c0f8ecfc4455e54b2cb098d6fe26c77673607" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:11.768392 kubelet[3449]: E1216 12:30:11.768059 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.768392 kubelet[3449]: W1216 12:30:11.768084 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.768392 kubelet[3449]: E1216 12:30:11.768105 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.768392 kubelet[3449]: E1216 12:30:11.768218 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.768392 kubelet[3449]: W1216 12:30:11.768224 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.768392 kubelet[3449]: E1216 12:30:11.768271 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.768392 kubelet[3449]: E1216 12:30:11.768369 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.768392 kubelet[3449]: W1216 12:30:11.768380 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.768392 kubelet[3449]: E1216 12:30:11.768386 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.768635 kubelet[3449]: E1216 12:30:11.768483 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.768635 kubelet[3449]: W1216 12:30:11.768487 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.768635 kubelet[3449]: E1216 12:30:11.768493 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.768635 kubelet[3449]: E1216 12:30:11.768607 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.768635 kubelet[3449]: W1216 12:30:11.768612 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.768635 kubelet[3449]: E1216 12:30:11.768618 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.768974 kubelet[3449]: E1216 12:30:11.768731 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.768974 kubelet[3449]: W1216 12:30:11.768740 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.768974 kubelet[3449]: E1216 12:30:11.768746 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.768974 kubelet[3449]: E1216 12:30:11.768964 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.768974 kubelet[3449]: W1216 12:30:11.768970 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.769257 kubelet[3449]: E1216 12:30:11.768981 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.769370 kubelet[3449]: E1216 12:30:11.769342 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.769370 kubelet[3449]: W1216 12:30:11.769357 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.769370 kubelet[3449]: E1216 12:30:11.769368 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.769866 kubelet[3449]: E1216 12:30:11.769694 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.769866 kubelet[3449]: W1216 12:30:11.769707 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.769866 kubelet[3449]: E1216 12:30:11.769717 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.770230 kubelet[3449]: E1216 12:30:11.770208 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.770776 kubelet[3449]: W1216 12:30:11.770756 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.770776 kubelet[3449]: E1216 12:30:11.770775 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.771300 kubelet[3449]: E1216 12:30:11.771286 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.771379 kubelet[3449]: W1216 12:30:11.771368 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.771471 kubelet[3449]: E1216 12:30:11.771440 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.771714 kubelet[3449]: E1216 12:30:11.771702 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.771875 kubelet[3449]: W1216 12:30:11.771762 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.771875 kubelet[3449]: E1216 12:30:11.771776 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.772083 kubelet[3449]: E1216 12:30:11.772073 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.772161 kubelet[3449]: W1216 12:30:11.772151 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.772220 kubelet[3449]: E1216 12:30:11.772209 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.772609 kubelet[3449]: E1216 12:30:11.772500 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.772609 kubelet[3449]: W1216 12:30:11.772512 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.772609 kubelet[3449]: E1216 12:30:11.772521 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.772809 systemd[1]: Started cri-containerd-9a0bcda73e9f1dd506cdf63aceb0aad7bc12603d98bf73697ebcdd71b86eee52.scope - libcontainer container 9a0bcda73e9f1dd506cdf63aceb0aad7bc12603d98bf73697ebcdd71b86eee52. 
Dec 16 12:30:11.773425 kubelet[3449]: E1216 12:30:11.773267 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.773425 kubelet[3449]: W1216 12:30:11.773281 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.773611 kubelet[3449]: E1216 12:30:11.773293 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.774402 kubelet[3449]: E1216 12:30:11.774103 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.774402 kubelet[3449]: W1216 12:30:11.774116 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.774402 kubelet[3449]: E1216 12:30:11.774127 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.774592 kubelet[3449]: E1216 12:30:11.774551 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.774679 kubelet[3449]: W1216 12:30:11.774667 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.774849 kubelet[3449]: E1216 12:30:11.774727 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.775430 kubelet[3449]: E1216 12:30:11.775173 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.775430 kubelet[3449]: W1216 12:30:11.775341 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.775430 kubelet[3449]: E1216 12:30:11.775357 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.775796 kubelet[3449]: E1216 12:30:11.775781 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.775974 kubelet[3449]: W1216 12:30:11.775872 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.775974 kubelet[3449]: E1216 12:30:11.775888 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.776203 kubelet[3449]: E1216 12:30:11.776189 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.776440 kubelet[3449]: W1216 12:30:11.776347 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.776440 kubelet[3449]: E1216 12:30:11.776366 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.776817 kubelet[3449]: E1216 12:30:11.776794 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.776817 kubelet[3449]: W1216 12:30:11.776812 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.776900 kubelet[3449]: E1216 12:30:11.776824 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.777076 kubelet[3449]: I1216 12:30:11.777014 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8dc2a4ce-1cc0-4206-a57c-f0513b577cd6-registration-dir\") pod \"csi-node-driver-fbpsm\" (UID: \"8dc2a4ce-1cc0-4206-a57c-f0513b577cd6\") " pod="calico-system/csi-node-driver-fbpsm" Dec 16 12:30:11.777327 kubelet[3449]: E1216 12:30:11.777232 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.777327 kubelet[3449]: W1216 12:30:11.777243 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.777327 kubelet[3449]: E1216 12:30:11.777254 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.777588 kubelet[3449]: E1216 12:30:11.777524 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.777588 kubelet[3449]: W1216 12:30:11.777536 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.777588 kubelet[3449]: E1216 12:30:11.777546 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.778013 kubelet[3449]: E1216 12:30:11.777991 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.778013 kubelet[3449]: W1216 12:30:11.778010 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.778085 kubelet[3449]: E1216 12:30:11.778022 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.778085 kubelet[3449]: I1216 12:30:11.778042 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8dc2a4ce-1cc0-4206-a57c-f0513b577cd6-socket-dir\") pod \"csi-node-driver-fbpsm\" (UID: \"8dc2a4ce-1cc0-4206-a57c-f0513b577cd6\") " pod="calico-system/csi-node-driver-fbpsm" Dec 16 12:30:11.778531 kubelet[3449]: E1216 12:30:11.778514 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.778531 kubelet[3449]: W1216 12:30:11.778530 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.778668 kubelet[3449]: E1216 12:30:11.778541 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.778668 kubelet[3449]: I1216 12:30:11.778584 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8dc2a4ce-1cc0-4206-a57c-f0513b577cd6-varrun\") pod \"csi-node-driver-fbpsm\" (UID: \"8dc2a4ce-1cc0-4206-a57c-f0513b577cd6\") " pod="calico-system/csi-node-driver-fbpsm" Dec 16 12:30:11.779086 kubelet[3449]: E1216 12:30:11.779061 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.779086 kubelet[3449]: W1216 12:30:11.779079 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.779240 kubelet[3449]: E1216 12:30:11.779091 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.779240 kubelet[3449]: I1216 12:30:11.779218 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8dc2a4ce-1cc0-4206-a57c-f0513b577cd6-kubelet-dir\") pod \"csi-node-driver-fbpsm\" (UID: \"8dc2a4ce-1cc0-4206-a57c-f0513b577cd6\") " pod="calico-system/csi-node-driver-fbpsm" Dec 16 12:30:11.779587 kubelet[3449]: E1216 12:30:11.779384 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.779587 kubelet[3449]: W1216 12:30:11.779397 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.779587 kubelet[3449]: E1216 12:30:11.779408 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.780098 kubelet[3449]: E1216 12:30:11.779912 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.780098 kubelet[3449]: W1216 12:30:11.779943 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.780098 kubelet[3449]: E1216 12:30:11.779957 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.780494 kubelet[3449]: E1216 12:30:11.780440 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.780855 kubelet[3449]: W1216 12:30:11.780768 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.780855 kubelet[3449]: E1216 12:30:11.780789 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.781291 kubelet[3449]: E1216 12:30:11.781140 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.781291 kubelet[3449]: W1216 12:30:11.781159 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.781291 kubelet[3449]: E1216 12:30:11.781170 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.781678 kubelet[3449]: E1216 12:30:11.781631 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.782113 kubelet[3449]: W1216 12:30:11.781839 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.782113 kubelet[3449]: E1216 12:30:11.781858 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.782113 kubelet[3449]: I1216 12:30:11.781990 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgjdb\" (UniqueName: \"kubernetes.io/projected/8dc2a4ce-1cc0-4206-a57c-f0513b577cd6-kube-api-access-pgjdb\") pod \"csi-node-driver-fbpsm\" (UID: \"8dc2a4ce-1cc0-4206-a57c-f0513b577cd6\") " pod="calico-system/csi-node-driver-fbpsm" Dec 16 12:30:11.782532 kubelet[3449]: E1216 12:30:11.782517 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.782799 kubelet[3449]: W1216 12:30:11.782608 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.782799 kubelet[3449]: E1216 12:30:11.782625 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.783087 kubelet[3449]: E1216 12:30:11.783072 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.783148 kubelet[3449]: W1216 12:30:11.783138 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.783203 kubelet[3449]: E1216 12:30:11.783185 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.783577 kubelet[3449]: E1216 12:30:11.783544 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.783815 kubelet[3449]: W1216 12:30:11.783667 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.783815 kubelet[3449]: E1216 12:30:11.783686 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.784100 kubelet[3449]: E1216 12:30:11.784079 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.784447 kubelet[3449]: W1216 12:30:11.784188 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.784447 kubelet[3449]: E1216 12:30:11.784204 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.805569 containerd[1921]: time="2025-12-16T12:30:11.805367768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-t46c2,Uid:bbd55a1c-545c-4470-808a-79cec992eefd,Namespace:calico-system,Attempt:0,} returns sandbox id \"9a0bcda73e9f1dd506cdf63aceb0aad7bc12603d98bf73697ebcdd71b86eee52\"" Dec 16 12:30:11.882791 kubelet[3449]: E1216 12:30:11.882670 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.882791 kubelet[3449]: W1216 12:30:11.882697 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.882791 kubelet[3449]: E1216 12:30:11.882717 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.883207 kubelet[3449]: E1216 12:30:11.883145 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.883207 kubelet[3449]: W1216 12:30:11.883158 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.883207 kubelet[3449]: E1216 12:30:11.883169 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.883356 kubelet[3449]: E1216 12:30:11.883334 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.883356 kubelet[3449]: W1216 12:30:11.883350 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.883500 kubelet[3449]: E1216 12:30:11.883363 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.883500 kubelet[3449]: E1216 12:30:11.883481 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.883500 kubelet[3449]: W1216 12:30:11.883487 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.883500 kubelet[3449]: E1216 12:30:11.883494 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.883649 kubelet[3449]: E1216 12:30:11.883604 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.883649 kubelet[3449]: W1216 12:30:11.883609 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.883649 kubelet[3449]: E1216 12:30:11.883615 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.883775 kubelet[3449]: E1216 12:30:11.883765 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.883775 kubelet[3449]: W1216 12:30:11.883772 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.883818 kubelet[3449]: E1216 12:30:11.883779 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.883914 kubelet[3449]: E1216 12:30:11.883901 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.883914 kubelet[3449]: W1216 12:30:11.883910 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.883960 kubelet[3449]: E1216 12:30:11.883919 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.884050 kubelet[3449]: E1216 12:30:11.884034 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.884050 kubelet[3449]: W1216 12:30:11.884041 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.884050 kubelet[3449]: E1216 12:30:11.884047 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.884301 kubelet[3449]: E1216 12:30:11.884291 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.884410 kubelet[3449]: W1216 12:30:11.884347 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.884410 kubelet[3449]: E1216 12:30:11.884363 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.884694 kubelet[3449]: E1216 12:30:11.884631 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.884694 kubelet[3449]: W1216 12:30:11.884642 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.884694 kubelet[3449]: E1216 12:30:11.884652 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.885021 kubelet[3449]: E1216 12:30:11.884924 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.885021 kubelet[3449]: W1216 12:30:11.884936 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.885021 kubelet[3449]: E1216 12:30:11.884946 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.885319 kubelet[3449]: E1216 12:30:11.885245 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.885319 kubelet[3449]: W1216 12:30:11.885257 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.885319 kubelet[3449]: E1216 12:30:11.885266 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.885640 kubelet[3449]: E1216 12:30:11.885549 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.885806 kubelet[3449]: W1216 12:30:11.885732 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.885806 kubelet[3449]: E1216 12:30:11.885749 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.886088 kubelet[3449]: E1216 12:30:11.885999 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.886088 kubelet[3449]: W1216 12:30:11.886010 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.886088 kubelet[3449]: E1216 12:30:11.886020 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.886282 kubelet[3449]: E1216 12:30:11.886217 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.886282 kubelet[3449]: W1216 12:30:11.886227 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.886282 kubelet[3449]: E1216 12:30:11.886237 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.886553 kubelet[3449]: E1216 12:30:11.886494 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.886553 kubelet[3449]: W1216 12:30:11.886505 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.886553 kubelet[3449]: E1216 12:30:11.886514 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.886896 kubelet[3449]: E1216 12:30:11.886812 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.886896 kubelet[3449]: W1216 12:30:11.886824 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.886896 kubelet[3449]: E1216 12:30:11.886833 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.887254 kubelet[3449]: E1216 12:30:11.887111 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.887254 kubelet[3449]: W1216 12:30:11.887126 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.887254 kubelet[3449]: E1216 12:30:11.887135 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.887334 kubelet[3449]: E1216 12:30:11.887318 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.887334 kubelet[3449]: W1216 12:30:11.887327 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.887375 kubelet[3449]: E1216 12:30:11.887336 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.887448 kubelet[3449]: E1216 12:30:11.887425 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.887448 kubelet[3449]: W1216 12:30:11.887436 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.887448 kubelet[3449]: E1216 12:30:11.887443 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.887526 kubelet[3449]: E1216 12:30:11.887515 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.887526 kubelet[3449]: W1216 12:30:11.887522 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.887704 kubelet[3449]: E1216 12:30:11.887529 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.887932 kubelet[3449]: E1216 12:30:11.887913 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.887932 kubelet[3449]: W1216 12:30:11.887926 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.887932 kubelet[3449]: E1216 12:30:11.887935 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.888158 kubelet[3449]: E1216 12:30:11.888139 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.888158 kubelet[3449]: W1216 12:30:11.888148 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.888158 kubelet[3449]: E1216 12:30:11.888156 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.888273 kubelet[3449]: E1216 12:30:11.888251 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.888273 kubelet[3449]: W1216 12:30:11.888262 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.888273 kubelet[3449]: E1216 12:30:11.888268 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:11.888478 kubelet[3449]: E1216 12:30:11.888400 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.888478 kubelet[3449]: W1216 12:30:11.888406 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.888478 kubelet[3449]: E1216 12:30:11.888412 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:11.894648 kubelet[3449]: E1216 12:30:11.894626 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:11.894648 kubelet[3449]: W1216 12:30:11.894641 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:11.894648 kubelet[3449]: E1216 12:30:11.894651 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:13.011149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount642402072.mount: Deactivated successfully. Dec 16 12:30:13.426920 containerd[1921]: time="2025-12-16T12:30:13.426868743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:13.429519 containerd[1921]: time="2025-12-16T12:30:13.429490417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Dec 16 12:30:13.432997 containerd[1921]: time="2025-12-16T12:30:13.432950121Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:13.437077 containerd[1921]: time="2025-12-16T12:30:13.437027515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:13.437592 containerd[1921]: time="2025-12-16T12:30:13.437289507Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id 
\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.784051908s" Dec 16 12:30:13.437592 containerd[1921]: time="2025-12-16T12:30:13.437320115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 16 12:30:13.439674 containerd[1921]: time="2025-12-16T12:30:13.439655429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 12:30:13.455438 containerd[1921]: time="2025-12-16T12:30:13.455379252Z" level=info msg="CreateContainer within sandbox \"7a5a10a7dc024c640daa6536b75c76aa57e58310f778d16f2fc5afab6ed7bb6b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 12:30:13.475184 containerd[1921]: time="2025-12-16T12:30:13.475135492Z" level=info msg="Container 6ab431bdf7eab1996e474266dc01803bbae6607e5208714f18868fcc5a3c822e: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:13.494767 containerd[1921]: time="2025-12-16T12:30:13.494718959Z" level=info msg="CreateContainer within sandbox \"7a5a10a7dc024c640daa6536b75c76aa57e58310f778d16f2fc5afab6ed7bb6b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6ab431bdf7eab1996e474266dc01803bbae6607e5208714f18868fcc5a3c822e\"" Dec 16 12:30:13.495852 containerd[1921]: time="2025-12-16T12:30:13.495577871Z" level=info msg="StartContainer for \"6ab431bdf7eab1996e474266dc01803bbae6607e5208714f18868fcc5a3c822e\"" Dec 16 12:30:13.496775 containerd[1921]: time="2025-12-16T12:30:13.496751472Z" level=info msg="connecting to shim 6ab431bdf7eab1996e474266dc01803bbae6607e5208714f18868fcc5a3c822e" address="unix:///run/containerd/s/563f98db2dd7503f5fdaa993861bd1fcbec29a356e077499e553663d1b078025" protocol=ttrpc version=3 Dec 16 
12:30:13.514711 systemd[1]: Started cri-containerd-6ab431bdf7eab1996e474266dc01803bbae6607e5208714f18868fcc5a3c822e.scope - libcontainer container 6ab431bdf7eab1996e474266dc01803bbae6607e5208714f18868fcc5a3c822e. Dec 16 12:30:13.559969 containerd[1921]: time="2025-12-16T12:30:13.559900012Z" level=info msg="StartContainer for \"6ab431bdf7eab1996e474266dc01803bbae6607e5208714f18868fcc5a3c822e\" returns successfully" Dec 16 12:30:13.674855 kubelet[3449]: E1216 12:30:13.674152 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fbpsm" podUID="8dc2a4ce-1cc0-4206-a57c-f0513b577cd6" Dec 16 12:30:13.789237 kubelet[3449]: E1216 12:30:13.789154 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.789237 kubelet[3449]: W1216 12:30:13.789176 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.789237 kubelet[3449]: E1216 12:30:13.789198 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:13.789765 kubelet[3449]: E1216 12:30:13.789671 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.789765 kubelet[3449]: W1216 12:30:13.789684 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.789765 kubelet[3449]: E1216 12:30:13.789725 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:13.790023 kubelet[3449]: E1216 12:30:13.790011 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.790090 kubelet[3449]: W1216 12:30:13.790079 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.790136 kubelet[3449]: E1216 12:30:13.790125 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:13.790391 kubelet[3449]: E1216 12:30:13.790329 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.790391 kubelet[3449]: W1216 12:30:13.790340 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.790391 kubelet[3449]: E1216 12:30:13.790349 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:13.790697 kubelet[3449]: E1216 12:30:13.790641 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.790697 kubelet[3449]: W1216 12:30:13.790653 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.790697 kubelet[3449]: E1216 12:30:13.790663 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:13.790973 kubelet[3449]: E1216 12:30:13.790920 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.790973 kubelet[3449]: W1216 12:30:13.790931 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.790973 kubelet[3449]: E1216 12:30:13.790940 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:13.791285 kubelet[3449]: E1216 12:30:13.791218 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.791285 kubelet[3449]: W1216 12:30:13.791231 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.791285 kubelet[3449]: E1216 12:30:13.791240 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:13.791585 kubelet[3449]: E1216 12:30:13.791545 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.791773 kubelet[3449]: W1216 12:30:13.791671 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.791773 kubelet[3449]: E1216 12:30:13.791689 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:13.791988 kubelet[3449]: E1216 12:30:13.791978 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.792083 kubelet[3449]: W1216 12:30:13.792036 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.792083 kubelet[3449]: E1216 12:30:13.792050 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:13.792281 kubelet[3449]: E1216 12:30:13.792269 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.792385 kubelet[3449]: W1216 12:30:13.792336 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.792385 kubelet[3449]: E1216 12:30:13.792352 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:13.792642 kubelet[3449]: E1216 12:30:13.792587 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.792642 kubelet[3449]: W1216 12:30:13.792598 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.792642 kubelet[3449]: E1216 12:30:13.792607 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:13.792868 kubelet[3449]: E1216 12:30:13.792856 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.793271 kubelet[3449]: W1216 12:30:13.793178 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.793271 kubelet[3449]: E1216 12:30:13.793199 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:13.793807 kubelet[3449]: E1216 12:30:13.793623 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.793807 kubelet[3449]: W1216 12:30:13.793635 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.793807 kubelet[3449]: E1216 12:30:13.793645 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:13.794076 kubelet[3449]: E1216 12:30:13.794063 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.794359 kubelet[3449]: W1216 12:30:13.794193 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.794359 kubelet[3449]: E1216 12:30:13.794221 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:13.794825 kubelet[3449]: E1216 12:30:13.794715 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.794825 kubelet[3449]: W1216 12:30:13.794728 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.794825 kubelet[3449]: E1216 12:30:13.794738 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:13.799296 kubelet[3449]: E1216 12:30:13.799197 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.799296 kubelet[3449]: W1216 12:30:13.799259 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.799296 kubelet[3449]: E1216 12:30:13.799273 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:13.799966 kubelet[3449]: E1216 12:30:13.799892 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.799966 kubelet[3449]: W1216 12:30:13.799919 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.799966 kubelet[3449]: E1216 12:30:13.799931 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:13.800617 kubelet[3449]: E1216 12:30:13.800600 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.800822 kubelet[3449]: W1216 12:30:13.800790 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.800822 kubelet[3449]: E1216 12:30:13.800809 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:13.801390 kubelet[3449]: E1216 12:30:13.801355 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.801390 kubelet[3449]: W1216 12:30:13.801367 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.801390 kubelet[3449]: E1216 12:30:13.801378 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:13.801831 kubelet[3449]: E1216 12:30:13.801787 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.801831 kubelet[3449]: W1216 12:30:13.801804 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.801831 kubelet[3449]: E1216 12:30:13.801814 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:13.802159 kubelet[3449]: E1216 12:30:13.802128 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.802159 kubelet[3449]: W1216 12:30:13.802138 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.802159 kubelet[3449]: E1216 12:30:13.802148 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:13.802699 kubelet[3449]: E1216 12:30:13.802686 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.802786 kubelet[3449]: W1216 12:30:13.802759 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.802786 kubelet[3449]: E1216 12:30:13.802773 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:13.803148 kubelet[3449]: E1216 12:30:13.803137 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.803219 kubelet[3449]: W1216 12:30:13.803209 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.803266 kubelet[3449]: E1216 12:30:13.803257 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:13.803623 kubelet[3449]: E1216 12:30:13.803542 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.803623 kubelet[3449]: W1216 12:30:13.803596 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.803623 kubelet[3449]: E1216 12:30:13.803610 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:13.803975 kubelet[3449]: E1216 12:30:13.803944 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.803975 kubelet[3449]: W1216 12:30:13.803956 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.803975 kubelet[3449]: E1216 12:30:13.803965 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:13.804274 kubelet[3449]: E1216 12:30:13.804244 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.804274 kubelet[3449]: W1216 12:30:13.804254 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.804274 kubelet[3449]: E1216 12:30:13.804264 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:13.804592 kubelet[3449]: E1216 12:30:13.804549 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.804689 kubelet[3449]: W1216 12:30:13.804647 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.804689 kubelet[3449]: E1216 12:30:13.804678 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:13.804967 kubelet[3449]: E1216 12:30:13.804953 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.805080 kubelet[3449]: W1216 12:30:13.805023 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.805080 kubelet[3449]: E1216 12:30:13.805038 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:13.805308 kubelet[3449]: E1216 12:30:13.805297 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.805391 kubelet[3449]: W1216 12:30:13.805367 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.805391 kubelet[3449]: E1216 12:30:13.805381 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:13.805654 kubelet[3449]: E1216 12:30:13.805624 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.805654 kubelet[3449]: W1216 12:30:13.805635 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.805654 kubelet[3449]: E1216 12:30:13.805645 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:13.806241 kubelet[3449]: E1216 12:30:13.806208 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.806241 kubelet[3449]: W1216 12:30:13.806220 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.806241 kubelet[3449]: E1216 12:30:13.806230 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:13.806675 kubelet[3449]: E1216 12:30:13.806645 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.806675 kubelet[3449]: W1216 12:30:13.806658 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.806834 kubelet[3449]: E1216 12:30:13.806761 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:13.807168 kubelet[3449]: E1216 12:30:13.807104 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:13.807168 kubelet[3449]: W1216 12:30:13.807116 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:13.807280 kubelet[3449]: E1216 12:30:13.807244 3449 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:14.570682 containerd[1921]: time="2025-12-16T12:30:14.570608942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:14.575025 containerd[1921]: time="2025-12-16T12:30:14.574853130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Dec 16 12:30:14.578860 containerd[1921]: time="2025-12-16T12:30:14.578812869Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:14.583662 containerd[1921]: time="2025-12-16T12:30:14.583604863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:14.584194 containerd[1921]: time="2025-12-16T12:30:14.583918728Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.143804623s" Dec 16 12:30:14.584194 containerd[1921]: time="2025-12-16T12:30:14.583950065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 16 12:30:14.593273 containerd[1921]: time="2025-12-16T12:30:14.593243725Z" level=info msg="CreateContainer within sandbox \"9a0bcda73e9f1dd506cdf63aceb0aad7bc12603d98bf73697ebcdd71b86eee52\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 16 12:30:14.617534 containerd[1921]: time="2025-12-16T12:30:14.615828370Z" level=info msg="Container 772c20dc04127a7d337c9556c625d1d79b2fa7088d40626db0fe6502ca680e1c: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:14.641980 containerd[1921]: time="2025-12-16T12:30:14.641904654Z" level=info msg="CreateContainer within sandbox \"9a0bcda73e9f1dd506cdf63aceb0aad7bc12603d98bf73697ebcdd71b86eee52\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"772c20dc04127a7d337c9556c625d1d79b2fa7088d40626db0fe6502ca680e1c\"" Dec 16 12:30:14.643089 containerd[1921]: time="2025-12-16T12:30:14.643030853Z" level=info msg="StartContainer for \"772c20dc04127a7d337c9556c625d1d79b2fa7088d40626db0fe6502ca680e1c\"" Dec 16 12:30:14.644450 containerd[1921]: time="2025-12-16T12:30:14.644372353Z" level=info msg="connecting to shim 772c20dc04127a7d337c9556c625d1d79b2fa7088d40626db0fe6502ca680e1c" address="unix:///run/containerd/s/37ebb3a46bb01d6b1323750f564c0f8ecfc4455e54b2cb098d6fe26c77673607" protocol=ttrpc version=3 Dec 16 12:30:14.668717 systemd[1]: Started cri-containerd-772c20dc04127a7d337c9556c625d1d79b2fa7088d40626db0fe6502ca680e1c.scope - libcontainer container 772c20dc04127a7d337c9556c625d1d79b2fa7088d40626db0fe6502ca680e1c. Dec 16 12:30:14.734272 containerd[1921]: time="2025-12-16T12:30:14.733940289Z" level=info msg="StartContainer for \"772c20dc04127a7d337c9556c625d1d79b2fa7088d40626db0fe6502ca680e1c\" returns successfully" Dec 16 12:30:14.740240 systemd[1]: cri-containerd-772c20dc04127a7d337c9556c625d1d79b2fa7088d40626db0fe6502ca680e1c.scope: Deactivated successfully. 
Dec 16 12:30:14.743741 containerd[1921]: time="2025-12-16T12:30:14.743647569Z" level=info msg="received container exit event container_id:\"772c20dc04127a7d337c9556c625d1d79b2fa7088d40626db0fe6502ca680e1c\" id:\"772c20dc04127a7d337c9556c625d1d79b2fa7088d40626db0fe6502ca680e1c\" pid:4122 exited_at:{seconds:1765888214 nanos:743107074}" Dec 16 12:30:14.747527 kubelet[3449]: I1216 12:30:14.747497 3449 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:30:14.770150 kubelet[3449]: I1216 12:30:14.770096 3449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f77b5fdf9-7bl57" podStartSLOduration=1.984420381 podStartE2EDuration="3.769826208s" podCreationTimestamp="2025-12-16 12:30:11 +0000 UTC" firstStartedPulling="2025-12-16 12:30:11.652961502 +0000 UTC m=+22.069342934" lastFinishedPulling="2025-12-16 12:30:13.438367329 +0000 UTC m=+23.854748761" observedRunningTime="2025-12-16 12:30:13.758397797 +0000 UTC m=+24.174779229" watchObservedRunningTime="2025-12-16 12:30:14.769826208 +0000 UTC m=+25.186207640" Dec 16 12:30:14.775505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-772c20dc04127a7d337c9556c625d1d79b2fa7088d40626db0fe6502ca680e1c-rootfs.mount: Deactivated successfully. 
Dec 16 12:30:15.674551 kubelet[3449]: E1216 12:30:15.674180 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fbpsm" podUID="8dc2a4ce-1cc0-4206-a57c-f0513b577cd6" Dec 16 12:30:16.756420 containerd[1921]: time="2025-12-16T12:30:16.755986240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:30:17.674709 kubelet[3449]: E1216 12:30:17.674389 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fbpsm" podUID="8dc2a4ce-1cc0-4206-a57c-f0513b577cd6" Dec 16 12:30:18.998964 containerd[1921]: time="2025-12-16T12:30:18.998908229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:19.002078 containerd[1921]: time="2025-12-16T12:30:19.002044778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Dec 16 12:30:19.005676 containerd[1921]: time="2025-12-16T12:30:19.005648476Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:19.009985 containerd[1921]: time="2025-12-16T12:30:19.009951425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:19.010393 containerd[1921]: time="2025-12-16T12:30:19.010365028Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" 
with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.253225348s" Dec 16 12:30:19.010428 containerd[1921]: time="2025-12-16T12:30:19.010396341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 12:30:19.017869 containerd[1921]: time="2025-12-16T12:30:19.017839455Z" level=info msg="CreateContainer within sandbox \"9a0bcda73e9f1dd506cdf63aceb0aad7bc12603d98bf73697ebcdd71b86eee52\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:30:19.040135 containerd[1921]: time="2025-12-16T12:30:19.038676749Z" level=info msg="Container b601c8ed2f221381ee77e8ba05402a21f8818a31ff536aeeb78dea869eb51506: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:19.059971 containerd[1921]: time="2025-12-16T12:30:19.059886093Z" level=info msg="CreateContainer within sandbox \"9a0bcda73e9f1dd506cdf63aceb0aad7bc12603d98bf73697ebcdd71b86eee52\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b601c8ed2f221381ee77e8ba05402a21f8818a31ff536aeeb78dea869eb51506\"" Dec 16 12:30:19.060555 containerd[1921]: time="2025-12-16T12:30:19.060500189Z" level=info msg="StartContainer for \"b601c8ed2f221381ee77e8ba05402a21f8818a31ff536aeeb78dea869eb51506\"" Dec 16 12:30:19.063423 containerd[1921]: time="2025-12-16T12:30:19.063390316Z" level=info msg="connecting to shim b601c8ed2f221381ee77e8ba05402a21f8818a31ff536aeeb78dea869eb51506" address="unix:///run/containerd/s/37ebb3a46bb01d6b1323750f564c0f8ecfc4455e54b2cb098d6fe26c77673607" protocol=ttrpc version=3 Dec 16 12:30:19.079711 systemd[1]: Started cri-containerd-b601c8ed2f221381ee77e8ba05402a21f8818a31ff536aeeb78dea869eb51506.scope - libcontainer container 
b601c8ed2f221381ee77e8ba05402a21f8818a31ff536aeeb78dea869eb51506. Dec 16 12:30:19.140022 containerd[1921]: time="2025-12-16T12:30:19.139928066Z" level=info msg="StartContainer for \"b601c8ed2f221381ee77e8ba05402a21f8818a31ff536aeeb78dea869eb51506\" returns successfully" Dec 16 12:30:19.675804 kubelet[3449]: E1216 12:30:19.675746 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fbpsm" podUID="8dc2a4ce-1cc0-4206-a57c-f0513b577cd6" Dec 16 12:30:20.247109 containerd[1921]: time="2025-12-16T12:30:20.247059735Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:30:20.249314 systemd[1]: cri-containerd-b601c8ed2f221381ee77e8ba05402a21f8818a31ff536aeeb78dea869eb51506.scope: Deactivated successfully. Dec 16 12:30:20.250120 systemd[1]: cri-containerd-b601c8ed2f221381ee77e8ba05402a21f8818a31ff536aeeb78dea869eb51506.scope: Consumed 326ms CPU time, 186.9M memory peak, 165.9M written to disk. 
Dec 16 12:30:20.251653 containerd[1921]: time="2025-12-16T12:30:20.251490352Z" level=info msg="received container exit event container_id:\"b601c8ed2f221381ee77e8ba05402a21f8818a31ff536aeeb78dea869eb51506\" id:\"b601c8ed2f221381ee77e8ba05402a21f8818a31ff536aeeb78dea869eb51506\" pid:4183 exited_at:{seconds:1765888220 nanos:251299211}" Dec 16 12:30:20.254899 kubelet[3449]: I1216 12:30:20.254879 3449 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 16 12:30:20.277278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b601c8ed2f221381ee77e8ba05402a21f8818a31ff536aeeb78dea869eb51506-rootfs.mount: Deactivated successfully. Dec 16 12:30:21.116472 systemd[1]: Created slice kubepods-burstable-pod57fe338c_87eb_41e3_8a54_022000644fe1.slice - libcontainer container kubepods-burstable-pod57fe338c_87eb_41e3_8a54_022000644fe1.slice. Dec 16 12:30:21.132262 systemd[1]: Created slice kubepods-burstable-pod1e1bdb89_484a_4c6c_abf3_1a7dd9af5720.slice - libcontainer container kubepods-burstable-pod1e1bdb89_484a_4c6c_abf3_1a7dd9af5720.slice. Dec 16 12:30:21.145732 systemd[1]: Created slice kubepods-besteffort-pod12a67fa6_b880_41bc_a39b_0d6c266384bf.slice - libcontainer container kubepods-besteffort-pod12a67fa6_b880_41bc_a39b_0d6c266384bf.slice. 
Dec 16 12:30:21.149257 kubelet[3449]: I1216 12:30:21.148916 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e1bdb89-484a-4c6c-abf3-1a7dd9af5720-config-volume\") pod \"coredns-66bc5c9577-22b49\" (UID: \"1e1bdb89-484a-4c6c-abf3-1a7dd9af5720\") " pod="kube-system/coredns-66bc5c9577-22b49" Dec 16 12:30:21.150031 kubelet[3449]: I1216 12:30:21.149976 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57fe338c-87eb-41e3-8a54-022000644fe1-config-volume\") pod \"coredns-66bc5c9577-d45bz\" (UID: \"57fe338c-87eb-41e3-8a54-022000644fe1\") " pod="kube-system/coredns-66bc5c9577-d45bz" Dec 16 12:30:21.150031 kubelet[3449]: I1216 12:30:21.150001 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw7tg\" (UniqueName: \"kubernetes.io/projected/12a67fa6-b880-41bc-a39b-0d6c266384bf-kube-api-access-mw7tg\") pod \"calico-apiserver-6879cffd7b-mkgxz\" (UID: \"12a67fa6-b880-41bc-a39b-0d6c266384bf\") " pod="calico-apiserver/calico-apiserver-6879cffd7b-mkgxz" Dec 16 12:30:21.150031 kubelet[3449]: I1216 12:30:21.150014 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4dc67b6f-ad58-4f41-a48a-50245539bb0d-goldmane-key-pair\") pod \"goldmane-7c778bb748-d9x7j\" (UID: \"4dc67b6f-ad58-4f41-a48a-50245539bb0d\") " pod="calico-system/goldmane-7c778bb748-d9x7j" Dec 16 12:30:21.150235 kubelet[3449]: I1216 12:30:21.150175 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2s4w\" (UniqueName: \"kubernetes.io/projected/4dc67b6f-ad58-4f41-a48a-50245539bb0d-kube-api-access-h2s4w\") pod \"goldmane-7c778bb748-d9x7j\" (UID: 
\"4dc67b6f-ad58-4f41-a48a-50245539bb0d\") " pod="calico-system/goldmane-7c778bb748-d9x7j" Dec 16 12:30:21.150235 kubelet[3449]: I1216 12:30:21.150200 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55ccn\" (UniqueName: \"kubernetes.io/projected/1e1bdb89-484a-4c6c-abf3-1a7dd9af5720-kube-api-access-55ccn\") pod \"coredns-66bc5c9577-22b49\" (UID: \"1e1bdb89-484a-4c6c-abf3-1a7dd9af5720\") " pod="kube-system/coredns-66bc5c9577-22b49" Dec 16 12:30:21.150235 kubelet[3449]: I1216 12:30:21.150213 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dc67b6f-ad58-4f41-a48a-50245539bb0d-config\") pod \"goldmane-7c778bb748-d9x7j\" (UID: \"4dc67b6f-ad58-4f41-a48a-50245539bb0d\") " pod="calico-system/goldmane-7c778bb748-d9x7j" Dec 16 12:30:21.150398 kubelet[3449]: I1216 12:30:21.150331 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4hkn\" (UniqueName: \"kubernetes.io/projected/57fe338c-87eb-41e3-8a54-022000644fe1-kube-api-access-r4hkn\") pod \"coredns-66bc5c9577-d45bz\" (UID: \"57fe338c-87eb-41e3-8a54-022000644fe1\") " pod="kube-system/coredns-66bc5c9577-d45bz" Dec 16 12:30:21.150398 kubelet[3449]: I1216 12:30:21.150350 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/12a67fa6-b880-41bc-a39b-0d6c266384bf-calico-apiserver-certs\") pod \"calico-apiserver-6879cffd7b-mkgxz\" (UID: \"12a67fa6-b880-41bc-a39b-0d6c266384bf\") " pod="calico-apiserver/calico-apiserver-6879cffd7b-mkgxz" Dec 16 12:30:21.150398 kubelet[3449]: I1216 12:30:21.150363 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4dc67b6f-ad58-4f41-a48a-50245539bb0d-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-d9x7j\" (UID: \"4dc67b6f-ad58-4f41-a48a-50245539bb0d\") " pod="calico-system/goldmane-7c778bb748-d9x7j" Dec 16 12:30:21.157466 systemd[1]: Created slice kubepods-besteffort-pod4dc67b6f_ad58_4f41_a48a_50245539bb0d.slice - libcontainer container kubepods-besteffort-pod4dc67b6f_ad58_4f41_a48a_50245539bb0d.slice. Dec 16 12:30:21.163213 systemd[1]: Created slice kubepods-besteffort-pod36d9148b_0dfc_4863_9a1f_1111dc89554f.slice - libcontainer container kubepods-besteffort-pod36d9148b_0dfc_4863_9a1f_1111dc89554f.slice. Dec 16 12:30:21.169529 systemd[1]: Created slice kubepods-besteffort-pod746d77af_37dd_4af0_98d6_e8786f6ddd62.slice - libcontainer container kubepods-besteffort-pod746d77af_37dd_4af0_98d6_e8786f6ddd62.slice. Dec 16 12:30:21.175105 systemd[1]: Created slice kubepods-besteffort-pod8aede75b_84c4_4230_866b_6bdfa406b3b1.slice - libcontainer container kubepods-besteffort-pod8aede75b_84c4_4230_866b_6bdfa406b3b1.slice. 
Dec 16 12:30:21.251520 kubelet[3449]: I1216 12:30:21.251453 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8aede75b-84c4-4230-866b-6bdfa406b3b1-calico-apiserver-certs\") pod \"calico-apiserver-6879cffd7b-88t7g\" (UID: \"8aede75b-84c4-4230-866b-6bdfa406b3b1\") " pod="calico-apiserver/calico-apiserver-6879cffd7b-88t7g" Dec 16 12:30:21.251520 kubelet[3449]: I1216 12:30:21.251508 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v2kn\" (UniqueName: \"kubernetes.io/projected/8aede75b-84c4-4230-866b-6bdfa406b3b1-kube-api-access-2v2kn\") pod \"calico-apiserver-6879cffd7b-88t7g\" (UID: \"8aede75b-84c4-4230-866b-6bdfa406b3b1\") " pod="calico-apiserver/calico-apiserver-6879cffd7b-88t7g" Dec 16 12:30:21.251520 kubelet[3449]: I1216 12:30:21.251520 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/36d9148b-0dfc-4863-9a1f-1111dc89554f-whisker-backend-key-pair\") pod \"whisker-6c566dbd88-wd4lt\" (UID: \"36d9148b-0dfc-4863-9a1f-1111dc89554f\") " pod="calico-system/whisker-6c566dbd88-wd4lt" Dec 16 12:30:21.251520 kubelet[3449]: I1216 12:30:21.251530 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36d9148b-0dfc-4863-9a1f-1111dc89554f-whisker-ca-bundle\") pod \"whisker-6c566dbd88-wd4lt\" (UID: \"36d9148b-0dfc-4863-9a1f-1111dc89554f\") " pod="calico-system/whisker-6c566dbd88-wd4lt" Dec 16 12:30:21.251520 kubelet[3449]: I1216 12:30:21.251541 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpmbd\" (UniqueName: \"kubernetes.io/projected/36d9148b-0dfc-4863-9a1f-1111dc89554f-kube-api-access-cpmbd\") pod 
\"whisker-6c566dbd88-wd4lt\" (UID: \"36d9148b-0dfc-4863-9a1f-1111dc89554f\") " pod="calico-system/whisker-6c566dbd88-wd4lt" Dec 16 12:30:21.251838 kubelet[3449]: I1216 12:30:21.251550 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/746d77af-37dd-4af0-98d6-e8786f6ddd62-tigera-ca-bundle\") pod \"calico-kube-controllers-544fc6cbc8-s5srh\" (UID: \"746d77af-37dd-4af0-98d6-e8786f6ddd62\") " pod="calico-system/calico-kube-controllers-544fc6cbc8-s5srh" Dec 16 12:30:21.251838 kubelet[3449]: I1216 12:30:21.251574 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn994\" (UniqueName: \"kubernetes.io/projected/746d77af-37dd-4af0-98d6-e8786f6ddd62-kube-api-access-zn994\") pod \"calico-kube-controllers-544fc6cbc8-s5srh\" (UID: \"746d77af-37dd-4af0-98d6-e8786f6ddd62\") " pod="calico-system/calico-kube-controllers-544fc6cbc8-s5srh" Dec 16 12:30:21.430925 containerd[1921]: time="2025-12-16T12:30:21.430813157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d45bz,Uid:57fe338c-87eb-41e3-8a54-022000644fe1,Namespace:kube-system,Attempt:0,}" Dec 16 12:30:21.448446 containerd[1921]: time="2025-12-16T12:30:21.448038881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-22b49,Uid:1e1bdb89-484a-4c6c-abf3-1a7dd9af5720,Namespace:kube-system,Attempt:0,}" Dec 16 12:30:21.462690 containerd[1921]: time="2025-12-16T12:30:21.462647557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6879cffd7b-mkgxz,Uid:12a67fa6-b880-41bc-a39b-0d6c266384bf,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:30:21.468313 containerd[1921]: time="2025-12-16T12:30:21.468166283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-d9x7j,Uid:4dc67b6f-ad58-4f41-a48a-50245539bb0d,Namespace:calico-system,Attempt:0,}" Dec 16 
12:30:21.475424 containerd[1921]: time="2025-12-16T12:30:21.475384791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c566dbd88-wd4lt,Uid:36d9148b-0dfc-4863-9a1f-1111dc89554f,Namespace:calico-system,Attempt:0,}" Dec 16 12:30:21.483271 containerd[1921]: time="2025-12-16T12:30:21.483230556Z" level=error msg="Failed to destroy network for sandbox \"067b7cc5befd5fd0c03bfeb73b7aa68b12229b1ff82554cdb1e056c188f62b92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.484617 containerd[1921]: time="2025-12-16T12:30:21.484593065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-544fc6cbc8-s5srh,Uid:746d77af-37dd-4af0-98d6-e8786f6ddd62,Namespace:calico-system,Attempt:0,}" Dec 16 12:30:21.511090 containerd[1921]: time="2025-12-16T12:30:21.510999942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6879cffd7b-88t7g,Uid:8aede75b-84c4-4230-866b-6bdfa406b3b1,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:30:21.511460 containerd[1921]: time="2025-12-16T12:30:21.511410017Z" level=error msg="Failed to destroy network for sandbox \"78e1bcd62639eac30b95c1b37d0e6f35855d5e5add0ecdc221c50b424c1b20d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.522250 containerd[1921]: time="2025-12-16T12:30:21.521609262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d45bz,Uid:57fe338c-87eb-41e3-8a54-022000644fe1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"067b7cc5befd5fd0c03bfeb73b7aa68b12229b1ff82554cdb1e056c188f62b92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.522828 kubelet[3449]: E1216 12:30:21.521967 3449 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"067b7cc5befd5fd0c03bfeb73b7aa68b12229b1ff82554cdb1e056c188f62b92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.522828 kubelet[3449]: E1216 12:30:21.522061 3449 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"067b7cc5befd5fd0c03bfeb73b7aa68b12229b1ff82554cdb1e056c188f62b92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-d45bz" Dec 16 12:30:21.522828 kubelet[3449]: E1216 12:30:21.522101 3449 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"067b7cc5befd5fd0c03bfeb73b7aa68b12229b1ff82554cdb1e056c188f62b92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-d45bz" Dec 16 12:30:21.522994 kubelet[3449]: E1216 12:30:21.522946 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-d45bz_kube-system(57fe338c-87eb-41e3-8a54-022000644fe1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-d45bz_kube-system(57fe338c-87eb-41e3-8a54-022000644fe1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"067b7cc5befd5fd0c03bfeb73b7aa68b12229b1ff82554cdb1e056c188f62b92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-d45bz" podUID="57fe338c-87eb-41e3-8a54-022000644fe1" Dec 16 12:30:21.535487 containerd[1921]: time="2025-12-16T12:30:21.535432334Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-22b49,Uid:1e1bdb89-484a-4c6c-abf3-1a7dd9af5720,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"78e1bcd62639eac30b95c1b37d0e6f35855d5e5add0ecdc221c50b424c1b20d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.535781 kubelet[3449]: E1216 12:30:21.535745 3449 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78e1bcd62639eac30b95c1b37d0e6f35855d5e5add0ecdc221c50b424c1b20d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.535845 kubelet[3449]: E1216 12:30:21.535793 3449 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78e1bcd62639eac30b95c1b37d0e6f35855d5e5add0ecdc221c50b424c1b20d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-22b49" Dec 16 12:30:21.535845 kubelet[3449]: E1216 12:30:21.535807 3449 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"78e1bcd62639eac30b95c1b37d0e6f35855d5e5add0ecdc221c50b424c1b20d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-22b49" Dec 16 12:30:21.535893 kubelet[3449]: E1216 12:30:21.535852 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-22b49_kube-system(1e1bdb89-484a-4c6c-abf3-1a7dd9af5720)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-22b49_kube-system(1e1bdb89-484a-4c6c-abf3-1a7dd9af5720)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78e1bcd62639eac30b95c1b37d0e6f35855d5e5add0ecdc221c50b424c1b20d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-22b49" podUID="1e1bdb89-484a-4c6c-abf3-1a7dd9af5720" Dec 16 12:30:21.569505 containerd[1921]: time="2025-12-16T12:30:21.569444753Z" level=error msg="Failed to destroy network for sandbox \"a8798739e16a276d54b0258fa9d2cd138c49f4485cea0e75c275a02ba0f3a3fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.576870 containerd[1921]: time="2025-12-16T12:30:21.576795441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6879cffd7b-mkgxz,Uid:12a67fa6-b880-41bc-a39b-0d6c266384bf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8798739e16a276d54b0258fa9d2cd138c49f4485cea0e75c275a02ba0f3a3fe\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.577741 kubelet[3449]: E1216 12:30:21.577162 3449 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8798739e16a276d54b0258fa9d2cd138c49f4485cea0e75c275a02ba0f3a3fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.577741 kubelet[3449]: E1216 12:30:21.577234 3449 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8798739e16a276d54b0258fa9d2cd138c49f4485cea0e75c275a02ba0f3a3fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6879cffd7b-mkgxz" Dec 16 12:30:21.577741 kubelet[3449]: E1216 12:30:21.577348 3449 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8798739e16a276d54b0258fa9d2cd138c49f4485cea0e75c275a02ba0f3a3fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6879cffd7b-mkgxz" Dec 16 12:30:21.577859 kubelet[3449]: E1216 12:30:21.577423 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6879cffd7b-mkgxz_calico-apiserver(12a67fa6-b880-41bc-a39b-0d6c266384bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6879cffd7b-mkgxz_calico-apiserver(12a67fa6-b880-41bc-a39b-0d6c266384bf)\\\": 
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8798739e16a276d54b0258fa9d2cd138c49f4485cea0e75c275a02ba0f3a3fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-mkgxz" podUID="12a67fa6-b880-41bc-a39b-0d6c266384bf" Dec 16 12:30:21.607821 containerd[1921]: time="2025-12-16T12:30:21.607384711Z" level=error msg="Failed to destroy network for sandbox \"1d9f27eeb6d2210754d2937fac0753b9355ee49ae5e75a628b0e6694555b6a88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.612394 containerd[1921]: time="2025-12-16T12:30:21.612358407Z" level=error msg="Failed to destroy network for sandbox \"2b615b931f966ee1787044eb17706bc13025bc113649c01a62e7be60eef340ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.614719 containerd[1921]: time="2025-12-16T12:30:21.614687142Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c566dbd88-wd4lt,Uid:36d9148b-0dfc-4863-9a1f-1111dc89554f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d9f27eeb6d2210754d2937fac0753b9355ee49ae5e75a628b0e6694555b6a88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.615173 containerd[1921]: time="2025-12-16T12:30:21.615105337Z" level=error msg="Failed to destroy network for sandbox \"db17be40e137e3ffee225ae19922feacf25579d0e71b872d3fec6c3de1fe0e23\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.615382 kubelet[3449]: E1216 12:30:21.615342 3449 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d9f27eeb6d2210754d2937fac0753b9355ee49ae5e75a628b0e6694555b6a88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.615443 kubelet[3449]: E1216 12:30:21.615399 3449 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d9f27eeb6d2210754d2937fac0753b9355ee49ae5e75a628b0e6694555b6a88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c566dbd88-wd4lt" Dec 16 12:30:21.615443 kubelet[3449]: E1216 12:30:21.615415 3449 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d9f27eeb6d2210754d2937fac0753b9355ee49ae5e75a628b0e6694555b6a88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c566dbd88-wd4lt" Dec 16 12:30:21.615520 kubelet[3449]: E1216 12:30:21.615464 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c566dbd88-wd4lt_calico-system(36d9148b-0dfc-4863-9a1f-1111dc89554f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-6c566dbd88-wd4lt_calico-system(36d9148b-0dfc-4863-9a1f-1111dc89554f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d9f27eeb6d2210754d2937fac0753b9355ee49ae5e75a628b0e6694555b6a88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c566dbd88-wd4lt" podUID="36d9148b-0dfc-4863-9a1f-1111dc89554f" Dec 16 12:30:21.618340 containerd[1921]: time="2025-12-16T12:30:21.618142804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6879cffd7b-88t7g,Uid:8aede75b-84c4-4230-866b-6bdfa406b3b1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b615b931f966ee1787044eb17706bc13025bc113649c01a62e7be60eef340ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.618969 kubelet[3449]: E1216 12:30:21.618712 3449 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b615b931f966ee1787044eb17706bc13025bc113649c01a62e7be60eef340ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.618969 kubelet[3449]: E1216 12:30:21.618770 3449 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b615b931f966ee1787044eb17706bc13025bc113649c01a62e7be60eef340ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6879cffd7b-88t7g" Dec 16 12:30:21.618969 kubelet[3449]: E1216 12:30:21.618786 3449 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b615b931f966ee1787044eb17706bc13025bc113649c01a62e7be60eef340ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6879cffd7b-88t7g" Dec 16 12:30:21.619079 kubelet[3449]: E1216 12:30:21.618850 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6879cffd7b-88t7g_calico-apiserver(8aede75b-84c4-4230-866b-6bdfa406b3b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6879cffd7b-88t7g_calico-apiserver(8aede75b-84c4-4230-866b-6bdfa406b3b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b615b931f966ee1787044eb17706bc13025bc113649c01a62e7be60eef340ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-88t7g" podUID="8aede75b-84c4-4230-866b-6bdfa406b3b1" Dec 16 12:30:21.622077 containerd[1921]: time="2025-12-16T12:30:21.622042189Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-d9x7j,Uid:4dc67b6f-ad58-4f41-a48a-50245539bb0d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"db17be40e137e3ffee225ae19922feacf25579d0e71b872d3fec6c3de1fe0e23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 
12:30:21.622590 kubelet[3449]: E1216 12:30:21.622438 3449 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db17be40e137e3ffee225ae19922feacf25579d0e71b872d3fec6c3de1fe0e23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.622671 kubelet[3449]: E1216 12:30:21.622594 3449 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db17be40e137e3ffee225ae19922feacf25579d0e71b872d3fec6c3de1fe0e23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-d9x7j" Dec 16 12:30:21.622671 kubelet[3449]: E1216 12:30:21.622612 3449 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db17be40e137e3ffee225ae19922feacf25579d0e71b872d3fec6c3de1fe0e23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-d9x7j" Dec 16 12:30:21.623611 kubelet[3449]: E1216 12:30:21.622958 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-d9x7j_calico-system(4dc67b6f-ad58-4f41-a48a-50245539bb0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-d9x7j_calico-system(4dc67b6f-ad58-4f41-a48a-50245539bb0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db17be40e137e3ffee225ae19922feacf25579d0e71b872d3fec6c3de1fe0e23\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-d9x7j" podUID="4dc67b6f-ad58-4f41-a48a-50245539bb0d" Dec 16 12:30:21.631631 containerd[1921]: time="2025-12-16T12:30:21.631591865Z" level=error msg="Failed to destroy network for sandbox \"6f2fb0950a50ac01effcf54855b3241c51d4cdf771f7544206d822fdf35f2eb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.635954 containerd[1921]: time="2025-12-16T12:30:21.635916950Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-544fc6cbc8-s5srh,Uid:746d77af-37dd-4af0-98d6-e8786f6ddd62,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f2fb0950a50ac01effcf54855b3241c51d4cdf771f7544206d822fdf35f2eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.636267 kubelet[3449]: E1216 12:30:21.636161 3449 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f2fb0950a50ac01effcf54855b3241c51d4cdf771f7544206d822fdf35f2eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.636267 kubelet[3449]: E1216 12:30:21.636213 3449 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f2fb0950a50ac01effcf54855b3241c51d4cdf771f7544206d822fdf35f2eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-544fc6cbc8-s5srh" Dec 16 12:30:21.636267 kubelet[3449]: E1216 12:30:21.636227 3449 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f2fb0950a50ac01effcf54855b3241c51d4cdf771f7544206d822fdf35f2eb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-544fc6cbc8-s5srh" Dec 16 12:30:21.636418 kubelet[3449]: E1216 12:30:21.636397 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-544fc6cbc8-s5srh_calico-system(746d77af-37dd-4af0-98d6-e8786f6ddd62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-544fc6cbc8-s5srh_calico-system(746d77af-37dd-4af0-98d6-e8786f6ddd62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f2fb0950a50ac01effcf54855b3241c51d4cdf771f7544206d822fdf35f2eb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-544fc6cbc8-s5srh" podUID="746d77af-37dd-4af0-98d6-e8786f6ddd62" Dec 16 12:30:21.679858 systemd[1]: Created slice kubepods-besteffort-pod8dc2a4ce_1cc0_4206_a57c_f0513b577cd6.slice - libcontainer container kubepods-besteffort-pod8dc2a4ce_1cc0_4206_a57c_f0513b577cd6.slice. 
Dec 16 12:30:21.689177 containerd[1921]: time="2025-12-16T12:30:21.688687639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fbpsm,Uid:8dc2a4ce-1cc0-4206-a57c-f0513b577cd6,Namespace:calico-system,Attempt:0,}" Dec 16 12:30:21.727336 containerd[1921]: time="2025-12-16T12:30:21.727294231Z" level=error msg="Failed to destroy network for sandbox \"48c639c61cdf5259c92a6b7a6951271a181141686e9aa5f1420aa2eef842962d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.732984 containerd[1921]: time="2025-12-16T12:30:21.732929576Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fbpsm,Uid:8dc2a4ce-1cc0-4206-a57c-f0513b577cd6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"48c639c61cdf5259c92a6b7a6951271a181141686e9aa5f1420aa2eef842962d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.733385 kubelet[3449]: E1216 12:30:21.733331 3449 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48c639c61cdf5259c92a6b7a6951271a181141686e9aa5f1420aa2eef842962d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:21.733482 kubelet[3449]: E1216 12:30:21.733392 3449 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48c639c61cdf5259c92a6b7a6951271a181141686e9aa5f1420aa2eef842962d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fbpsm" Dec 16 12:30:21.733482 kubelet[3449]: E1216 12:30:21.733408 3449 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48c639c61cdf5259c92a6b7a6951271a181141686e9aa5f1420aa2eef842962d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fbpsm" Dec 16 12:30:21.733482 kubelet[3449]: E1216 12:30:21.733456 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fbpsm_calico-system(8dc2a4ce-1cc0-4206-a57c-f0513b577cd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fbpsm_calico-system(8dc2a4ce-1cc0-4206-a57c-f0513b577cd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48c639c61cdf5259c92a6b7a6951271a181141686e9aa5f1420aa2eef842962d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fbpsm" podUID="8dc2a4ce-1cc0-4206-a57c-f0513b577cd6" Dec 16 12:30:21.771406 containerd[1921]: time="2025-12-16T12:30:21.771324131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:30:22.984933 kubelet[3449]: I1216 12:30:22.984696 3449 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:30:25.632968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3122729458.mount: Deactivated successfully. 
Dec 16 12:30:26.028512 containerd[1921]: time="2025-12-16T12:30:26.028458000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:26.034901 containerd[1921]: time="2025-12-16T12:30:26.034735470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Dec 16 12:30:26.043523 containerd[1921]: time="2025-12-16T12:30:26.043471313Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:26.051469 containerd[1921]: time="2025-12-16T12:30:26.051395797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:26.052083 containerd[1921]: time="2025-12-16T12:30:26.051785639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.28015002s" Dec 16 12:30:26.052083 containerd[1921]: time="2025-12-16T12:30:26.051817072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 12:30:26.071251 containerd[1921]: time="2025-12-16T12:30:26.071201250Z" level=info msg="CreateContainer within sandbox \"9a0bcda73e9f1dd506cdf63aceb0aad7bc12603d98bf73697ebcdd71b86eee52\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:30:26.105583 containerd[1921]: time="2025-12-16T12:30:26.105511403Z" level=info msg="Container 
f1a73ff2efbbd94567e3c88f0babcabd1fce73430842d08418342b56ecd0fb02: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:26.129184 containerd[1921]: time="2025-12-16T12:30:26.129052880Z" level=info msg="CreateContainer within sandbox \"9a0bcda73e9f1dd506cdf63aceb0aad7bc12603d98bf73697ebcdd71b86eee52\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f1a73ff2efbbd94567e3c88f0babcabd1fce73430842d08418342b56ecd0fb02\"" Dec 16 12:30:26.129645 containerd[1921]: time="2025-12-16T12:30:26.129617944Z" level=info msg="StartContainer for \"f1a73ff2efbbd94567e3c88f0babcabd1fce73430842d08418342b56ecd0fb02\"" Dec 16 12:30:26.130903 containerd[1921]: time="2025-12-16T12:30:26.130742095Z" level=info msg="connecting to shim f1a73ff2efbbd94567e3c88f0babcabd1fce73430842d08418342b56ecd0fb02" address="unix:///run/containerd/s/37ebb3a46bb01d6b1323750f564c0f8ecfc4455e54b2cb098d6fe26c77673607" protocol=ttrpc version=3 Dec 16 12:30:26.149717 systemd[1]: Started cri-containerd-f1a73ff2efbbd94567e3c88f0babcabd1fce73430842d08418342b56ecd0fb02.scope - libcontainer container f1a73ff2efbbd94567e3c88f0babcabd1fce73430842d08418342b56ecd0fb02. Dec 16 12:30:26.207891 containerd[1921]: time="2025-12-16T12:30:26.207787697Z" level=info msg="StartContainer for \"f1a73ff2efbbd94567e3c88f0babcabd1fce73430842d08418342b56ecd0fb02\" returns successfully" Dec 16 12:30:26.373155 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:30:26.373593 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 16 12:30:26.583480 kubelet[3449]: I1216 12:30:26.583098 3449 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpmbd\" (UniqueName: \"kubernetes.io/projected/36d9148b-0dfc-4863-9a1f-1111dc89554f-kube-api-access-cpmbd\") pod \"36d9148b-0dfc-4863-9a1f-1111dc89554f\" (UID: \"36d9148b-0dfc-4863-9a1f-1111dc89554f\") " Dec 16 12:30:26.583480 kubelet[3449]: I1216 12:30:26.583149 3449 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/36d9148b-0dfc-4863-9a1f-1111dc89554f-whisker-backend-key-pair\") pod \"36d9148b-0dfc-4863-9a1f-1111dc89554f\" (UID: \"36d9148b-0dfc-4863-9a1f-1111dc89554f\") " Dec 16 12:30:26.587527 kubelet[3449]: I1216 12:30:26.586791 3449 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36d9148b-0dfc-4863-9a1f-1111dc89554f-whisker-ca-bundle\") pod \"36d9148b-0dfc-4863-9a1f-1111dc89554f\" (UID: \"36d9148b-0dfc-4863-9a1f-1111dc89554f\") " Dec 16 12:30:26.589038 kubelet[3449]: I1216 12:30:26.588065 3449 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36d9148b-0dfc-4863-9a1f-1111dc89554f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "36d9148b-0dfc-4863-9a1f-1111dc89554f" (UID: "36d9148b-0dfc-4863-9a1f-1111dc89554f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:30:26.590198 kubelet[3449]: I1216 12:30:26.590170 3449 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36d9148b-0dfc-4863-9a1f-1111dc89554f-kube-api-access-cpmbd" (OuterVolumeSpecName: "kube-api-access-cpmbd") pod "36d9148b-0dfc-4863-9a1f-1111dc89554f" (UID: "36d9148b-0dfc-4863-9a1f-1111dc89554f"). InnerVolumeSpecName "kube-api-access-cpmbd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:30:26.590551 kubelet[3449]: I1216 12:30:26.590512 3449 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d9148b-0dfc-4863-9a1f-1111dc89554f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "36d9148b-0dfc-4863-9a1f-1111dc89554f" (UID: "36d9148b-0dfc-4863-9a1f-1111dc89554f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:30:26.632628 systemd[1]: var-lib-kubelet-pods-36d9148b\x2d0dfc\x2d4863\x2d9a1f\x2d1111dc89554f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcpmbd.mount: Deactivated successfully. Dec 16 12:30:26.632938 systemd[1]: var-lib-kubelet-pods-36d9148b\x2d0dfc\x2d4863\x2d9a1f\x2d1111dc89554f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 16 12:30:26.688115 kubelet[3449]: I1216 12:30:26.688043 3449 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cpmbd\" (UniqueName: \"kubernetes.io/projected/36d9148b-0dfc-4863-9a1f-1111dc89554f-kube-api-access-cpmbd\") on node \"ci-4459.2.2-a-719f16aeb7\" DevicePath \"\"" Dec 16 12:30:26.688115 kubelet[3449]: I1216 12:30:26.688083 3449 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/36d9148b-0dfc-4863-9a1f-1111dc89554f-whisker-backend-key-pair\") on node \"ci-4459.2.2-a-719f16aeb7\" DevicePath \"\"" Dec 16 12:30:26.688115 kubelet[3449]: I1216 12:30:26.688091 3449 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36d9148b-0dfc-4863-9a1f-1111dc89554f-whisker-ca-bundle\") on node \"ci-4459.2.2-a-719f16aeb7\" DevicePath \"\"" Dec 16 12:30:26.789703 systemd[1]: Removed slice kubepods-besteffort-pod36d9148b_0dfc_4863_9a1f_1111dc89554f.slice - libcontainer container 
kubepods-besteffort-pod36d9148b_0dfc_4863_9a1f_1111dc89554f.slice. Dec 16 12:30:26.803164 kubelet[3449]: I1216 12:30:26.802597 3449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-t46c2" podStartSLOduration=1.558800761 podStartE2EDuration="15.802582047s" podCreationTimestamp="2025-12-16 12:30:11 +0000 UTC" firstStartedPulling="2025-12-16 12:30:11.808698421 +0000 UTC m=+22.225079901" lastFinishedPulling="2025-12-16 12:30:26.052479755 +0000 UTC m=+36.468861187" observedRunningTime="2025-12-16 12:30:26.801801625 +0000 UTC m=+37.218183057" watchObservedRunningTime="2025-12-16 12:30:26.802582047 +0000 UTC m=+37.218963487" Dec 16 12:30:26.880909 systemd[1]: Created slice kubepods-besteffort-pod435c02b4_b196_420c_b959_5f71a5c70015.slice - libcontainer container kubepods-besteffort-pod435c02b4_b196_420c_b959_5f71a5c70015.slice. Dec 16 12:30:26.889376 kubelet[3449]: I1216 12:30:26.888743 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/435c02b4-b196-420c-b959-5f71a5c70015-whisker-backend-key-pair\") pod \"whisker-7b8547f758-vr28q\" (UID: \"435c02b4-b196-420c-b959-5f71a5c70015\") " pod="calico-system/whisker-7b8547f758-vr28q" Dec 16 12:30:26.889376 kubelet[3449]: I1216 12:30:26.888785 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/435c02b4-b196-420c-b959-5f71a5c70015-whisker-ca-bundle\") pod \"whisker-7b8547f758-vr28q\" (UID: \"435c02b4-b196-420c-b959-5f71a5c70015\") " pod="calico-system/whisker-7b8547f758-vr28q" Dec 16 12:30:26.889376 kubelet[3449]: I1216 12:30:26.888799 3449 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4wrq\" (UniqueName: \"kubernetes.io/projected/435c02b4-b196-420c-b959-5f71a5c70015-kube-api-access-q4wrq\") pod 
\"whisker-7b8547f758-vr28q\" (UID: \"435c02b4-b196-420c-b959-5f71a5c70015\") " pod="calico-system/whisker-7b8547f758-vr28q" Dec 16 12:30:27.190768 containerd[1921]: time="2025-12-16T12:30:27.190661810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b8547f758-vr28q,Uid:435c02b4-b196-420c-b959-5f71a5c70015,Namespace:calico-system,Attempt:0,}" Dec 16 12:30:27.301620 systemd-networkd[1485]: cali2f6efceab71: Link UP Dec 16 12:30:27.301853 systemd-networkd[1485]: cali2f6efceab71: Gained carrier Dec 16 12:30:27.315682 containerd[1921]: 2025-12-16 12:30:27.214 [INFO][4502] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:30:27.315682 containerd[1921]: 2025-12-16 12:30:27.237 [INFO][4502] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--719f16aeb7-k8s-whisker--7b8547f758--vr28q-eth0 whisker-7b8547f758- calico-system 435c02b4-b196-420c-b959-5f71a5c70015 871 0 2025-12-16 12:30:26 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7b8547f758 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.2-a-719f16aeb7 whisker-7b8547f758-vr28q eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2f6efceab71 [] [] }} ContainerID="bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d" Namespace="calico-system" Pod="whisker-7b8547f758-vr28q" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-whisker--7b8547f758--vr28q-" Dec 16 12:30:27.315682 containerd[1921]: 2025-12-16 12:30:27.237 [INFO][4502] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d" Namespace="calico-system" Pod="whisker-7b8547f758-vr28q" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-whisker--7b8547f758--vr28q-eth0" Dec 16 12:30:27.315682 containerd[1921]: 2025-12-16 
12:30:27.260 [INFO][4515] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d" HandleID="k8s-pod-network.bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d" Workload="ci--4459.2.2--a--719f16aeb7-k8s-whisker--7b8547f758--vr28q-eth0" Dec 16 12:30:27.315902 containerd[1921]: 2025-12-16 12:30:27.260 [INFO][4515] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d" HandleID="k8s-pod-network.bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d" Workload="ci--4459.2.2--a--719f16aeb7-k8s-whisker--7b8547f758--vr28q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-719f16aeb7", "pod":"whisker-7b8547f758-vr28q", "timestamp":"2025-12-16 12:30:27.260809005 +0000 UTC"}, Hostname:"ci-4459.2.2-a-719f16aeb7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:30:27.315902 containerd[1921]: 2025-12-16 12:30:27.260 [INFO][4515] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:30:27.315902 containerd[1921]: 2025-12-16 12:30:27.261 [INFO][4515] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:30:27.315902 containerd[1921]: 2025-12-16 12:30:27.261 [INFO][4515] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-719f16aeb7' Dec 16 12:30:27.315902 containerd[1921]: 2025-12-16 12:30:27.266 [INFO][4515] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:27.315902 containerd[1921]: 2025-12-16 12:30:27.269 [INFO][4515] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:27.315902 containerd[1921]: 2025-12-16 12:30:27.272 [INFO][4515] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:27.315902 containerd[1921]: 2025-12-16 12:30:27.273 [INFO][4515] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:27.315902 containerd[1921]: 2025-12-16 12:30:27.275 [INFO][4515] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:27.316033 containerd[1921]: 2025-12-16 12:30:27.275 [INFO][4515] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:27.316033 containerd[1921]: 2025-12-16 12:30:27.276 [INFO][4515] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d Dec 16 12:30:27.316033 containerd[1921]: 2025-12-16 12:30:27.286 [INFO][4515] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:27.316033 containerd[1921]: 2025-12-16 12:30:27.290 [INFO][4515] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.38.65/26] block=192.168.38.64/26 handle="k8s-pod-network.bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:27.316033 containerd[1921]: 2025-12-16 12:30:27.290 [INFO][4515] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.65/26] handle="k8s-pod-network.bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:27.316033 containerd[1921]: 2025-12-16 12:30:27.290 [INFO][4515] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:30:27.316033 containerd[1921]: 2025-12-16 12:30:27.291 [INFO][4515] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.65/26] IPv6=[] ContainerID="bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d" HandleID="k8s-pod-network.bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d" Workload="ci--4459.2.2--a--719f16aeb7-k8s-whisker--7b8547f758--vr28q-eth0" Dec 16 12:30:27.316128 containerd[1921]: 2025-12-16 12:30:27.294 [INFO][4502] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d" Namespace="calico-system" Pod="whisker-7b8547f758-vr28q" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-whisker--7b8547f758--vr28q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--719f16aeb7-k8s-whisker--7b8547f758--vr28q-eth0", GenerateName:"whisker-7b8547f758-", Namespace:"calico-system", SelfLink:"", UID:"435c02b4-b196-420c-b959-5f71a5c70015", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b8547f758", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-719f16aeb7", ContainerID:"", Pod:"whisker-7b8547f758-vr28q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.38.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2f6efceab71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:27.316128 containerd[1921]: 2025-12-16 12:30:27.294 [INFO][4502] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.65/32] ContainerID="bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d" Namespace="calico-system" Pod="whisker-7b8547f758-vr28q" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-whisker--7b8547f758--vr28q-eth0" Dec 16 12:30:27.316179 containerd[1921]: 2025-12-16 12:30:27.294 [INFO][4502] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f6efceab71 ContainerID="bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d" Namespace="calico-system" Pod="whisker-7b8547f758-vr28q" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-whisker--7b8547f758--vr28q-eth0" Dec 16 12:30:27.316179 containerd[1921]: 2025-12-16 12:30:27.300 [INFO][4502] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d" Namespace="calico-system" Pod="whisker-7b8547f758-vr28q" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-whisker--7b8547f758--vr28q-eth0" Dec 16 12:30:27.316208 containerd[1921]: 2025-12-16 12:30:27.300 [INFO][4502] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d" Namespace="calico-system" Pod="whisker-7b8547f758-vr28q" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-whisker--7b8547f758--vr28q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--719f16aeb7-k8s-whisker--7b8547f758--vr28q-eth0", GenerateName:"whisker-7b8547f758-", Namespace:"calico-system", SelfLink:"", UID:"435c02b4-b196-420c-b959-5f71a5c70015", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b8547f758", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-719f16aeb7", ContainerID:"bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d", Pod:"whisker-7b8547f758-vr28q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.38.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2f6efceab71", MAC:"b2:40:4f:12:72:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:27.316241 containerd[1921]: 2025-12-16 12:30:27.313 [INFO][4502] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d" Namespace="calico-system" Pod="whisker-7b8547f758-vr28q" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-whisker--7b8547f758--vr28q-eth0" Dec 16 12:30:27.368125 containerd[1921]: time="2025-12-16T12:30:27.368049878Z" level=info msg="connecting to shim bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d" address="unix:///run/containerd/s/21e71b847ab14b74b66d96a3e99f3c8f441fa95a97ef2d11335970e58e5e773f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:27.387713 systemd[1]: Started cri-containerd-bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d.scope - libcontainer container bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d. Dec 16 12:30:27.419702 containerd[1921]: time="2025-12-16T12:30:27.419590420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b8547f758-vr28q,Uid:435c02b4-b196-420c-b959-5f71a5c70015,Namespace:calico-system,Attempt:0,} returns sandbox id \"bcbce98fc50dec4083a16e7ff4ebf04deaa1ffa2072f92094b7cb5e60d69c98d\"" Dec 16 12:30:27.422112 containerd[1921]: time="2025-12-16T12:30:27.422079401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:30:27.677001 kubelet[3449]: I1216 12:30:27.676782 3449 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36d9148b-0dfc-4863-9a1f-1111dc89554f" path="/var/lib/kubelet/pods/36d9148b-0dfc-4863-9a1f-1111dc89554f/volumes" Dec 16 12:30:27.696464 containerd[1921]: time="2025-12-16T12:30:27.696385039Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:27.700507 containerd[1921]: time="2025-12-16T12:30:27.700377806Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:30:27.700878 containerd[1921]: time="2025-12-16T12:30:27.700428087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:30:27.701137 kubelet[3449]: E1216 12:30:27.700792 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:30:27.701137 kubelet[3449]: E1216 12:30:27.700835 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:30:27.701137 kubelet[3449]: E1216 12:30:27.700906 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7b8547f758-vr28q_calico-system(435c02b4-b196-420c-b959-5f71a5c70015): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:27.702831 containerd[1921]: time="2025-12-16T12:30:27.702804089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:30:27.998006 containerd[1921]: time="2025-12-16T12:30:27.997741132Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:28.314906 containerd[1921]: time="2025-12-16T12:30:28.314855861Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:30:28.315385 containerd[1921]: time="2025-12-16T12:30:28.314864638Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:30:28.315658 kubelet[3449]: E1216 12:30:28.315610 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:30:28.315940 kubelet[3449]: E1216 12:30:28.315667 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:30:28.315940 kubelet[3449]: E1216 12:30:28.315831 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7b8547f758-vr28q_calico-system(435c02b4-b196-420c-b959-5f71a5c70015): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:28.317998 kubelet[3449]: E1216 12:30:28.317674 3449 pod_workers.go:1324] 
"Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b8547f758-vr28q" podUID="435c02b4-b196-420c-b959-5f71a5c70015" Dec 16 12:30:28.526628 systemd-networkd[1485]: vxlan.calico: Link UP Dec 16 12:30:28.526634 systemd-networkd[1485]: vxlan.calico: Gained carrier Dec 16 12:30:28.731731 systemd-networkd[1485]: cali2f6efceab71: Gained IPv6LL Dec 16 12:30:28.792874 kubelet[3449]: E1216 12:30:28.792811 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-7b8547f758-vr28q" podUID="435c02b4-b196-420c-b959-5f71a5c70015" Dec 16 12:30:30.523719 systemd-networkd[1485]: vxlan.calico: Gained IPv6LL Dec 16 12:30:30.998614 kubelet[3449]: I1216 12:30:30.998546 3449 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 16 12:30:32.680800 containerd[1921]: time="2025-12-16T12:30:32.680737000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-22b49,Uid:1e1bdb89-484a-4c6c-abf3-1a7dd9af5720,Namespace:kube-system,Attempt:0,}" Dec 16 12:30:32.685373 containerd[1921]: time="2025-12-16T12:30:32.685324509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-544fc6cbc8-s5srh,Uid:746d77af-37dd-4af0-98d6-e8786f6ddd62,Namespace:calico-system,Attempt:0,}" Dec 16 12:30:32.840754 systemd-networkd[1485]: cali916e3052f81: Link UP Dec 16 12:30:32.844049 systemd-networkd[1485]: cali916e3052f81: Gained carrier Dec 16 12:30:32.862208 containerd[1921]: 2025-12-16 12:30:32.748 [INFO][4819] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--22b49-eth0 coredns-66bc5c9577- kube-system 1e1bdb89-484a-4c6c-abf3-1a7dd9af5720 801 0 2025-12-16 12:29:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-a-719f16aeb7 coredns-66bc5c9577-22b49 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali916e3052f81 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d" Namespace="kube-system" Pod="coredns-66bc5c9577-22b49" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--22b49-" Dec 16 12:30:32.862208 containerd[1921]: 
2025-12-16 12:30:32.748 [INFO][4819] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d" Namespace="kube-system" Pod="coredns-66bc5c9577-22b49" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--22b49-eth0" Dec 16 12:30:32.862208 containerd[1921]: 2025-12-16 12:30:32.782 [INFO][4847] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d" HandleID="k8s-pod-network.5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d" Workload="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--22b49-eth0" Dec 16 12:30:32.862417 containerd[1921]: 2025-12-16 12:30:32.782 [INFO][4847] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d" HandleID="k8s-pod-network.5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d" Workload="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--22b49-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-a-719f16aeb7", "pod":"coredns-66bc5c9577-22b49", "timestamp":"2025-12-16 12:30:32.78206507 +0000 UTC"}, Hostname:"ci-4459.2.2-a-719f16aeb7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:30:32.862417 containerd[1921]: 2025-12-16 12:30:32.782 [INFO][4847] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:30:32.862417 containerd[1921]: 2025-12-16 12:30:32.782 [INFO][4847] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:30:32.862417 containerd[1921]: 2025-12-16 12:30:32.782 [INFO][4847] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-719f16aeb7' Dec 16 12:30:32.862417 containerd[1921]: 2025-12-16 12:30:32.791 [INFO][4847] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:32.862417 containerd[1921]: 2025-12-16 12:30:32.796 [INFO][4847] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:32.862417 containerd[1921]: 2025-12-16 12:30:32.801 [INFO][4847] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:32.862417 containerd[1921]: 2025-12-16 12:30:32.803 [INFO][4847] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:32.862417 containerd[1921]: 2025-12-16 12:30:32.805 [INFO][4847] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:32.862607 containerd[1921]: 2025-12-16 12:30:32.806 [INFO][4847] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:32.862607 containerd[1921]: 2025-12-16 12:30:32.807 [INFO][4847] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d Dec 16 12:30:32.862607 containerd[1921]: 2025-12-16 12:30:32.813 [INFO][4847] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:32.862607 containerd[1921]: 2025-12-16 12:30:32.824 [INFO][4847] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.38.66/26] block=192.168.38.64/26 handle="k8s-pod-network.5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:32.862607 containerd[1921]: 2025-12-16 12:30:32.824 [INFO][4847] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.66/26] handle="k8s-pod-network.5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:32.862607 containerd[1921]: 2025-12-16 12:30:32.824 [INFO][4847] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:30:32.862607 containerd[1921]: 2025-12-16 12:30:32.824 [INFO][4847] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.66/26] IPv6=[] ContainerID="5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d" HandleID="k8s-pod-network.5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d" Workload="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--22b49-eth0" Dec 16 12:30:32.862863 containerd[1921]: 2025-12-16 12:30:32.828 [INFO][4819] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d" Namespace="kube-system" Pod="coredns-66bc5c9577-22b49" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--22b49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--22b49-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1e1bdb89-484a-4c6c-abf3-1a7dd9af5720", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 29, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-719f16aeb7", ContainerID:"", Pod:"coredns-66bc5c9577-22b49", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali916e3052f81", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:32.862863 containerd[1921]: 2025-12-16 12:30:32.828 [INFO][4819] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.66/32] ContainerID="5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d" Namespace="kube-system" Pod="coredns-66bc5c9577-22b49" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--22b49-eth0" Dec 16 12:30:32.862863 containerd[1921]: 2025-12-16 12:30:32.828 [INFO][4819] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali916e3052f81 
ContainerID="5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d" Namespace="kube-system" Pod="coredns-66bc5c9577-22b49" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--22b49-eth0" Dec 16 12:30:32.862863 containerd[1921]: 2025-12-16 12:30:32.843 [INFO][4819] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d" Namespace="kube-system" Pod="coredns-66bc5c9577-22b49" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--22b49-eth0" Dec 16 12:30:32.862863 containerd[1921]: 2025-12-16 12:30:32.844 [INFO][4819] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d" Namespace="kube-system" Pod="coredns-66bc5c9577-22b49" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--22b49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--22b49-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"1e1bdb89-484a-4c6c-abf3-1a7dd9af5720", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 29, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-719f16aeb7", ContainerID:"5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d", 
Pod:"coredns-66bc5c9577-22b49", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali916e3052f81", MAC:"16:04:8b:47:52:94", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:32.863122 containerd[1921]: 2025-12-16 12:30:32.859 [INFO][4819] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d" Namespace="kube-system" Pod="coredns-66bc5c9577-22b49" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--22b49-eth0" Dec 16 12:30:32.915160 containerd[1921]: time="2025-12-16T12:30:32.915110303Z" level=info msg="connecting to shim 5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d" address="unix:///run/containerd/s/61fb4e2e0f5f3fbfdd1e25e7f60ad1979170008e52a069a0d92ed86359d65266" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:32.935122 systemd-networkd[1485]: caliefeb2d3a644: Link UP Dec 16 12:30:32.937114 systemd-networkd[1485]: caliefeb2d3a644: Gained carrier Dec 16 12:30:32.959322 systemd[1]: 
Started cri-containerd-5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d.scope - libcontainer container 5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d. Dec 16 12:30:32.962037 containerd[1921]: 2025-12-16 12:30:32.746 [INFO][4827] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--719f16aeb7-k8s-calico--kube--controllers--544fc6cbc8--s5srh-eth0 calico-kube-controllers-544fc6cbc8- calico-system 746d77af-37dd-4af0-98d6-e8786f6ddd62 805 0 2025-12-16 12:30:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:544fc6cbc8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.2-a-719f16aeb7 calico-kube-controllers-544fc6cbc8-s5srh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliefeb2d3a644 [] [] }} ContainerID="e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b" Namespace="calico-system" Pod="calico-kube-controllers-544fc6cbc8-s5srh" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--kube--controllers--544fc6cbc8--s5srh-" Dec 16 12:30:32.962037 containerd[1921]: 2025-12-16 12:30:32.746 [INFO][4827] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b" Namespace="calico-system" Pod="calico-kube-controllers-544fc6cbc8-s5srh" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--kube--controllers--544fc6cbc8--s5srh-eth0" Dec 16 12:30:32.962037 containerd[1921]: 2025-12-16 12:30:32.787 [INFO][4842] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b" HandleID="k8s-pod-network.e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b" 
Workload="ci--4459.2.2--a--719f16aeb7-k8s-calico--kube--controllers--544fc6cbc8--s5srh-eth0" Dec 16 12:30:32.962037 containerd[1921]: 2025-12-16 12:30:32.788 [INFO][4842] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b" HandleID="k8s-pod-network.e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b" Workload="ci--4459.2.2--a--719f16aeb7-k8s-calico--kube--controllers--544fc6cbc8--s5srh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-719f16aeb7", "pod":"calico-kube-controllers-544fc6cbc8-s5srh", "timestamp":"2025-12-16 12:30:32.787503259 +0000 UTC"}, Hostname:"ci-4459.2.2-a-719f16aeb7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:30:32.962037 containerd[1921]: 2025-12-16 12:30:32.788 [INFO][4842] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:30:32.962037 containerd[1921]: 2025-12-16 12:30:32.824 [INFO][4842] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:30:32.962037 containerd[1921]: 2025-12-16 12:30:32.824 [INFO][4842] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-719f16aeb7' Dec 16 12:30:32.962037 containerd[1921]: 2025-12-16 12:30:32.891 [INFO][4842] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:32.962037 containerd[1921]: 2025-12-16 12:30:32.897 [INFO][4842] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:32.962037 containerd[1921]: 2025-12-16 12:30:32.901 [INFO][4842] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:32.962037 containerd[1921]: 2025-12-16 12:30:32.905 [INFO][4842] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:32.962037 containerd[1921]: 2025-12-16 12:30:32.908 [INFO][4842] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:32.962037 containerd[1921]: 2025-12-16 12:30:32.908 [INFO][4842] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:32.962037 containerd[1921]: 2025-12-16 12:30:32.910 [INFO][4842] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b Dec 16 12:30:32.962037 containerd[1921]: 2025-12-16 12:30:32.919 [INFO][4842] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:32.962037 containerd[1921]: 2025-12-16 12:30:32.927 [INFO][4842] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.38.67/26] block=192.168.38.64/26 handle="k8s-pod-network.e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:32.962037 containerd[1921]: 2025-12-16 12:30:32.927 [INFO][4842] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.67/26] handle="k8s-pod-network.e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:32.962037 containerd[1921]: 2025-12-16 12:30:32.927 [INFO][4842] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:30:32.962037 containerd[1921]: 2025-12-16 12:30:32.927 [INFO][4842] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.67/26] IPv6=[] ContainerID="e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b" HandleID="k8s-pod-network.e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b" Workload="ci--4459.2.2--a--719f16aeb7-k8s-calico--kube--controllers--544fc6cbc8--s5srh-eth0" Dec 16 12:30:32.962451 containerd[1921]: 2025-12-16 12:30:32.932 [INFO][4827] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b" Namespace="calico-system" Pod="calico-kube-controllers-544fc6cbc8-s5srh" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--kube--controllers--544fc6cbc8--s5srh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--719f16aeb7-k8s-calico--kube--controllers--544fc6cbc8--s5srh-eth0", GenerateName:"calico-kube-controllers-544fc6cbc8-", Namespace:"calico-system", SelfLink:"", UID:"746d77af-37dd-4af0-98d6-e8786f6ddd62", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"544fc6cbc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-719f16aeb7", ContainerID:"", Pod:"calico-kube-controllers-544fc6cbc8-s5srh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefeb2d3a644", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:32.962451 containerd[1921]: 2025-12-16 12:30:32.932 [INFO][4827] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.67/32] ContainerID="e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b" Namespace="calico-system" Pod="calico-kube-controllers-544fc6cbc8-s5srh" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--kube--controllers--544fc6cbc8--s5srh-eth0" Dec 16 12:30:32.962451 containerd[1921]: 2025-12-16 12:30:32.932 [INFO][4827] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefeb2d3a644 ContainerID="e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b" Namespace="calico-system" Pod="calico-kube-controllers-544fc6cbc8-s5srh" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--kube--controllers--544fc6cbc8--s5srh-eth0" Dec 16 12:30:32.962451 containerd[1921]: 2025-12-16 12:30:32.937 [INFO][4827] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b" Namespace="calico-system" Pod="calico-kube-controllers-544fc6cbc8-s5srh" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--kube--controllers--544fc6cbc8--s5srh-eth0" Dec 16 12:30:32.962451 containerd[1921]: 2025-12-16 12:30:32.938 [INFO][4827] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b" Namespace="calico-system" Pod="calico-kube-controllers-544fc6cbc8-s5srh" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--kube--controllers--544fc6cbc8--s5srh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--719f16aeb7-k8s-calico--kube--controllers--544fc6cbc8--s5srh-eth0", GenerateName:"calico-kube-controllers-544fc6cbc8-", Namespace:"calico-system", SelfLink:"", UID:"746d77af-37dd-4af0-98d6-e8786f6ddd62", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"544fc6cbc8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-719f16aeb7", ContainerID:"e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b", Pod:"calico-kube-controllers-544fc6cbc8-s5srh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.67/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefeb2d3a644", MAC:"86:74:65:51:c1:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:32.962451 containerd[1921]: 2025-12-16 12:30:32.958 [INFO][4827] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b" Namespace="calico-system" Pod="calico-kube-controllers-544fc6cbc8-s5srh" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--kube--controllers--544fc6cbc8--s5srh-eth0" Dec 16 12:30:33.011869 containerd[1921]: time="2025-12-16T12:30:33.011829199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-22b49,Uid:1e1bdb89-484a-4c6c-abf3-1a7dd9af5720,Namespace:kube-system,Attempt:0,} returns sandbox id \"5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d\"" Dec 16 12:30:33.017035 containerd[1921]: time="2025-12-16T12:30:33.016980884Z" level=info msg="connecting to shim e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b" address="unix:///run/containerd/s/4686a196dd014f7aac4baf9efcffc35a803c596f2af9a81d5c10edada83c21f0" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:33.023603 containerd[1921]: time="2025-12-16T12:30:33.023277736Z" level=info msg="CreateContainer within sandbox \"5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:30:33.043748 systemd[1]: Started cri-containerd-e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b.scope - libcontainer container e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b. 
Dec 16 12:30:33.054554 containerd[1921]: time="2025-12-16T12:30:33.054399762Z" level=info msg="Container 36e3d198c8c124e765c6dbdaa510760e95904c04648bf7c21153ec11ac2105ac: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:33.073894 containerd[1921]: time="2025-12-16T12:30:33.073852053Z" level=info msg="CreateContainer within sandbox \"5154952d0e63baae0b6ba604b6f058c3542c7d61bf9948ba04e142397cd62b4d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"36e3d198c8c124e765c6dbdaa510760e95904c04648bf7c21153ec11ac2105ac\"" Dec 16 12:30:33.075219 containerd[1921]: time="2025-12-16T12:30:33.075055069Z" level=info msg="StartContainer for \"36e3d198c8c124e765c6dbdaa510760e95904c04648bf7c21153ec11ac2105ac\"" Dec 16 12:30:33.078539 containerd[1921]: time="2025-12-16T12:30:33.077553890Z" level=info msg="connecting to shim 36e3d198c8c124e765c6dbdaa510760e95904c04648bf7c21153ec11ac2105ac" address="unix:///run/containerd/s/61fb4e2e0f5f3fbfdd1e25e7f60ad1979170008e52a069a0d92ed86359d65266" protocol=ttrpc version=3 Dec 16 12:30:33.083315 containerd[1921]: time="2025-12-16T12:30:33.083284190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-544fc6cbc8-s5srh,Uid:746d77af-37dd-4af0-98d6-e8786f6ddd62,Namespace:calico-system,Attempt:0,} returns sandbox id \"e9925b0f1787219f2068928923a3c47e754d31ccbc12cf7ae5e8c70a0be2e56b\"" Dec 16 12:30:33.089206 containerd[1921]: time="2025-12-16T12:30:33.089155526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:30:33.107721 systemd[1]: Started cri-containerd-36e3d198c8c124e765c6dbdaa510760e95904c04648bf7c21153ec11ac2105ac.scope - libcontainer container 36e3d198c8c124e765c6dbdaa510760e95904c04648bf7c21153ec11ac2105ac. 
Dec 16 12:30:33.182982 containerd[1921]: time="2025-12-16T12:30:33.182932375Z" level=info msg="StartContainer for \"36e3d198c8c124e765c6dbdaa510760e95904c04648bf7c21153ec11ac2105ac\" returns successfully" Dec 16 12:30:33.346816 containerd[1921]: time="2025-12-16T12:30:33.346765768Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:33.351402 containerd[1921]: time="2025-12-16T12:30:33.351300156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:30:33.351402 containerd[1921]: time="2025-12-16T12:30:33.351354349Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:30:33.353572 kubelet[3449]: E1216 12:30:33.352736 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:30:33.353572 kubelet[3449]: E1216 12:30:33.352792 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:30:33.353572 kubelet[3449]: E1216 12:30:33.352867 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container 
calico-kube-controllers start failed in pod calico-kube-controllers-544fc6cbc8-s5srh_calico-system(746d77af-37dd-4af0-98d6-e8786f6ddd62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:33.353572 kubelet[3449]: E1216 12:30:33.352903 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-544fc6cbc8-s5srh" podUID="746d77af-37dd-4af0-98d6-e8786f6ddd62" Dec 16 12:30:33.681115 containerd[1921]: time="2025-12-16T12:30:33.680994837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6879cffd7b-mkgxz,Uid:12a67fa6-b880-41bc-a39b-0d6c266384bf,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:30:33.808314 kubelet[3449]: E1216 12:30:33.808182 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-544fc6cbc8-s5srh" podUID="746d77af-37dd-4af0-98d6-e8786f6ddd62" Dec 16 12:30:33.816028 systemd-networkd[1485]: 
calif7f9ed53282: Link UP Dec 16 12:30:33.816253 systemd-networkd[1485]: calif7f9ed53282: Gained carrier Dec 16 12:30:33.838060 containerd[1921]: 2025-12-16 12:30:33.723 [INFO][5001] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--mkgxz-eth0 calico-apiserver-6879cffd7b- calico-apiserver 12a67fa6-b880-41bc-a39b-0d6c266384bf 803 0 2025-12-16 12:30:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6879cffd7b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-a-719f16aeb7 calico-apiserver-6879cffd7b-mkgxz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif7f9ed53282 [] [] }} ContainerID="a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78" Namespace="calico-apiserver" Pod="calico-apiserver-6879cffd7b-mkgxz" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--mkgxz-" Dec 16 12:30:33.838060 containerd[1921]: 2025-12-16 12:30:33.723 [INFO][5001] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78" Namespace="calico-apiserver" Pod="calico-apiserver-6879cffd7b-mkgxz" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--mkgxz-eth0" Dec 16 12:30:33.838060 containerd[1921]: 2025-12-16 12:30:33.758 [INFO][5014] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78" HandleID="k8s-pod-network.a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78" Workload="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--mkgxz-eth0" Dec 16 12:30:33.838060 containerd[1921]: 2025-12-16 12:30:33.758 
[INFO][5014] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78" HandleID="k8s-pod-network.a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78" Workload="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--mkgxz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031e550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-a-719f16aeb7", "pod":"calico-apiserver-6879cffd7b-mkgxz", "timestamp":"2025-12-16 12:30:33.758529218 +0000 UTC"}, Hostname:"ci-4459.2.2-a-719f16aeb7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:30:33.838060 containerd[1921]: 2025-12-16 12:30:33.758 [INFO][5014] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:30:33.838060 containerd[1921]: 2025-12-16 12:30:33.758 [INFO][5014] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:30:33.838060 containerd[1921]: 2025-12-16 12:30:33.758 [INFO][5014] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-719f16aeb7' Dec 16 12:30:33.838060 containerd[1921]: 2025-12-16 12:30:33.767 [INFO][5014] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:33.838060 containerd[1921]: 2025-12-16 12:30:33.771 [INFO][5014] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:33.838060 containerd[1921]: 2025-12-16 12:30:33.775 [INFO][5014] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:33.838060 containerd[1921]: 2025-12-16 12:30:33.778 [INFO][5014] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:33.838060 containerd[1921]: 2025-12-16 12:30:33.783 [INFO][5014] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:33.838060 containerd[1921]: 2025-12-16 12:30:33.783 [INFO][5014] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:33.838060 containerd[1921]: 2025-12-16 12:30:33.784 [INFO][5014] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78 Dec 16 12:30:33.838060 containerd[1921]: 2025-12-16 12:30:33.789 [INFO][5014] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:33.838060 containerd[1921]: 2025-12-16 12:30:33.797 [INFO][5014] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.38.68/26] block=192.168.38.64/26 handle="k8s-pod-network.a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:33.838060 containerd[1921]: 2025-12-16 12:30:33.797 [INFO][5014] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.68/26] handle="k8s-pod-network.a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:33.838060 containerd[1921]: 2025-12-16 12:30:33.797 [INFO][5014] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:30:33.838060 containerd[1921]: 2025-12-16 12:30:33.798 [INFO][5014] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.68/26] IPv6=[] ContainerID="a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78" HandleID="k8s-pod-network.a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78" Workload="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--mkgxz-eth0" Dec 16 12:30:33.838653 containerd[1921]: 2025-12-16 12:30:33.801 [INFO][5001] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78" Namespace="calico-apiserver" Pod="calico-apiserver-6879cffd7b-mkgxz" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--mkgxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--mkgxz-eth0", GenerateName:"calico-apiserver-6879cffd7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"12a67fa6-b880-41bc-a39b-0d6c266384bf", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6879cffd7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-719f16aeb7", ContainerID:"", Pod:"calico-apiserver-6879cffd7b-mkgxz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7f9ed53282", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:33.838653 containerd[1921]: 2025-12-16 12:30:33.801 [INFO][5001] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.68/32] ContainerID="a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78" Namespace="calico-apiserver" Pod="calico-apiserver-6879cffd7b-mkgxz" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--mkgxz-eth0" Dec 16 12:30:33.838653 containerd[1921]: 2025-12-16 12:30:33.801 [INFO][5001] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7f9ed53282 ContainerID="a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78" Namespace="calico-apiserver" Pod="calico-apiserver-6879cffd7b-mkgxz" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--mkgxz-eth0" Dec 16 12:30:33.838653 containerd[1921]: 2025-12-16 12:30:33.815 [INFO][5001] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78" Namespace="calico-apiserver" Pod="calico-apiserver-6879cffd7b-mkgxz" 
WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--mkgxz-eth0" Dec 16 12:30:33.838653 containerd[1921]: 2025-12-16 12:30:33.816 [INFO][5001] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78" Namespace="calico-apiserver" Pod="calico-apiserver-6879cffd7b-mkgxz" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--mkgxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--mkgxz-eth0", GenerateName:"calico-apiserver-6879cffd7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"12a67fa6-b880-41bc-a39b-0d6c266384bf", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6879cffd7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-719f16aeb7", ContainerID:"a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78", Pod:"calico-apiserver-6879cffd7b-mkgxz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif7f9ed53282", MAC:"9a:13:5e:04:93:29", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:33.838653 containerd[1921]: 2025-12-16 12:30:33.834 [INFO][5001] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78" Namespace="calico-apiserver" Pod="calico-apiserver-6879cffd7b-mkgxz" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--mkgxz-eth0" Dec 16 12:30:33.849752 kubelet[3449]: I1216 12:30:33.849681 3449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-22b49" podStartSLOduration=38.849662682 podStartE2EDuration="38.849662682s" podCreationTimestamp="2025-12-16 12:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:30:33.848753937 +0000 UTC m=+44.265135369" watchObservedRunningTime="2025-12-16 12:30:33.849662682 +0000 UTC m=+44.266044114" Dec 16 12:30:33.897079 containerd[1921]: time="2025-12-16T12:30:33.897027551Z" level=info msg="connecting to shim a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78" address="unix:///run/containerd/s/b6fd76165c2a0b938b19520bb3b98b3bff28cd9474c9a7bf8fd7b9931558c241" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:33.926851 systemd[1]: Started cri-containerd-a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78.scope - libcontainer container a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78. 
Dec 16 12:30:33.980109 systemd-networkd[1485]: cali916e3052f81: Gained IPv6LL Dec 16 12:30:33.991223 containerd[1921]: time="2025-12-16T12:30:33.991107224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6879cffd7b-mkgxz,Uid:12a67fa6-b880-41bc-a39b-0d6c266384bf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a220d0d40545bfd012c632cd0c9e70ce1164e48d00ecd1b76e53b23b89b36e78\"" Dec 16 12:30:33.995002 containerd[1921]: time="2025-12-16T12:30:33.994748203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:30:34.257263 containerd[1921]: time="2025-12-16T12:30:34.257125199Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:34.260981 containerd[1921]: time="2025-12-16T12:30:34.260932238Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:30:34.261088 containerd[1921]: time="2025-12-16T12:30:34.261044682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:30:34.261274 kubelet[3449]: E1216 12:30:34.261239 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:34.261395 kubelet[3449]: E1216 12:30:34.261378 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:34.261747 kubelet[3449]: E1216 12:30:34.261724 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6879cffd7b-mkgxz_calico-apiserver(12a67fa6-b880-41bc-a39b-0d6c266384bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:34.261890 kubelet[3449]: E1216 12:30:34.261826 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-mkgxz" podUID="12a67fa6-b880-41bc-a39b-0d6c266384bf" Dec 16 12:30:34.427723 systemd-networkd[1485]: caliefeb2d3a644: Gained IPv6LL Dec 16 12:30:34.681893 containerd[1921]: time="2025-12-16T12:30:34.681836890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-d9x7j,Uid:4dc67b6f-ad58-4f41-a48a-50245539bb0d,Namespace:calico-system,Attempt:0,}" Dec 16 12:30:34.795176 systemd-networkd[1485]: cali8459e532f2c: Link UP Dec 16 12:30:34.797019 systemd-networkd[1485]: cali8459e532f2c: Gained carrier Dec 16 12:30:34.814070 containerd[1921]: 2025-12-16 12:30:34.726 [INFO][5080] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--719f16aeb7-k8s-goldmane--7c778bb748--d9x7j-eth0 goldmane-7c778bb748- calico-system 
4dc67b6f-ad58-4f41-a48a-50245539bb0d 802 0 2025-12-16 12:30:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.2-a-719f16aeb7 goldmane-7c778bb748-d9x7j eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8459e532f2c [] [] }} ContainerID="7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b" Namespace="calico-system" Pod="goldmane-7c778bb748-d9x7j" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-goldmane--7c778bb748--d9x7j-" Dec 16 12:30:34.814070 containerd[1921]: 2025-12-16 12:30:34.726 [INFO][5080] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b" Namespace="calico-system" Pod="goldmane-7c778bb748-d9x7j" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-goldmane--7c778bb748--d9x7j-eth0" Dec 16 12:30:34.814070 containerd[1921]: 2025-12-16 12:30:34.754 [INFO][5093] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b" HandleID="k8s-pod-network.7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b" Workload="ci--4459.2.2--a--719f16aeb7-k8s-goldmane--7c778bb748--d9x7j-eth0" Dec 16 12:30:34.814070 containerd[1921]: 2025-12-16 12:30:34.754 [INFO][5093] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b" HandleID="k8s-pod-network.7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b" Workload="ci--4459.2.2--a--719f16aeb7-k8s-goldmane--7c778bb748--d9x7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-719f16aeb7", "pod":"goldmane-7c778bb748-d9x7j", 
"timestamp":"2025-12-16 12:30:34.754684071 +0000 UTC"}, Hostname:"ci-4459.2.2-a-719f16aeb7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:30:34.814070 containerd[1921]: 2025-12-16 12:30:34.755 [INFO][5093] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:30:34.814070 containerd[1921]: 2025-12-16 12:30:34.755 [INFO][5093] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:30:34.814070 containerd[1921]: 2025-12-16 12:30:34.755 [INFO][5093] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-719f16aeb7' Dec 16 12:30:34.814070 containerd[1921]: 2025-12-16 12:30:34.761 [INFO][5093] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:34.814070 containerd[1921]: 2025-12-16 12:30:34.765 [INFO][5093] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:34.814070 containerd[1921]: 2025-12-16 12:30:34.768 [INFO][5093] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:34.814070 containerd[1921]: 2025-12-16 12:30:34.770 [INFO][5093] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:34.814070 containerd[1921]: 2025-12-16 12:30:34.772 [INFO][5093] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:34.814070 containerd[1921]: 2025-12-16 12:30:34.772 [INFO][5093] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b" host="ci-4459.2.2-a-719f16aeb7" Dec 16 
12:30:34.814070 containerd[1921]: 2025-12-16 12:30:34.774 [INFO][5093] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b Dec 16 12:30:34.814070 containerd[1921]: 2025-12-16 12:30:34.779 [INFO][5093] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:34.814070 containerd[1921]: 2025-12-16 12:30:34.788 [INFO][5093] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.69/26] block=192.168.38.64/26 handle="k8s-pod-network.7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:34.814070 containerd[1921]: 2025-12-16 12:30:34.788 [INFO][5093] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.69/26] handle="k8s-pod-network.7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:34.814070 containerd[1921]: 2025-12-16 12:30:34.788 [INFO][5093] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:30:34.814070 containerd[1921]: 2025-12-16 12:30:34.788 [INFO][5093] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.69/26] IPv6=[] ContainerID="7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b" HandleID="k8s-pod-network.7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b" Workload="ci--4459.2.2--a--719f16aeb7-k8s-goldmane--7c778bb748--d9x7j-eth0" Dec 16 12:30:34.816095 containerd[1921]: 2025-12-16 12:30:34.790 [INFO][5080] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b" Namespace="calico-system" Pod="goldmane-7c778bb748-d9x7j" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-goldmane--7c778bb748--d9x7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--719f16aeb7-k8s-goldmane--7c778bb748--d9x7j-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"4dc67b6f-ad58-4f41-a48a-50245539bb0d", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-719f16aeb7", ContainerID:"", Pod:"goldmane-7c778bb748-d9x7j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali8459e532f2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:34.816095 containerd[1921]: 2025-12-16 12:30:34.790 [INFO][5080] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.69/32] ContainerID="7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b" Namespace="calico-system" Pod="goldmane-7c778bb748-d9x7j" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-goldmane--7c778bb748--d9x7j-eth0" Dec 16 12:30:34.816095 containerd[1921]: 2025-12-16 12:30:34.790 [INFO][5080] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8459e532f2c ContainerID="7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b" Namespace="calico-system" Pod="goldmane-7c778bb748-d9x7j" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-goldmane--7c778bb748--d9x7j-eth0" Dec 16 12:30:34.816095 containerd[1921]: 2025-12-16 12:30:34.795 [INFO][5080] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b" Namespace="calico-system" Pod="goldmane-7c778bb748-d9x7j" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-goldmane--7c778bb748--d9x7j-eth0" Dec 16 12:30:34.816095 containerd[1921]: 2025-12-16 12:30:34.796 [INFO][5080] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b" Namespace="calico-system" Pod="goldmane-7c778bb748-d9x7j" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-goldmane--7c778bb748--d9x7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--719f16aeb7-k8s-goldmane--7c778bb748--d9x7j-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", 
UID:"4dc67b6f-ad58-4f41-a48a-50245539bb0d", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-719f16aeb7", ContainerID:"7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b", Pod:"goldmane-7c778bb748-d9x7j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8459e532f2c", MAC:"e2:da:44:62:6e:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:34.816095 containerd[1921]: 2025-12-16 12:30:34.810 [INFO][5080] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b" Namespace="calico-system" Pod="goldmane-7c778bb748-d9x7j" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-goldmane--7c778bb748--d9x7j-eth0" Dec 16 12:30:34.820618 kubelet[3449]: E1216 12:30:34.820372 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-544fc6cbc8-s5srh" podUID="746d77af-37dd-4af0-98d6-e8786f6ddd62" Dec 16 12:30:34.821906 kubelet[3449]: E1216 12:30:34.821379 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-mkgxz" podUID="12a67fa6-b880-41bc-a39b-0d6c266384bf" Dec 16 12:30:34.873944 containerd[1921]: time="2025-12-16T12:30:34.873153010Z" level=info msg="connecting to shim 7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b" address="unix:///run/containerd/s/afd227ef29139d9ad27847e847a103210a85ad1137aee6856f7274d401bde1d2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:34.899807 systemd[1]: Started cri-containerd-7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b.scope - libcontainer container 7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b. 
Dec 16 12:30:34.954201 containerd[1921]: time="2025-12-16T12:30:34.954054314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-d9x7j,Uid:4dc67b6f-ad58-4f41-a48a-50245539bb0d,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e34ccc75bba4233bcab4792300660d1b1df765fe517fa3922193bac5e3c477b\"" Dec 16 12:30:34.957011 containerd[1921]: time="2025-12-16T12:30:34.956982530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:30:35.278937 containerd[1921]: time="2025-12-16T12:30:35.278865022Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:35.282695 containerd[1921]: time="2025-12-16T12:30:35.282633733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:30:35.282815 containerd[1921]: time="2025-12-16T12:30:35.282797650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:30:35.283025 kubelet[3449]: E1216 12:30:35.282989 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:30:35.283096 kubelet[3449]: E1216 12:30:35.283040 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:30:35.283143 kubelet[3449]: E1216 12:30:35.283125 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-d9x7j_calico-system(4dc67b6f-ad58-4f41-a48a-50245539bb0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:35.283261 kubelet[3449]: E1216 12:30:35.283237 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-d9x7j" podUID="4dc67b6f-ad58-4f41-a48a-50245539bb0d" Dec 16 12:30:35.323747 systemd-networkd[1485]: calif7f9ed53282: Gained IPv6LL Dec 16 12:30:35.681604 containerd[1921]: time="2025-12-16T12:30:35.681452254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6879cffd7b-88t7g,Uid:8aede75b-84c4-4230-866b-6bdfa406b3b1,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:30:35.686749 containerd[1921]: time="2025-12-16T12:30:35.686709165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fbpsm,Uid:8dc2a4ce-1cc0-4206-a57c-f0513b577cd6,Namespace:calico-system,Attempt:0,}" Dec 16 12:30:35.805773 systemd-networkd[1485]: cali9dfd06db068: Link UP Dec 16 12:30:35.806001 systemd-networkd[1485]: cali9dfd06db068: Gained carrier Dec 16 12:30:35.825795 kubelet[3449]: E1216 12:30:35.825760 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-mkgxz" podUID="12a67fa6-b880-41bc-a39b-0d6c266384bf" Dec 16 12:30:35.827482 containerd[1921]: 2025-12-16 12:30:35.731 [INFO][5159] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--88t7g-eth0 calico-apiserver-6879cffd7b- calico-apiserver 8aede75b-84c4-4230-866b-6bdfa406b3b1 806 0 2025-12-16 12:30:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6879cffd7b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-a-719f16aeb7 calico-apiserver-6879cffd7b-88t7g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9dfd06db068 [] [] }} ContainerID="6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b" Namespace="calico-apiserver" Pod="calico-apiserver-6879cffd7b-88t7g" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--88t7g-" Dec 16 12:30:35.827482 containerd[1921]: 2025-12-16 12:30:35.732 [INFO][5159] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b" Namespace="calico-apiserver" Pod="calico-apiserver-6879cffd7b-88t7g" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--88t7g-eth0" Dec 16 12:30:35.827482 containerd[1921]: 2025-12-16 12:30:35.761 [INFO][5185] ipam/ipam_plugin.go 227: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b" HandleID="k8s-pod-network.6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b" Workload="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--88t7g-eth0" Dec 16 12:30:35.827482 containerd[1921]: 2025-12-16 12:30:35.762 [INFO][5185] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b" HandleID="k8s-pod-network.6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b" Workload="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--88t7g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab3a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-a-719f16aeb7", "pod":"calico-apiserver-6879cffd7b-88t7g", "timestamp":"2025-12-16 12:30:35.761741102 +0000 UTC"}, Hostname:"ci-4459.2.2-a-719f16aeb7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:30:35.827482 containerd[1921]: 2025-12-16 12:30:35.762 [INFO][5185] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:30:35.827482 containerd[1921]: 2025-12-16 12:30:35.762 [INFO][5185] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:30:35.827482 containerd[1921]: 2025-12-16 12:30:35.762 [INFO][5185] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-719f16aeb7' Dec 16 12:30:35.827482 containerd[1921]: 2025-12-16 12:30:35.767 [INFO][5185] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:35.827482 containerd[1921]: 2025-12-16 12:30:35.771 [INFO][5185] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:35.827482 containerd[1921]: 2025-12-16 12:30:35.774 [INFO][5185] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:35.827482 containerd[1921]: 2025-12-16 12:30:35.775 [INFO][5185] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:35.827482 containerd[1921]: 2025-12-16 12:30:35.778 [INFO][5185] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:35.827482 containerd[1921]: 2025-12-16 12:30:35.778 [INFO][5185] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:35.827482 containerd[1921]: 2025-12-16 12:30:35.780 [INFO][5185] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b Dec 16 12:30:35.827482 containerd[1921]: 2025-12-16 12:30:35.784 [INFO][5185] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:35.827482 containerd[1921]: 2025-12-16 12:30:35.795 [INFO][5185] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.38.70/26] block=192.168.38.64/26 handle="k8s-pod-network.6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:35.827482 containerd[1921]: 2025-12-16 12:30:35.795 [INFO][5185] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.70/26] handle="k8s-pod-network.6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:35.827482 containerd[1921]: 2025-12-16 12:30:35.796 [INFO][5185] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:30:35.827482 containerd[1921]: 2025-12-16 12:30:35.796 [INFO][5185] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.70/26] IPv6=[] ContainerID="6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b" HandleID="k8s-pod-network.6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b" Workload="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--88t7g-eth0" Dec 16 12:30:35.828575 containerd[1921]: 2025-12-16 12:30:35.799 [INFO][5159] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b" Namespace="calico-apiserver" Pod="calico-apiserver-6879cffd7b-88t7g" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--88t7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--88t7g-eth0", GenerateName:"calico-apiserver-6879cffd7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8aede75b-84c4-4230-866b-6bdfa406b3b1", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6879cffd7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-719f16aeb7", ContainerID:"", Pod:"calico-apiserver-6879cffd7b-88t7g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9dfd06db068", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:35.828575 containerd[1921]: 2025-12-16 12:30:35.801 [INFO][5159] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.70/32] ContainerID="6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b" Namespace="calico-apiserver" Pod="calico-apiserver-6879cffd7b-88t7g" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--88t7g-eth0" Dec 16 12:30:35.828575 containerd[1921]: 2025-12-16 12:30:35.801 [INFO][5159] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9dfd06db068 ContainerID="6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b" Namespace="calico-apiserver" Pod="calico-apiserver-6879cffd7b-88t7g" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--88t7g-eth0" Dec 16 12:30:35.828575 containerd[1921]: 2025-12-16 12:30:35.807 [INFO][5159] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b" Namespace="calico-apiserver" Pod="calico-apiserver-6879cffd7b-88t7g" 
WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--88t7g-eth0" Dec 16 12:30:35.828575 containerd[1921]: 2025-12-16 12:30:35.809 [INFO][5159] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b" Namespace="calico-apiserver" Pod="calico-apiserver-6879cffd7b-88t7g" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--88t7g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--88t7g-eth0", GenerateName:"calico-apiserver-6879cffd7b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8aede75b-84c4-4230-866b-6bdfa406b3b1", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6879cffd7b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-719f16aeb7", ContainerID:"6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b", Pod:"calico-apiserver-6879cffd7b-88t7g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9dfd06db068", MAC:"b2:f7:73:ae:fa:2b", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:35.828575 containerd[1921]: 2025-12-16 12:30:35.824 [INFO][5159] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b" Namespace="calico-apiserver" Pod="calico-apiserver-6879cffd7b-88t7g" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-calico--apiserver--6879cffd7b--88t7g-eth0" Dec 16 12:30:35.831583 kubelet[3449]: E1216 12:30:35.829027 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-d9x7j" podUID="4dc67b6f-ad58-4f41-a48a-50245539bb0d" Dec 16 12:30:35.895581 containerd[1921]: time="2025-12-16T12:30:35.895514602Z" level=info msg="connecting to shim 6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b" address="unix:///run/containerd/s/e102301734506b183186bf05570445029ad3b0ada032f2d43d25708b494da21e" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:35.926762 systemd[1]: Started cri-containerd-6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b.scope - libcontainer container 6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b. 
Dec 16 12:30:35.937871 systemd-networkd[1485]: calicb5c8a2607b: Link UP Dec 16 12:30:35.940654 systemd-networkd[1485]: calicb5c8a2607b: Gained carrier Dec 16 12:30:35.960107 containerd[1921]: 2025-12-16 12:30:35.740 [INFO][5169] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--719f16aeb7-k8s-csi--node--driver--fbpsm-eth0 csi-node-driver- calico-system 8dc2a4ce-1cc0-4206-a57c-f0513b577cd6 709 0 2025-12-16 12:30:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.2-a-719f16aeb7 csi-node-driver-fbpsm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicb5c8a2607b [] [] }} ContainerID="2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11" Namespace="calico-system" Pod="csi-node-driver-fbpsm" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-csi--node--driver--fbpsm-" Dec 16 12:30:35.960107 containerd[1921]: 2025-12-16 12:30:35.740 [INFO][5169] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11" Namespace="calico-system" Pod="csi-node-driver-fbpsm" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-csi--node--driver--fbpsm-eth0" Dec 16 12:30:35.960107 containerd[1921]: 2025-12-16 12:30:35.763 [INFO][5190] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11" HandleID="k8s-pod-network.2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11" Workload="ci--4459.2.2--a--719f16aeb7-k8s-csi--node--driver--fbpsm-eth0" Dec 16 12:30:35.960107 containerd[1921]: 2025-12-16 12:30:35.763 [INFO][5190] ipam/ipam_plugin.go 
275: Auto assigning IP ContainerID="2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11" HandleID="k8s-pod-network.2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11" Workload="ci--4459.2.2--a--719f16aeb7-k8s-csi--node--driver--fbpsm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-719f16aeb7", "pod":"csi-node-driver-fbpsm", "timestamp":"2025-12-16 12:30:35.763733364 +0000 UTC"}, Hostname:"ci-4459.2.2-a-719f16aeb7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:30:35.960107 containerd[1921]: 2025-12-16 12:30:35.763 [INFO][5190] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:30:35.960107 containerd[1921]: 2025-12-16 12:30:35.796 [INFO][5190] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:30:35.960107 containerd[1921]: 2025-12-16 12:30:35.796 [INFO][5190] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-719f16aeb7' Dec 16 12:30:35.960107 containerd[1921]: 2025-12-16 12:30:35.868 [INFO][5190] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:35.960107 containerd[1921]: 2025-12-16 12:30:35.881 [INFO][5190] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:35.960107 containerd[1921]: 2025-12-16 12:30:35.887 [INFO][5190] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:35.960107 containerd[1921]: 2025-12-16 12:30:35.890 [INFO][5190] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:35.960107 containerd[1921]: 2025-12-16 12:30:35.894 [INFO][5190] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:35.960107 containerd[1921]: 2025-12-16 12:30:35.894 [INFO][5190] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:35.960107 containerd[1921]: 2025-12-16 12:30:35.896 [INFO][5190] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11 Dec 16 12:30:35.960107 containerd[1921]: 2025-12-16 12:30:35.906 [INFO][5190] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:35.960107 containerd[1921]: 2025-12-16 12:30:35.919 [INFO][5190] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.38.71/26] block=192.168.38.64/26 handle="k8s-pod-network.2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:35.960107 containerd[1921]: 2025-12-16 12:30:35.920 [INFO][5190] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.71/26] handle="k8s-pod-network.2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:35.960107 containerd[1921]: 2025-12-16 12:30:35.920 [INFO][5190] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:30:35.960107 containerd[1921]: 2025-12-16 12:30:35.920 [INFO][5190] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.71/26] IPv6=[] ContainerID="2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11" HandleID="k8s-pod-network.2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11" Workload="ci--4459.2.2--a--719f16aeb7-k8s-csi--node--driver--fbpsm-eth0" Dec 16 12:30:35.961192 containerd[1921]: 2025-12-16 12:30:35.924 [INFO][5169] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11" Namespace="calico-system" Pod="csi-node-driver-fbpsm" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-csi--node--driver--fbpsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--719f16aeb7-k8s-csi--node--driver--fbpsm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8dc2a4ce-1cc0-4206-a57c-f0513b577cd6", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-719f16aeb7", ContainerID:"", Pod:"csi-node-driver-fbpsm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicb5c8a2607b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:35.961192 containerd[1921]: 2025-12-16 12:30:35.925 [INFO][5169] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.71/32] ContainerID="2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11" Namespace="calico-system" Pod="csi-node-driver-fbpsm" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-csi--node--driver--fbpsm-eth0" Dec 16 12:30:35.961192 containerd[1921]: 2025-12-16 12:30:35.925 [INFO][5169] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb5c8a2607b ContainerID="2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11" Namespace="calico-system" Pod="csi-node-driver-fbpsm" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-csi--node--driver--fbpsm-eth0" Dec 16 12:30:35.961192 containerd[1921]: 2025-12-16 12:30:35.943 [INFO][5169] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11" Namespace="calico-system" Pod="csi-node-driver-fbpsm" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-csi--node--driver--fbpsm-eth0" Dec 16 12:30:35.961192 
containerd[1921]: 2025-12-16 12:30:35.943 [INFO][5169] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11" Namespace="calico-system" Pod="csi-node-driver-fbpsm" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-csi--node--driver--fbpsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--719f16aeb7-k8s-csi--node--driver--fbpsm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8dc2a4ce-1cc0-4206-a57c-f0513b577cd6", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-719f16aeb7", ContainerID:"2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11", Pod:"csi-node-driver-fbpsm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicb5c8a2607b", MAC:"4e:4b:d6:a1:55:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:35.961192 containerd[1921]: 
2025-12-16 12:30:35.956 [INFO][5169] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11" Namespace="calico-system" Pod="csi-node-driver-fbpsm" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-csi--node--driver--fbpsm-eth0" Dec 16 12:30:36.021851 containerd[1921]: time="2025-12-16T12:30:36.021799050Z" level=info msg="connecting to shim 2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11" address="unix:///run/containerd/s/ae0a70bc2f244feeb87ef39d54158fe5d74f95c93b2db14d7ef0b17434a31e85" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:36.026029 containerd[1921]: time="2025-12-16T12:30:36.025988148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6879cffd7b-88t7g,Uid:8aede75b-84c4-4230-866b-6bdfa406b3b1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6aca68dbe56b3bd5683c26865cfa0b1b7a611729e81f2e664d800865e4b6158b\"" Dec 16 12:30:36.030996 containerd[1921]: time="2025-12-16T12:30:36.030931795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:30:36.057804 systemd[1]: Started cri-containerd-2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11.scope - libcontainer container 2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11. 
Dec 16 12:30:36.095885 containerd[1921]: time="2025-12-16T12:30:36.095818167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fbpsm,Uid:8dc2a4ce-1cc0-4206-a57c-f0513b577cd6,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f5b4129b641b91b682433febbdbd2a54a4b23076b57db12576fcf014f3c5e11\"" Dec 16 12:30:36.336035 containerd[1921]: time="2025-12-16T12:30:36.335929219Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:36.340706 containerd[1921]: time="2025-12-16T12:30:36.340593818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:30:36.340706 containerd[1921]: time="2025-12-16T12:30:36.340662868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:30:36.341194 kubelet[3449]: E1216 12:30:36.341064 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:36.341194 kubelet[3449]: E1216 12:30:36.341120 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:36.341571 kubelet[3449]: E1216 12:30:36.341307 3449 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container calico-apiserver start failed in pod calico-apiserver-6879cffd7b-88t7g_calico-apiserver(8aede75b-84c4-4230-866b-6bdfa406b3b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:36.341571 kubelet[3449]: E1216 12:30:36.341339 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-88t7g" podUID="8aede75b-84c4-4230-866b-6bdfa406b3b1" Dec 16 12:30:36.342297 containerd[1921]: time="2025-12-16T12:30:36.342181781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:30:36.411767 systemd-networkd[1485]: cali8459e532f2c: Gained IPv6LL Dec 16 12:30:36.604004 containerd[1921]: time="2025-12-16T12:30:36.603497836Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:36.606897 containerd[1921]: time="2025-12-16T12:30:36.606850223Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:30:36.607320 containerd[1921]: time="2025-12-16T12:30:36.606953658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:30:36.607369 kubelet[3449]: E1216 12:30:36.607251 3449 log.go:32] "PullImage from 
image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:30:36.607369 kubelet[3449]: E1216 12:30:36.607303 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:30:36.607750 kubelet[3449]: E1216 12:30:36.607552 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-fbpsm_calico-system(8dc2a4ce-1cc0-4206-a57c-f0513b577cd6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:36.609265 containerd[1921]: time="2025-12-16T12:30:36.609216320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:30:36.681849 containerd[1921]: time="2025-12-16T12:30:36.681796605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d45bz,Uid:57fe338c-87eb-41e3-8a54-022000644fe1,Namespace:kube-system,Attempt:0,}" Dec 16 12:30:36.805600 systemd-networkd[1485]: cali265a2b68968: Link UP Dec 16 12:30:36.806343 systemd-networkd[1485]: cali265a2b68968: Gained carrier Dec 16 12:30:36.830345 kubelet[3449]: E1216 12:30:36.830285 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-88t7g" podUID="8aede75b-84c4-4230-866b-6bdfa406b3b1" Dec 16 12:30:36.834071 kubelet[3449]: E1216 12:30:36.834020 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-d9x7j" podUID="4dc67b6f-ad58-4f41-a48a-50245539bb0d" Dec 16 12:30:36.841245 containerd[1921]: 2025-12-16 12:30:36.717 [INFO][5311] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--d45bz-eth0 coredns-66bc5c9577- kube-system 57fe338c-87eb-41e3-8a54-022000644fe1 800 0 2025-12-16 12:29:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-a-719f16aeb7 coredns-66bc5c9577-d45bz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali265a2b68968 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1" Namespace="kube-system" Pod="coredns-66bc5c9577-d45bz" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--d45bz-" Dec 16 
12:30:36.841245 containerd[1921]: 2025-12-16 12:30:36.718 [INFO][5311] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1" Namespace="kube-system" Pod="coredns-66bc5c9577-d45bz" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--d45bz-eth0" Dec 16 12:30:36.841245 containerd[1921]: 2025-12-16 12:30:36.739 [INFO][5324] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1" HandleID="k8s-pod-network.16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1" Workload="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--d45bz-eth0" Dec 16 12:30:36.841245 containerd[1921]: 2025-12-16 12:30:36.739 [INFO][5324] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1" HandleID="k8s-pod-network.16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1" Workload="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--d45bz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d30f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-a-719f16aeb7", "pod":"coredns-66bc5c9577-d45bz", "timestamp":"2025-12-16 12:30:36.739709347 +0000 UTC"}, Hostname:"ci-4459.2.2-a-719f16aeb7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:30:36.841245 containerd[1921]: 2025-12-16 12:30:36.739 [INFO][5324] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:30:36.841245 containerd[1921]: 2025-12-16 12:30:36.739 [INFO][5324] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:30:36.841245 containerd[1921]: 2025-12-16 12:30:36.739 [INFO][5324] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-719f16aeb7' Dec 16 12:30:36.841245 containerd[1921]: 2025-12-16 12:30:36.745 [INFO][5324] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:36.841245 containerd[1921]: 2025-12-16 12:30:36.748 [INFO][5324] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:36.841245 containerd[1921]: 2025-12-16 12:30:36.752 [INFO][5324] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:36.841245 containerd[1921]: 2025-12-16 12:30:36.753 [INFO][5324] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:36.841245 containerd[1921]: 2025-12-16 12:30:36.758 [INFO][5324] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:36.841245 containerd[1921]: 2025-12-16 12:30:36.759 [INFO][5324] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:36.841245 containerd[1921]: 2025-12-16 12:30:36.762 [INFO][5324] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1 Dec 16 12:30:36.841245 containerd[1921]: 2025-12-16 12:30:36.777 [INFO][5324] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:36.841245 containerd[1921]: 2025-12-16 12:30:36.797 [INFO][5324] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.38.72/26] block=192.168.38.64/26 handle="k8s-pod-network.16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:36.841245 containerd[1921]: 2025-12-16 12:30:36.797 [INFO][5324] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.72/26] handle="k8s-pod-network.16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1" host="ci-4459.2.2-a-719f16aeb7" Dec 16 12:30:36.841245 containerd[1921]: 2025-12-16 12:30:36.797 [INFO][5324] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:30:36.841245 containerd[1921]: 2025-12-16 12:30:36.797 [INFO][5324] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.72/26] IPv6=[] ContainerID="16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1" HandleID="k8s-pod-network.16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1" Workload="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--d45bz-eth0" Dec 16 12:30:36.842969 containerd[1921]: 2025-12-16 12:30:36.799 [INFO][5311] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1" Namespace="kube-system" Pod="coredns-66bc5c9577-d45bz" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--d45bz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--d45bz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"57fe338c-87eb-41e3-8a54-022000644fe1", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 29, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-719f16aeb7", ContainerID:"", Pod:"coredns-66bc5c9577-d45bz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali265a2b68968", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:36.842969 containerd[1921]: 2025-12-16 12:30:36.800 [INFO][5311] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.72/32] ContainerID="16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1" Namespace="kube-system" Pod="coredns-66bc5c9577-d45bz" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--d45bz-eth0" Dec 16 12:30:36.842969 containerd[1921]: 2025-12-16 12:30:36.800 [INFO][5311] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali265a2b68968 
ContainerID="16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1" Namespace="kube-system" Pod="coredns-66bc5c9577-d45bz" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--d45bz-eth0" Dec 16 12:30:36.842969 containerd[1921]: 2025-12-16 12:30:36.807 [INFO][5311] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1" Namespace="kube-system" Pod="coredns-66bc5c9577-d45bz" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--d45bz-eth0" Dec 16 12:30:36.842969 containerd[1921]: 2025-12-16 12:30:36.809 [INFO][5311] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1" Namespace="kube-system" Pod="coredns-66bc5c9577-d45bz" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--d45bz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--d45bz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"57fe338c-87eb-41e3-8a54-022000644fe1", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 29, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-719f16aeb7", ContainerID:"16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1", 
Pod:"coredns-66bc5c9577-d45bz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali265a2b68968", MAC:"62:bc:d1:89:20:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:36.843095 containerd[1921]: 2025-12-16 12:30:36.837 [INFO][5311] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1" Namespace="kube-system" Pod="coredns-66bc5c9577-d45bz" WorkloadEndpoint="ci--4459.2.2--a--719f16aeb7-k8s-coredns--66bc5c9577--d45bz-eth0" Dec 16 12:30:36.851365 containerd[1921]: time="2025-12-16T12:30:36.851291073Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:36.857053 containerd[1921]: time="2025-12-16T12:30:36.856684844Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:30:36.857710 containerd[1921]: time="2025-12-16T12:30:36.857682959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:30:36.858338 kubelet[3449]: E1216 12:30:36.858117 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:30:36.858338 kubelet[3449]: E1216 12:30:36.858173 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:30:36.859045 kubelet[3449]: E1216 12:30:36.858897 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-fbpsm_calico-system(8dc2a4ce-1cc0-4206-a57c-f0513b577cd6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:36.859045 kubelet[3449]: E1216 12:30:36.859017 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbpsm" podUID="8dc2a4ce-1cc0-4206-a57c-f0513b577cd6" Dec 16 12:30:36.892576 containerd[1921]: time="2025-12-16T12:30:36.892495758Z" level=info msg="connecting to shim 16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1" address="unix:///run/containerd/s/8e516a24252c2045a40439c8e7514dd4485659a9d50436aad978290d3e186707" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:36.936725 systemd[1]: Started cri-containerd-16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1.scope - libcontainer container 16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1. 
Dec 16 12:30:37.029056 containerd[1921]: time="2025-12-16T12:30:37.029002909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d45bz,Uid:57fe338c-87eb-41e3-8a54-022000644fe1,Namespace:kube-system,Attempt:0,} returns sandbox id \"16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1\"" Dec 16 12:30:37.039168 containerd[1921]: time="2025-12-16T12:30:37.038720086Z" level=info msg="CreateContainer within sandbox \"16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:30:37.065791 containerd[1921]: time="2025-12-16T12:30:37.065742536Z" level=info msg="Container c836fcc1a0cd48a4a18cbce19c1d763c2f7b7a3bef8466ea70c8f3f36133b436: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:37.083581 containerd[1921]: time="2025-12-16T12:30:37.083473140Z" level=info msg="CreateContainer within sandbox \"16194432bd64b54318ce8f1b78cba913e8689f0bd2479566d874a607f55982c1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c836fcc1a0cd48a4a18cbce19c1d763c2f7b7a3bef8466ea70c8f3f36133b436\"" Dec 16 12:30:37.086214 containerd[1921]: time="2025-12-16T12:30:37.086182598Z" level=info msg="StartContainer for \"c836fcc1a0cd48a4a18cbce19c1d763c2f7b7a3bef8466ea70c8f3f36133b436\"" Dec 16 12:30:37.088069 containerd[1921]: time="2025-12-16T12:30:37.088034385Z" level=info msg="connecting to shim c836fcc1a0cd48a4a18cbce19c1d763c2f7b7a3bef8466ea70c8f3f36133b436" address="unix:///run/containerd/s/8e516a24252c2045a40439c8e7514dd4485659a9d50436aad978290d3e186707" protocol=ttrpc version=3 Dec 16 12:30:37.109723 systemd[1]: Started cri-containerd-c836fcc1a0cd48a4a18cbce19c1d763c2f7b7a3bef8466ea70c8f3f36133b436.scope - libcontainer container c836fcc1a0cd48a4a18cbce19c1d763c2f7b7a3bef8466ea70c8f3f36133b436. 
Dec 16 12:30:37.147880 containerd[1921]: time="2025-12-16T12:30:37.147837481Z" level=info msg="StartContainer for \"c836fcc1a0cd48a4a18cbce19c1d763c2f7b7a3bef8466ea70c8f3f36133b436\" returns successfully" Dec 16 12:30:37.371741 systemd-networkd[1485]: calicb5c8a2607b: Gained IPv6LL Dec 16 12:30:37.435799 systemd-networkd[1485]: cali9dfd06db068: Gained IPv6LL Dec 16 12:30:37.837854 kubelet[3449]: E1216 12:30:37.837809 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-88t7g" podUID="8aede75b-84c4-4230-866b-6bdfa406b3b1" Dec 16 12:30:37.841112 kubelet[3449]: E1216 12:30:37.841072 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-fbpsm" podUID="8dc2a4ce-1cc0-4206-a57c-f0513b577cd6" Dec 16 12:30:37.854959 kubelet[3449]: I1216 12:30:37.854397 3449 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-d45bz" podStartSLOduration=42.854381644 podStartE2EDuration="42.854381644s" podCreationTimestamp="2025-12-16 12:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:30:37.854244312 +0000 UTC m=+48.270625744" watchObservedRunningTime="2025-12-16 12:30:37.854381644 +0000 UTC m=+48.270763076" Dec 16 12:30:38.140069 systemd-networkd[1485]: cali265a2b68968: Gained IPv6LL Dec 16 12:30:42.676689 containerd[1921]: time="2025-12-16T12:30:42.676635631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:30:42.965792 containerd[1921]: time="2025-12-16T12:30:42.965410679Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:42.970117 containerd[1921]: time="2025-12-16T12:30:42.970068501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:30:42.970226 containerd[1921]: time="2025-12-16T12:30:42.970158175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:30:42.971266 kubelet[3449]: E1216 12:30:42.971226 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:30:42.972122 kubelet[3449]: E1216 12:30:42.971654 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:30:42.972122 kubelet[3449]: E1216 12:30:42.971759 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7b8547f758-vr28q_calico-system(435c02b4-b196-420c-b959-5f71a5c70015): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:42.972696 containerd[1921]: time="2025-12-16T12:30:42.972665339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:30:43.240673 containerd[1921]: time="2025-12-16T12:30:43.240178719Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:43.244181 containerd[1921]: time="2025-12-16T12:30:43.244070248Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:30:43.244181 containerd[1921]: time="2025-12-16T12:30:43.244116289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:30:43.244471 kubelet[3449]: E1216 12:30:43.244429 3449 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:30:43.244613 kubelet[3449]: E1216 12:30:43.244587 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:30:43.244807 kubelet[3449]: E1216 12:30:43.244759 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7b8547f758-vr28q_calico-system(435c02b4-b196-420c-b959-5f71a5c70015): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:43.245075 kubelet[3449]: E1216 12:30:43.245021 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b8547f758-vr28q" podUID="435c02b4-b196-420c-b959-5f71a5c70015" Dec 16 12:30:47.678226 containerd[1921]: time="2025-12-16T12:30:47.677091865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:30:47.957011 containerd[1921]: time="2025-12-16T12:30:47.956739616Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:47.961099 containerd[1921]: time="2025-12-16T12:30:47.960987108Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:30:47.961099 containerd[1921]: time="2025-12-16T12:30:47.961045245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:30:47.961274 kubelet[3449]: E1216 12:30:47.961241 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:47.961521 kubelet[3449]: E1216 12:30:47.961285 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:47.961521 kubelet[3449]: E1216 12:30:47.961379 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container 
calico-apiserver start failed in pod calico-apiserver-6879cffd7b-mkgxz_calico-apiserver(12a67fa6-b880-41bc-a39b-0d6c266384bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:47.961521 kubelet[3449]: E1216 12:30:47.961407 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-mkgxz" podUID="12a67fa6-b880-41bc-a39b-0d6c266384bf" Dec 16 12:30:49.679816 containerd[1921]: time="2025-12-16T12:30:49.679773922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:30:49.948505 containerd[1921]: time="2025-12-16T12:30:49.947983648Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:49.952370 containerd[1921]: time="2025-12-16T12:30:49.952262445Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:30:49.952370 containerd[1921]: time="2025-12-16T12:30:49.952322798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:30:49.952583 kubelet[3449]: E1216 12:30:49.952524 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:49.952878 kubelet[3449]: E1216 12:30:49.952589 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:49.952878 kubelet[3449]: E1216 12:30:49.952668 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6879cffd7b-88t7g_calico-apiserver(8aede75b-84c4-4230-866b-6bdfa406b3b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:49.952878 kubelet[3449]: E1216 12:30:49.952699 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-88t7g" podUID="8aede75b-84c4-4230-866b-6bdfa406b3b1" Dec 16 12:30:50.677096 containerd[1921]: time="2025-12-16T12:30:50.677012510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:30:50.935022 containerd[1921]: time="2025-12-16T12:30:50.934502854Z" level=info msg="fetch 
failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:50.938477 containerd[1921]: time="2025-12-16T12:30:50.938355872Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:30:50.938477 containerd[1921]: time="2025-12-16T12:30:50.938366160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:30:50.938663 kubelet[3449]: E1216 12:30:50.938604 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:30:50.938663 kubelet[3449]: E1216 12:30:50.938655 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:30:50.938750 kubelet[3449]: E1216 12:30:50.938728 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-544fc6cbc8-s5srh_calico-system(746d77af-37dd-4af0-98d6-e8786f6ddd62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:50.938834 kubelet[3449]: E1216 12:30:50.938759 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-544fc6cbc8-s5srh" podUID="746d77af-37dd-4af0-98d6-e8786f6ddd62" Dec 16 12:30:51.678977 containerd[1921]: time="2025-12-16T12:30:51.678738805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:30:51.900323 containerd[1921]: time="2025-12-16T12:30:51.900275230Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:51.903875 containerd[1921]: time="2025-12-16T12:30:51.903789430Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:30:51.903875 containerd[1921]: time="2025-12-16T12:30:51.903844904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:30:51.904093 kubelet[3449]: E1216 12:30:51.904040 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:30:51.904093 kubelet[3449]: E1216 12:30:51.904089 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:30:51.905034 kubelet[3449]: E1216 12:30:51.904637 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-d9x7j_calico-system(4dc67b6f-ad58-4f41-a48a-50245539bb0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:51.905034 kubelet[3449]: E1216 12:30:51.904691 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-d9x7j" podUID="4dc67b6f-ad58-4f41-a48a-50245539bb0d" Dec 16 12:30:51.905132 containerd[1921]: time="2025-12-16T12:30:51.904897228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:30:52.245895 containerd[1921]: time="2025-12-16T12:30:52.245839239Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:52.249539 containerd[1921]: time="2025-12-16T12:30:52.249488611Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:30:52.249776 containerd[1921]: time="2025-12-16T12:30:52.249591094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:30:52.249971 kubelet[3449]: E1216 12:30:52.249862 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:30:52.249971 kubelet[3449]: E1216 12:30:52.249925 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:30:52.250122 kubelet[3449]: E1216 12:30:52.250105 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-fbpsm_calico-system(8dc2a4ce-1cc0-4206-a57c-f0513b577cd6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:52.251956 containerd[1921]: time="2025-12-16T12:30:52.251932414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:30:52.522750 containerd[1921]: time="2025-12-16T12:30:52.522390721Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:52.526768 containerd[1921]: time="2025-12-16T12:30:52.526715695Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:30:52.526947 containerd[1921]: time="2025-12-16T12:30:52.526734872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:30:52.527288 kubelet[3449]: E1216 12:30:52.527249 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:30:52.527473 kubelet[3449]: E1216 12:30:52.527458 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:30:52.527849 kubelet[3449]: E1216 12:30:52.527658 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-fbpsm_calico-system(8dc2a4ce-1cc0-4206-a57c-f0513b577cd6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:52.527849 kubelet[3449]: E1216 12:30:52.527700 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbpsm" podUID="8dc2a4ce-1cc0-4206-a57c-f0513b577cd6" Dec 16 12:30:56.677824 kubelet[3449]: E1216 12:30:56.677543 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b8547f758-vr28q" 
podUID="435c02b4-b196-420c-b959-5f71a5c70015" Dec 16 12:31:01.678722 kubelet[3449]: E1216 12:31:01.678131 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-88t7g" podUID="8aede75b-84c4-4230-866b-6bdfa406b3b1" Dec 16 12:31:02.675685 kubelet[3449]: E1216 12:31:02.675633 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-mkgxz" podUID="12a67fa6-b880-41bc-a39b-0d6c266384bf" Dec 16 12:31:05.681697 kubelet[3449]: E1216 12:31:05.681322 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-d9x7j" podUID="4dc67b6f-ad58-4f41-a48a-50245539bb0d" Dec 16 12:31:05.682132 kubelet[3449]: E1216 12:31:05.681786 
3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbpsm" podUID="8dc2a4ce-1cc0-4206-a57c-f0513b577cd6" Dec 16 12:31:06.677079 kubelet[3449]: E1216 12:31:06.677031 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-544fc6cbc8-s5srh" podUID="746d77af-37dd-4af0-98d6-e8786f6ddd62" Dec 16 12:31:07.677010 containerd[1921]: time="2025-12-16T12:31:07.676611983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:31:07.954789 containerd[1921]: time="2025-12-16T12:31:07.954427542Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 
12:31:07.957943 containerd[1921]: time="2025-12-16T12:31:07.957905516Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:31:07.958012 containerd[1921]: time="2025-12-16T12:31:07.957983982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:31:07.958189 kubelet[3449]: E1216 12:31:07.958146 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:31:07.958448 kubelet[3449]: E1216 12:31:07.958196 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:31:07.958448 kubelet[3449]: E1216 12:31:07.958262 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7b8547f758-vr28q_calico-system(435c02b4-b196-420c-b959-5f71a5c70015): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:31:07.959522 containerd[1921]: time="2025-12-16T12:31:07.959496094Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:31:08.228411 containerd[1921]: time="2025-12-16T12:31:08.228134040Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:31:08.231569 containerd[1921]: time="2025-12-16T12:31:08.231505146Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:31:08.231667 containerd[1921]: time="2025-12-16T12:31:08.231573868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:31:08.232397 kubelet[3449]: E1216 12:31:08.232191 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:31:08.232397 kubelet[3449]: E1216 12:31:08.232240 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:31:08.232397 kubelet[3449]: E1216 12:31:08.232321 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7b8547f758-vr28q_calico-system(435c02b4-b196-420c-b959-5f71a5c70015): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:31:08.232616 kubelet[3449]: E1216 12:31:08.232353 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b8547f758-vr28q" podUID="435c02b4-b196-420c-b959-5f71a5c70015" Dec 16 12:31:13.678695 containerd[1921]: time="2025-12-16T12:31:13.676442026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:31:13.922998 containerd[1921]: time="2025-12-16T12:31:13.922950400Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:31:13.926719 containerd[1921]: time="2025-12-16T12:31:13.926677868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:31:13.926832 containerd[1921]: time="2025-12-16T12:31:13.926745829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 
12:31:13.926927 kubelet[3449]: E1216 12:31:13.926882 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:31:13.927201 kubelet[3449]: E1216 12:31:13.926927 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:31:13.927201 kubelet[3449]: E1216 12:31:13.927016 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6879cffd7b-mkgxz_calico-apiserver(12a67fa6-b880-41bc-a39b-0d6c266384bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:31:13.927201 kubelet[3449]: E1216 12:31:13.927044 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-mkgxz" podUID="12a67fa6-b880-41bc-a39b-0d6c266384bf" Dec 16 12:31:16.675580 containerd[1921]: time="2025-12-16T12:31:16.675523668Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:31:16.917351 containerd[1921]: time="2025-12-16T12:31:16.917284810Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:31:16.921307 containerd[1921]: time="2025-12-16T12:31:16.921258381Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:31:16.921487 containerd[1921]: time="2025-12-16T12:31:16.921315550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:31:16.921707 kubelet[3449]: E1216 12:31:16.921653 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:31:16.922388 kubelet[3449]: E1216 12:31:16.922012 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:31:16.922388 kubelet[3449]: E1216 12:31:16.922107 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6879cffd7b-88t7g_calico-apiserver(8aede75b-84c4-4230-866b-6bdfa406b3b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:31:16.922534 kubelet[3449]: E1216 12:31:16.922508 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-88t7g" podUID="8aede75b-84c4-4230-866b-6bdfa406b3b1" Dec 16 12:31:17.678671 containerd[1921]: time="2025-12-16T12:31:17.677200188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:31:17.937950 containerd[1921]: time="2025-12-16T12:31:17.937471122Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:31:17.941376 containerd[1921]: time="2025-12-16T12:31:17.941288289Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:31:17.941376 containerd[1921]: time="2025-12-16T12:31:17.941351787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:31:17.942075 kubelet[3449]: E1216 12:31:17.942035 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 
12:31:17.942848 kubelet[3449]: E1216 12:31:17.942080 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:31:17.942848 kubelet[3449]: E1216 12:31:17.942145 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-d9x7j_calico-system(4dc67b6f-ad58-4f41-a48a-50245539bb0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:31:17.942848 kubelet[3449]: E1216 12:31:17.942172 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-d9x7j" podUID="4dc67b6f-ad58-4f41-a48a-50245539bb0d" Dec 16 12:31:18.677079 containerd[1921]: time="2025-12-16T12:31:18.677038855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:31:18.933955 containerd[1921]: time="2025-12-16T12:31:18.933599672Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:31:18.938119 containerd[1921]: time="2025-12-16T12:31:18.938061632Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:31:18.938407 kubelet[3449]: E1216 12:31:18.938364 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:31:18.938475 containerd[1921]: time="2025-12-16T12:31:18.938096529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:31:18.938581 kubelet[3449]: E1216 12:31:18.938538 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:31:18.938725 kubelet[3449]: E1216 12:31:18.938708 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-fbpsm_calico-system(8dc2a4ce-1cc0-4206-a57c-f0513b577cd6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:31:18.939551 containerd[1921]: time="2025-12-16T12:31:18.939497670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:31:19.209489 containerd[1921]: time="2025-12-16T12:31:19.209369092Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:31:19.212910 containerd[1921]: time="2025-12-16T12:31:19.212774200Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:31:19.212910 containerd[1921]: time="2025-12-16T12:31:19.212857258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:31:19.213097 kubelet[3449]: E1216 12:31:19.213052 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:31:19.213914 kubelet[3449]: E1216 12:31:19.213115 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:31:19.213914 kubelet[3449]: E1216 12:31:19.213197 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-fbpsm_calico-system(8dc2a4ce-1cc0-4206-a57c-f0513b577cd6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
logger="UnhandledError" Dec 16 12:31:19.213914 kubelet[3449]: E1216 12:31:19.213231 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbpsm" podUID="8dc2a4ce-1cc0-4206-a57c-f0513b577cd6" Dec 16 12:31:20.676803 containerd[1921]: time="2025-12-16T12:31:20.676759722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:31:20.966470 containerd[1921]: time="2025-12-16T12:31:20.966320999Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:31:20.971267 containerd[1921]: time="2025-12-16T12:31:20.971213410Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:31:20.971371 containerd[1921]: time="2025-12-16T12:31:20.971307340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:31:20.971496 kubelet[3449]: E1216 12:31:20.971449 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:31:20.971831 kubelet[3449]: E1216 12:31:20.971496 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:31:20.971831 kubelet[3449]: E1216 12:31:20.971597 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-544fc6cbc8-s5srh_calico-system(746d77af-37dd-4af0-98d6-e8786f6ddd62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:31:20.972000 kubelet[3449]: E1216 12:31:20.971628 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-544fc6cbc8-s5srh" podUID="746d77af-37dd-4af0-98d6-e8786f6ddd62" Dec 16 12:31:22.677181 kubelet[3449]: E1216 12:31:22.677127 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b8547f758-vr28q" podUID="435c02b4-b196-420c-b959-5f71a5c70015" Dec 16 12:31:26.675219 kubelet[3449]: E1216 12:31:26.675169 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-mkgxz" podUID="12a67fa6-b880-41bc-a39b-0d6c266384bf" Dec 16 12:31:30.676698 kubelet[3449]: E1216 12:31:30.676628 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbpsm" podUID="8dc2a4ce-1cc0-4206-a57c-f0513b577cd6" Dec 16 12:31:31.679740 kubelet[3449]: E1216 12:31:31.679702 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-88t7g" podUID="8aede75b-84c4-4230-866b-6bdfa406b3b1" Dec 16 12:31:32.676221 kubelet[3449]: E1216 12:31:32.676051 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-d9x7j" podUID="4dc67b6f-ad58-4f41-a48a-50245539bb0d" Dec 16 12:31:32.676221 kubelet[3449]: E1216 12:31:32.676051 3449 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-544fc6cbc8-s5srh" podUID="746d77af-37dd-4af0-98d6-e8786f6ddd62" Dec 16 12:31:34.677849 kubelet[3449]: E1216 12:31:34.677752 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b8547f758-vr28q" podUID="435c02b4-b196-420c-b959-5f71a5c70015" Dec 16 12:31:35.214727 systemd[1]: Started sshd@7-10.200.20.4:22-10.200.16.10:55990.service - OpenSSH per-connection server daemon (10.200.16.10:55990). 
Dec 16 12:31:35.706835 sshd[5523]: Accepted publickey for core from 10.200.16.10 port 55990 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:35.709815 sshd-session[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:35.717666 systemd-logind[1858]: New session 10 of user core. Dec 16 12:31:35.720932 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:31:36.131966 sshd[5526]: Connection closed by 10.200.16.10 port 55990 Dec 16 12:31:36.132868 sshd-session[5523]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:36.138916 systemd[1]: sshd@7-10.200.20.4:22-10.200.16.10:55990.service: Deactivated successfully. Dec 16 12:31:36.143284 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:31:36.145531 systemd-logind[1858]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:31:36.148129 systemd-logind[1858]: Removed session 10. Dec 16 12:31:39.678082 kubelet[3449]: E1216 12:31:39.677926 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-mkgxz" podUID="12a67fa6-b880-41bc-a39b-0d6c266384bf" Dec 16 12:31:41.221465 systemd[1]: Started sshd@8-10.200.20.4:22-10.200.16.10:59502.service - OpenSSH per-connection server daemon (10.200.16.10:59502). 
Dec 16 12:31:41.677710 kubelet[3449]: E1216 12:31:41.677662 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbpsm" podUID="8dc2a4ce-1cc0-4206-a57c-f0513b577cd6"
Dec 16 12:31:41.715457 sshd[5541]: Accepted publickey for core from 10.200.16.10 port 59502 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:31:41.743114 sshd-session[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:31:41.747491 systemd-logind[1858]: New session 11 of user core.
Dec 16 12:31:41.755681 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 16 12:31:42.106926 sshd[5545]: Connection closed by 10.200.16.10 port 59502
Dec 16 12:31:42.107475 sshd-session[5541]: pam_unix(sshd:session): session closed for user core
Dec 16 12:31:42.112450 systemd[1]: sshd@8-10.200.20.4:22-10.200.16.10:59502.service: Deactivated successfully.
Dec 16 12:31:42.114923 systemd[1]: session-11.scope: Deactivated successfully.
Dec 16 12:31:42.117642 systemd-logind[1858]: Session 11 logged out. Waiting for processes to exit.
Dec 16 12:31:42.119135 systemd-logind[1858]: Removed session 11.
Dec 16 12:31:44.676633 kubelet[3449]: E1216 12:31:44.676351 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-544fc6cbc8-s5srh" podUID="746d77af-37dd-4af0-98d6-e8786f6ddd62"
Dec 16 12:31:44.676633 kubelet[3449]: E1216 12:31:44.676569 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-d9x7j" podUID="4dc67b6f-ad58-4f41-a48a-50245539bb0d"
Dec 16 12:31:46.676431 kubelet[3449]: E1216 12:31:46.676258 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-88t7g" podUID="8aede75b-84c4-4230-866b-6bdfa406b3b1"
Dec 16 12:31:47.192353 systemd[1]: Started sshd@9-10.200.20.4:22-10.200.16.10:59504.service - OpenSSH per-connection server daemon (10.200.16.10:59504).
Dec 16 12:31:47.649398 sshd[5558]: Accepted publickey for core from 10.200.16.10 port 59504 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:31:47.650484 sshd-session[5558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:31:47.654364 systemd-logind[1858]: New session 12 of user core.
Dec 16 12:31:47.667694 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 16 12:31:48.035927 sshd[5561]: Connection closed by 10.200.16.10 port 59504
Dec 16 12:31:48.036716 sshd-session[5558]: pam_unix(sshd:session): session closed for user core
Dec 16 12:31:48.040050 systemd[1]: sshd@9-10.200.20.4:22-10.200.16.10:59504.service: Deactivated successfully.
Dec 16 12:31:48.042501 systemd[1]: session-12.scope: Deactivated successfully.
Dec 16 12:31:48.044682 systemd-logind[1858]: Session 12 logged out. Waiting for processes to exit.
Dec 16 12:31:48.046085 systemd-logind[1858]: Removed session 12.
Dec 16 12:31:48.675816 containerd[1921]: time="2025-12-16T12:31:48.675767854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 16 12:31:48.923593 containerd[1921]: time="2025-12-16T12:31:48.923066787Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:31:48.927801 containerd[1921]: time="2025-12-16T12:31:48.927677503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 16 12:31:48.927801 containerd[1921]: time="2025-12-16T12:31:48.927763385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 16 12:31:48.928482 kubelet[3449]: E1216 12:31:48.928138 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 12:31:48.928482 kubelet[3449]: E1216 12:31:48.928194 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 12:31:48.928482 kubelet[3449]: E1216 12:31:48.928280 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7b8547f758-vr28q_calico-system(435c02b4-b196-420c-b959-5f71a5c70015): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:31:48.930317 containerd[1921]: time="2025-12-16T12:31:48.930248348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 16 12:31:49.204627 containerd[1921]: time="2025-12-16T12:31:49.204191003Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:31:49.207868 containerd[1921]: time="2025-12-16T12:31:49.207758850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 16 12:31:49.208032 containerd[1921]: time="2025-12-16T12:31:49.207948800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 16 12:31:49.208195 kubelet[3449]: E1216 12:31:49.208151 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 12:31:49.208255 kubelet[3449]: E1216 12:31:49.208199 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 12:31:49.208316 kubelet[3449]: E1216 12:31:49.208295 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7b8547f758-vr28q_calico-system(435c02b4-b196-420c-b959-5f71a5c70015): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:31:49.208690 kubelet[3449]: E1216 12:31:49.208663 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b8547f758-vr28q" podUID="435c02b4-b196-420c-b959-5f71a5c70015"
Dec 16 12:31:53.126981 systemd[1]: Started sshd@10-10.200.20.4:22-10.200.16.10:44840.service - OpenSSH per-connection server daemon (10.200.16.10:44840).
Dec 16 12:31:53.620527 sshd[5582]: Accepted publickey for core from 10.200.16.10 port 44840 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:31:53.621693 sshd-session[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:31:53.628634 systemd-logind[1858]: New session 13 of user core.
Dec 16 12:31:53.632922 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 16 12:31:54.022573 sshd[5585]: Connection closed by 10.200.16.10 port 44840
Dec 16 12:31:54.023111 sshd-session[5582]: pam_unix(sshd:session): session closed for user core
Dec 16 12:31:54.027676 systemd[1]: sshd@10-10.200.20.4:22-10.200.16.10:44840.service: Deactivated successfully.
Dec 16 12:31:54.030623 systemd[1]: session-13.scope: Deactivated successfully.
Dec 16 12:31:54.031493 systemd-logind[1858]: Session 13 logged out. Waiting for processes to exit.
Dec 16 12:31:54.033221 systemd-logind[1858]: Removed session 13.
Dec 16 12:31:54.676393 containerd[1921]: time="2025-12-16T12:31:54.675515093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 12:31:54.943924 containerd[1921]: time="2025-12-16T12:31:54.943761932Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:31:54.948429 containerd[1921]: time="2025-12-16T12:31:54.948263645Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 12:31:54.948429 containerd[1921]: time="2025-12-16T12:31:54.948319118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 12:31:54.949692 kubelet[3449]: E1216 12:31:54.948472 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:31:54.949692 kubelet[3449]: E1216 12:31:54.948518 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:31:54.949692 kubelet[3449]: E1216 12:31:54.948606 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6879cffd7b-mkgxz_calico-apiserver(12a67fa6-b880-41bc-a39b-0d6c266384bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:31:54.949692 kubelet[3449]: E1216 12:31:54.948635 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-mkgxz" podUID="12a67fa6-b880-41bc-a39b-0d6c266384bf"
Dec 16 12:31:56.676399 kubelet[3449]: E1216 12:31:56.676351 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbpsm" podUID="8dc2a4ce-1cc0-4206-a57c-f0513b577cd6"
Dec 16 12:31:57.679817 kubelet[3449]: E1216 12:31:57.678977 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-544fc6cbc8-s5srh" podUID="746d77af-37dd-4af0-98d6-e8786f6ddd62"
Dec 16 12:31:58.676200 containerd[1921]: time="2025-12-16T12:31:58.676022542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 16 12:31:58.934052 containerd[1921]: time="2025-12-16T12:31:58.933673100Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:31:58.941496 containerd[1921]: time="2025-12-16T12:31:58.941385179Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 16 12:31:58.941496 containerd[1921]: time="2025-12-16T12:31:58.941468229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 16 12:31:58.941832 kubelet[3449]: E1216 12:31:58.941787 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 12:31:58.943198 kubelet[3449]: E1216 12:31:58.941838 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 12:31:58.943198 kubelet[3449]: E1216 12:31:58.941912 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-d9x7j_calico-system(4dc67b6f-ad58-4f41-a48a-50245539bb0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:31:58.943198 kubelet[3449]: E1216 12:31:58.941948 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-d9x7j" podUID="4dc67b6f-ad58-4f41-a48a-50245539bb0d"
Dec 16 12:31:59.111312 systemd[1]: Started sshd@11-10.200.20.4:22-10.200.16.10:44846.service - OpenSSH per-connection server daemon (10.200.16.10:44846).
Dec 16 12:31:59.599957 sshd[5617]: Accepted publickey for core from 10.200.16.10 port 44846 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:31:59.600980 sshd-session[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:31:59.604794 systemd-logind[1858]: New session 14 of user core.
Dec 16 12:31:59.610803 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 16 12:32:00.026212 sshd[5620]: Connection closed by 10.200.16.10 port 44846
Dec 16 12:32:00.026741 sshd-session[5617]: pam_unix(sshd:session): session closed for user core
Dec 16 12:32:00.032933 systemd[1]: sshd@11-10.200.20.4:22-10.200.16.10:44846.service: Deactivated successfully.
Dec 16 12:32:00.035427 systemd[1]: session-14.scope: Deactivated successfully.
Dec 16 12:32:00.036188 systemd-logind[1858]: Session 14 logged out. Waiting for processes to exit.
Dec 16 12:32:00.037970 systemd-logind[1858]: Removed session 14.
Dec 16 12:32:00.108778 systemd[1]: Started sshd@12-10.200.20.4:22-10.200.16.10:60682.service - OpenSSH per-connection server daemon (10.200.16.10:60682).
Dec 16 12:32:00.560205 sshd[5633]: Accepted publickey for core from 10.200.16.10 port 60682 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:32:00.561352 sshd-session[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:32:00.565443 systemd-logind[1858]: New session 15 of user core.
Dec 16 12:32:00.572699 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 16 12:32:00.675918 containerd[1921]: time="2025-12-16T12:32:00.675398342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 12:32:00.906517 containerd[1921]: time="2025-12-16T12:32:00.906168436Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:32:00.910131 containerd[1921]: time="2025-12-16T12:32:00.909930673Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 12:32:00.910131 containerd[1921]: time="2025-12-16T12:32:00.910021267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 12:32:00.910787 kubelet[3449]: E1216 12:32:00.910584 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:32:00.910787 kubelet[3449]: E1216 12:32:00.910700 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:32:00.911777 kubelet[3449]: E1216 12:32:00.911693 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6879cffd7b-88t7g_calico-apiserver(8aede75b-84c4-4230-866b-6bdfa406b3b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:32:00.911777 kubelet[3449]: E1216 12:32:00.911736 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-88t7g" podUID="8aede75b-84c4-4230-866b-6bdfa406b3b1"
Dec 16 12:32:00.968538 sshd[5636]: Connection closed by 10.200.16.10 port 60682
Dec 16 12:32:00.969084 sshd-session[5633]: pam_unix(sshd:session): session closed for user core
Dec 16 12:32:00.972419 systemd-logind[1858]: Session 15 logged out. Waiting for processes to exit.
Dec 16 12:32:00.972984 systemd[1]: sshd@12-10.200.20.4:22-10.200.16.10:60682.service: Deactivated successfully.
Dec 16 12:32:00.977863 systemd[1]: session-15.scope: Deactivated successfully.
Dec 16 12:32:00.981385 systemd-logind[1858]: Removed session 15.
Dec 16 12:32:01.069785 systemd[1]: Started sshd@13-10.200.20.4:22-10.200.16.10:60694.service - OpenSSH per-connection server daemon (10.200.16.10:60694).
Dec 16 12:32:01.567581 sshd[5652]: Accepted publickey for core from 10.200.16.10 port 60694 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:32:01.568499 sshd-session[5652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:32:01.574748 systemd-logind[1858]: New session 16 of user core.
Dec 16 12:32:01.578703 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 16 12:32:01.971308 sshd[5672]: Connection closed by 10.200.16.10 port 60694
Dec 16 12:32:01.971831 sshd-session[5652]: pam_unix(sshd:session): session closed for user core
Dec 16 12:32:01.975353 systemd-logind[1858]: Session 16 logged out. Waiting for processes to exit.
Dec 16 12:32:01.975687 systemd[1]: sshd@13-10.200.20.4:22-10.200.16.10:60694.service: Deactivated successfully.
Dec 16 12:32:01.978174 systemd[1]: session-16.scope: Deactivated successfully.
Dec 16 12:32:01.981955 systemd-logind[1858]: Removed session 16.
Dec 16 12:32:02.677736 kubelet[3449]: E1216 12:32:02.677675 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b8547f758-vr28q" podUID="435c02b4-b196-420c-b959-5f71a5c70015"
Dec 16 12:32:06.675758 kubelet[3449]: E1216 12:32:06.675528 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-mkgxz" podUID="12a67fa6-b880-41bc-a39b-0d6c266384bf"
Dec 16 12:32:07.053753 systemd[1]: Started sshd@14-10.200.20.4:22-10.200.16.10:60696.service - OpenSSH per-connection server daemon (10.200.16.10:60696).
Dec 16 12:32:07.507243 sshd[5692]: Accepted publickey for core from 10.200.16.10 port 60696 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:32:07.509216 sshd-session[5692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:32:07.515201 systemd-logind[1858]: New session 17 of user core.
Dec 16 12:32:07.518810 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 16 12:32:07.922819 sshd[5695]: Connection closed by 10.200.16.10 port 60696
Dec 16 12:32:07.925516 sshd-session[5692]: pam_unix(sshd:session): session closed for user core
Dec 16 12:32:07.929851 systemd[1]: sshd@14-10.200.20.4:22-10.200.16.10:60696.service: Deactivated successfully.
Dec 16 12:32:07.934416 systemd[1]: session-17.scope: Deactivated successfully.
Dec 16 12:32:07.935685 systemd-logind[1858]: Session 17 logged out. Waiting for processes to exit.
Dec 16 12:32:07.937872 systemd-logind[1858]: Removed session 17.
Dec 16 12:32:10.676319 containerd[1921]: time="2025-12-16T12:32:10.675874351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 16 12:32:10.914483 containerd[1921]: time="2025-12-16T12:32:10.914426707Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:32:10.919821 containerd[1921]: time="2025-12-16T12:32:10.919766691Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 16 12:32:10.919916 containerd[1921]: time="2025-12-16T12:32:10.919860941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 16 12:32:10.920102 kubelet[3449]: E1216 12:32:10.920054 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 12:32:10.920513 kubelet[3449]: E1216 12:32:10.920112 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 12:32:10.920513 kubelet[3449]: E1216 12:32:10.920188 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-544fc6cbc8-s5srh_calico-system(746d77af-37dd-4af0-98d6-e8786f6ddd62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:32:10.920513 kubelet[3449]: E1216 12:32:10.920217 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-544fc6cbc8-s5srh" podUID="746d77af-37dd-4af0-98d6-e8786f6ddd62"
Dec 16 12:32:11.679200 containerd[1921]: time="2025-12-16T12:32:11.679157889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 16 12:32:11.974197 containerd[1921]: time="2025-12-16T12:32:11.974031769Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:32:11.978096 containerd[1921]: time="2025-12-16T12:32:11.978023292Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 16 12:32:11.978096 containerd[1921]: time="2025-12-16T12:32:11.978061413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 16 12:32:11.978455 kubelet[3449]: E1216 12:32:11.978409 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 12:32:11.979254 kubelet[3449]: E1216 12:32:11.978463 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 12:32:11.979254 kubelet[3449]: E1216 12:32:11.978533 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-fbpsm_calico-system(8dc2a4ce-1cc0-4206-a57c-f0513b577cd6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:32:11.980174 containerd[1921]: time="2025-12-16T12:32:11.980120997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 16 12:32:12.255962 containerd[1921]: time="2025-12-16T12:32:12.255827545Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:32:12.260841 containerd[1921]: time="2025-12-16T12:32:12.260733629Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 16 12:32:12.260841 containerd[1921]: time="2025-12-16T12:32:12.260787487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 16 12:32:12.261211 kubelet[3449]: E1216 12:32:12.261141 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 12:32:12.261211 kubelet[3449]: E1216 12:32:12.261190 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 12:32:12.261456 kubelet[3449]: E1216 12:32:12.261394 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-fbpsm_calico-system(8dc2a4ce-1cc0-4206-a57c-f0513b577cd6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:32:12.261592 kubelet[3449]: E1216 12:32:12.261437 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbpsm" podUID="8dc2a4ce-1cc0-4206-a57c-f0513b577cd6"
Dec 16 12:32:12.675552 kubelet[3449]: E1216 12:32:12.675416 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-d9x7j" podUID="4dc67b6f-ad58-4f41-a48a-50245539bb0d"
Dec 16 12:32:13.014046 systemd[1]: Started sshd@15-10.200.20.4:22-10.200.16.10:54236.service - OpenSSH per-connection server daemon (10.200.16.10:54236).
Dec 16 12:32:13.511413 sshd[5706]: Accepted publickey for core from 10.200.16.10 port 54236 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:32:13.512542 sshd-session[5706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:32:13.517707 systemd-logind[1858]: New session 18 of user core.
Dec 16 12:32:13.523691 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 12:32:13.924218 sshd[5709]: Connection closed by 10.200.16.10 port 54236
Dec 16 12:32:13.924847 sshd-session[5706]: pam_unix(sshd:session): session closed for user core
Dec 16 12:32:13.928809 systemd-logind[1858]: Session 18 logged out. Waiting for processes to exit.
Dec 16 12:32:13.928957 systemd[1]: sshd@15-10.200.20.4:22-10.200.16.10:54236.service: Deactivated successfully. Dec 16 12:32:13.931544 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:32:13.933272 systemd-logind[1858]: Removed session 18. Dec 16 12:32:14.676248 kubelet[3449]: E1216 12:32:14.676188 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-88t7g" podUID="8aede75b-84c4-4230-866b-6bdfa406b3b1" Dec 16 12:32:14.677805 kubelet[3449]: E1216 12:32:14.677175 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b8547f758-vr28q" podUID="435c02b4-b196-420c-b959-5f71a5c70015" Dec 16 12:32:18.676624 kubelet[3449]: E1216 
12:32:18.676353 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-mkgxz" podUID="12a67fa6-b880-41bc-a39b-0d6c266384bf" Dec 16 12:32:19.013793 systemd[1]: Started sshd@16-10.200.20.4:22-10.200.16.10:54250.service - OpenSSH per-connection server daemon (10.200.16.10:54250). Dec 16 12:32:19.497596 sshd[5726]: Accepted publickey for core from 10.200.16.10 port 54250 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:19.501148 sshd-session[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:19.506918 systemd-logind[1858]: New session 19 of user core. Dec 16 12:32:19.510682 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 12:32:19.899735 sshd[5729]: Connection closed by 10.200.16.10 port 54250 Dec 16 12:32:19.900417 sshd-session[5726]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:19.904613 systemd[1]: sshd@16-10.200.20.4:22-10.200.16.10:54250.service: Deactivated successfully. Dec 16 12:32:19.906737 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:32:19.907818 systemd-logind[1858]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:32:19.909519 systemd-logind[1858]: Removed session 19. 
Dec 16 12:32:22.675279 kubelet[3449]: E1216 12:32:22.675078 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-544fc6cbc8-s5srh" podUID="746d77af-37dd-4af0-98d6-e8786f6ddd62" Dec 16 12:32:24.677387 kubelet[3449]: E1216 12:32:24.676690 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbpsm" podUID="8dc2a4ce-1cc0-4206-a57c-f0513b577cd6" Dec 16 12:32:24.995442 systemd[1]: Started sshd@17-10.200.20.4:22-10.200.16.10:58182.service - OpenSSH per-connection server daemon (10.200.16.10:58182). 
Dec 16 12:32:25.486619 sshd[5743]: Accepted publickey for core from 10.200.16.10 port 58182 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:25.487726 sshd-session[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:25.492746 systemd-logind[1858]: New session 20 of user core. Dec 16 12:32:25.497700 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 12:32:25.677074 kubelet[3449]: E1216 12:32:25.677022 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-88t7g" podUID="8aede75b-84c4-4230-866b-6bdfa406b3b1" Dec 16 12:32:25.902692 sshd[5746]: Connection closed by 10.200.16.10 port 58182 Dec 16 12:32:25.903378 sshd-session[5743]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:25.907692 systemd[1]: sshd@17-10.200.20.4:22-10.200.16.10:58182.service: Deactivated successfully. Dec 16 12:32:25.912449 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:32:25.915510 systemd-logind[1858]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:32:25.916879 systemd-logind[1858]: Removed session 20. 
Dec 16 12:32:26.675284 kubelet[3449]: E1216 12:32:26.675239 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-d9x7j" podUID="4dc67b6f-ad58-4f41-a48a-50245539bb0d" Dec 16 12:32:29.676246 kubelet[3449]: E1216 12:32:29.676078 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b8547f758-vr28q" podUID="435c02b4-b196-420c-b959-5f71a5c70015" Dec 16 12:32:30.992761 systemd[1]: Started sshd@18-10.200.20.4:22-10.200.16.10:42062.service - OpenSSH per-connection server daemon (10.200.16.10:42062). 
Dec 16 12:32:31.487637 sshd[5760]: Accepted publickey for core from 10.200.16.10 port 42062 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:31.504160 sshd-session[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:31.509793 systemd-logind[1858]: New session 21 of user core. Dec 16 12:32:31.514732 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 16 12:32:31.675722 kubelet[3449]: E1216 12:32:31.675674 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-mkgxz" podUID="12a67fa6-b880-41bc-a39b-0d6c266384bf" Dec 16 12:32:31.894979 sshd[5788]: Connection closed by 10.200.16.10 port 42062 Dec 16 12:32:31.895768 sshd-session[5760]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:31.900076 systemd-logind[1858]: Session 21 logged out. Waiting for processes to exit. Dec 16 12:32:31.901069 systemd[1]: sshd@18-10.200.20.4:22-10.200.16.10:42062.service: Deactivated successfully. Dec 16 12:32:31.904274 systemd[1]: session-21.scope: Deactivated successfully. Dec 16 12:32:31.906473 systemd-logind[1858]: Removed session 21. 
Dec 16 12:32:36.675925 kubelet[3449]: E1216 12:32:36.675873 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-544fc6cbc8-s5srh" podUID="746d77af-37dd-4af0-98d6-e8786f6ddd62" Dec 16 12:32:36.978782 systemd[1]: Started sshd@19-10.200.20.4:22-10.200.16.10:42070.service - OpenSSH per-connection server daemon (10.200.16.10:42070). Dec 16 12:32:37.434900 sshd[5799]: Accepted publickey for core from 10.200.16.10 port 42070 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:37.436471 sshd-session[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:37.440156 systemd-logind[1858]: New session 22 of user core. Dec 16 12:32:37.447698 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 16 12:32:37.679174 kubelet[3449]: E1216 12:32:37.679109 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbpsm" podUID="8dc2a4ce-1cc0-4206-a57c-f0513b577cd6" Dec 16 12:32:37.813537 sshd[5802]: Connection closed by 10.200.16.10 port 42070 Dec 16 12:32:37.814105 sshd-session[5799]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:37.817348 systemd[1]: sshd@19-10.200.20.4:22-10.200.16.10:42070.service: Deactivated successfully. Dec 16 12:32:37.819224 systemd[1]: session-22.scope: Deactivated successfully. Dec 16 12:32:37.819970 systemd-logind[1858]: Session 22 logged out. Waiting for processes to exit. Dec 16 12:32:37.821083 systemd-logind[1858]: Removed session 22. Dec 16 12:32:37.899449 systemd[1]: Started sshd@20-10.200.20.4:22-10.200.16.10:42084.service - OpenSSH per-connection server daemon (10.200.16.10:42084). 
Dec 16 12:32:38.392584 sshd[5814]: Accepted publickey for core from 10.200.16.10 port 42084 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:38.393739 sshd-session[5814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:38.400617 systemd-logind[1858]: New session 23 of user core. Dec 16 12:32:38.410954 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 16 12:32:38.906768 sshd[5817]: Connection closed by 10.200.16.10 port 42084 Dec 16 12:32:38.907372 sshd-session[5814]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:38.910591 systemd[1]: sshd@20-10.200.20.4:22-10.200.16.10:42084.service: Deactivated successfully. Dec 16 12:32:38.912346 systemd[1]: session-23.scope: Deactivated successfully. Dec 16 12:32:38.913065 systemd-logind[1858]: Session 23 logged out. Waiting for processes to exit. Dec 16 12:32:38.914709 systemd-logind[1858]: Removed session 23. Dec 16 12:32:39.010902 systemd[1]: Started sshd@21-10.200.20.4:22-10.200.16.10:42088.service - OpenSSH per-connection server daemon (10.200.16.10:42088). Dec 16 12:32:39.508115 sshd[5827]: Accepted publickey for core from 10.200.16.10 port 42088 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:39.509249 sshd-session[5827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:39.513313 systemd-logind[1858]: New session 24 of user core. Dec 16 12:32:39.521737 systemd[1]: Started session-24.scope - Session 24 of User core. 
Dec 16 12:32:39.676576 kubelet[3449]: E1216 12:32:39.676456 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-d9x7j" podUID="4dc67b6f-ad58-4f41-a48a-50245539bb0d" Dec 16 12:32:40.317929 sshd[5830]: Connection closed by 10.200.16.10 port 42088 Dec 16 12:32:40.320298 sshd-session[5827]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:40.325241 systemd[1]: sshd@21-10.200.20.4:22-10.200.16.10:42088.service: Deactivated successfully. Dec 16 12:32:40.325438 systemd-logind[1858]: Session 24 logged out. Waiting for processes to exit. Dec 16 12:32:40.329160 systemd[1]: session-24.scope: Deactivated successfully. Dec 16 12:32:40.330603 systemd-logind[1858]: Removed session 24. Dec 16 12:32:40.406349 systemd[1]: Started sshd@22-10.200.20.4:22-10.200.16.10:57816.service - OpenSSH per-connection server daemon (10.200.16.10:57816). 
Dec 16 12:32:40.677070 kubelet[3449]: E1216 12:32:40.675499 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-88t7g" podUID="8aede75b-84c4-4230-866b-6bdfa406b3b1" Dec 16 12:32:40.916595 sshd[5850]: Accepted publickey for core from 10.200.16.10 port 57816 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:40.917534 sshd-session[5850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:40.921828 systemd-logind[1858]: New session 25 of user core. Dec 16 12:32:40.927704 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 16 12:32:41.407268 sshd[5853]: Connection closed by 10.200.16.10 port 57816 Dec 16 12:32:41.407643 sshd-session[5850]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:41.412066 systemd-logind[1858]: Session 25 logged out. Waiting for processes to exit. Dec 16 12:32:41.412907 systemd[1]: sshd@22-10.200.20.4:22-10.200.16.10:57816.service: Deactivated successfully. Dec 16 12:32:41.414943 systemd[1]: session-25.scope: Deactivated successfully. Dec 16 12:32:41.416255 systemd-logind[1858]: Removed session 25. Dec 16 12:32:41.494269 systemd[1]: Started sshd@23-10.200.20.4:22-10.200.16.10:57824.service - OpenSSH per-connection server daemon (10.200.16.10:57824). 
Dec 16 12:32:41.680131 kubelet[3449]: E1216 12:32:41.677581 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b8547f758-vr28q" podUID="435c02b4-b196-420c-b959-5f71a5c70015" Dec 16 12:32:41.956818 sshd[5865]: Accepted publickey for core from 10.200.16.10 port 57824 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:41.959076 sshd-session[5865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:41.964221 systemd-logind[1858]: New session 26 of user core. Dec 16 12:32:41.968703 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 16 12:32:42.333903 sshd[5868]: Connection closed by 10.200.16.10 port 57824 Dec 16 12:32:42.334431 sshd-session[5865]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:42.337351 systemd-logind[1858]: Session 26 logged out. Waiting for processes to exit. Dec 16 12:32:42.337510 systemd[1]: sshd@23-10.200.20.4:22-10.200.16.10:57824.service: Deactivated successfully. Dec 16 12:32:42.339421 systemd[1]: session-26.scope: Deactivated successfully. 
Dec 16 12:32:42.344874 systemd-logind[1858]: Removed session 26. Dec 16 12:32:46.675607 kubelet[3449]: E1216 12:32:46.674842 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-mkgxz" podUID="12a67fa6-b880-41bc-a39b-0d6c266384bf" Dec 16 12:32:47.423365 systemd[1]: Started sshd@24-10.200.20.4:22-10.200.16.10:57832.service - OpenSSH per-connection server daemon (10.200.16.10:57832). Dec 16 12:32:47.922918 sshd[5881]: Accepted publickey for core from 10.200.16.10 port 57832 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:47.924494 sshd-session[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:47.930300 systemd-logind[1858]: New session 27 of user core. Dec 16 12:32:47.933690 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 16 12:32:48.313597 sshd[5884]: Connection closed by 10.200.16.10 port 57832 Dec 16 12:32:48.314185 sshd-session[5881]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:48.319623 systemd[1]: sshd@24-10.200.20.4:22-10.200.16.10:57832.service: Deactivated successfully. Dec 16 12:32:48.322767 systemd[1]: session-27.scope: Deactivated successfully. Dec 16 12:32:48.324212 systemd-logind[1858]: Session 27 logged out. Waiting for processes to exit. Dec 16 12:32:48.325469 systemd-logind[1858]: Removed session 27. 
Dec 16 12:32:49.676610 kubelet[3449]: E1216 12:32:49.675820 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-544fc6cbc8-s5srh" podUID="746d77af-37dd-4af0-98d6-e8786f6ddd62" Dec 16 12:32:52.675676 kubelet[3449]: E1216 12:32:52.675622 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-d9x7j" podUID="4dc67b6f-ad58-4f41-a48a-50245539bb0d" Dec 16 12:32:52.676650 kubelet[3449]: E1216 12:32:52.676615 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbpsm" podUID="8dc2a4ce-1cc0-4206-a57c-f0513b577cd6" Dec 16 12:32:53.402251 systemd[1]: Started sshd@25-10.200.20.4:22-10.200.16.10:46334.service - OpenSSH per-connection server daemon (10.200.16.10:46334). Dec 16 12:32:53.677815 kubelet[3449]: E1216 12:32:53.677366 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-88t7g" podUID="8aede75b-84c4-4230-866b-6bdfa406b3b1" Dec 16 12:32:53.894397 sshd[5898]: Accepted publickey for core from 10.200.16.10 port 46334 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:53.895587 sshd-session[5898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:53.900621 systemd-logind[1858]: New session 28 of user core. Dec 16 12:32:53.905685 systemd[1]: Started session-28.scope - Session 28 of User core. Dec 16 12:32:54.323116 sshd[5901]: Connection closed by 10.200.16.10 port 46334 Dec 16 12:32:54.323467 sshd-session[5898]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:54.327899 systemd-logind[1858]: Session 28 logged out. Waiting for processes to exit. Dec 16 12:32:54.328480 systemd[1]: sshd@25-10.200.20.4:22-10.200.16.10:46334.service: Deactivated successfully. 
Dec 16 12:32:54.330515 systemd[1]: session-28.scope: Deactivated successfully. Dec 16 12:32:54.334816 systemd-logind[1858]: Removed session 28. Dec 16 12:32:56.676917 kubelet[3449]: E1216 12:32:56.676838 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b8547f758-vr28q" podUID="435c02b4-b196-420c-b959-5f71a5c70015" Dec 16 12:32:59.416061 systemd[1]: Started sshd@26-10.200.20.4:22-10.200.16.10:46340.service - OpenSSH per-connection server daemon (10.200.16.10:46340). 
Dec 16 12:32:59.676704 kubelet[3449]: E1216 12:32:59.676577 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-mkgxz" podUID="12a67fa6-b880-41bc-a39b-0d6c266384bf" Dec 16 12:32:59.916810 sshd[5917]: Accepted publickey for core from 10.200.16.10 port 46340 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:59.917969 sshd-session[5917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:59.921854 systemd-logind[1858]: New session 29 of user core. Dec 16 12:32:59.928804 systemd[1]: Started session-29.scope - Session 29 of User core. Dec 16 12:33:00.319070 sshd[5922]: Connection closed by 10.200.16.10 port 46340 Dec 16 12:33:00.319926 sshd-session[5917]: pam_unix(sshd:session): session closed for user core Dec 16 12:33:00.323396 systemd[1]: sshd@26-10.200.20.4:22-10.200.16.10:46340.service: Deactivated successfully. Dec 16 12:33:00.325165 systemd[1]: session-29.scope: Deactivated successfully. Dec 16 12:33:00.327112 systemd-logind[1858]: Session 29 logged out. Waiting for processes to exit. Dec 16 12:33:00.330883 systemd-logind[1858]: Removed session 29. 
Dec 16 12:33:02.676159 kubelet[3449]: E1216 12:33:02.675406 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-544fc6cbc8-s5srh" podUID="746d77af-37dd-4af0-98d6-e8786f6ddd62"
Dec 16 12:33:05.404776 systemd[1]: Started sshd@27-10.200.20.4:22-10.200.16.10:57740.service - OpenSSH per-connection server daemon (10.200.16.10:57740).
Dec 16 12:33:05.676605 kubelet[3449]: E1216 12:33:05.676317 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-88t7g" podUID="8aede75b-84c4-4230-866b-6bdfa406b3b1"
Dec 16 12:33:05.678895 kubelet[3449]: E1216 12:33:05.678680 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbpsm" podUID="8dc2a4ce-1cc0-4206-a57c-f0513b577cd6"
Dec 16 12:33:05.860634 sshd[5962]: Accepted publickey for core from 10.200.16.10 port 57740 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:33:05.861378 sshd-session[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:33:05.866211 systemd-logind[1858]: New session 30 of user core.
Dec 16 12:33:05.872834 systemd[1]: Started session-30.scope - Session 30 of User core.
Dec 16 12:33:06.260945 sshd[5965]: Connection closed by 10.200.16.10 port 57740
Dec 16 12:33:06.261029 sshd-session[5962]: pam_unix(sshd:session): session closed for user core
Dec 16 12:33:06.264273 systemd[1]: sshd@27-10.200.20.4:22-10.200.16.10:57740.service: Deactivated successfully.
Dec 16 12:33:06.266113 systemd[1]: session-30.scope: Deactivated successfully.
Dec 16 12:33:06.267003 systemd-logind[1858]: Session 30 logged out. Waiting for processes to exit.
Dec 16 12:33:06.269046 systemd-logind[1858]: Removed session 30.
Dec 16 12:33:07.676903 kubelet[3449]: E1216 12:33:07.676855 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-d9x7j" podUID="4dc67b6f-ad58-4f41-a48a-50245539bb0d"
Dec 16 12:33:08.677175 kubelet[3449]: E1216 12:33:08.676931 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b8547f758-vr28q" podUID="435c02b4-b196-420c-b959-5f71a5c70015"
Dec 16 12:33:11.350235 systemd[1]: Started sshd@28-10.200.20.4:22-10.200.16.10:42372.service - OpenSSH per-connection server daemon (10.200.16.10:42372).
Dec 16 12:33:11.845772 sshd[5983]: Accepted publickey for core from 10.200.16.10 port 42372 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:33:11.848532 sshd-session[5983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:33:11.852634 systemd-logind[1858]: New session 31 of user core.
Dec 16 12:33:11.858917 systemd[1]: Started session-31.scope - Session 31 of User core.
Dec 16 12:33:12.256675 sshd[5986]: Connection closed by 10.200.16.10 port 42372
Dec 16 12:33:12.256481 sshd-session[5983]: pam_unix(sshd:session): session closed for user core
Dec 16 12:33:12.260105 systemd[1]: sshd@28-10.200.20.4:22-10.200.16.10:42372.service: Deactivated successfully.
Dec 16 12:33:12.262272 systemd[1]: session-31.scope: Deactivated successfully.
Dec 16 12:33:12.263363 systemd-logind[1858]: Session 31 logged out. Waiting for processes to exit.
Dec 16 12:33:12.265864 systemd-logind[1858]: Removed session 31.
Dec 16 12:33:13.676192 kubelet[3449]: E1216 12:33:13.676136 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-mkgxz" podUID="12a67fa6-b880-41bc-a39b-0d6c266384bf"
Dec 16 12:33:14.675049 kubelet[3449]: E1216 12:33:14.674832 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-544fc6cbc8-s5srh" podUID="746d77af-37dd-4af0-98d6-e8786f6ddd62"
Dec 16 12:33:16.676664 kubelet[3449]: E1216 12:33:16.676338 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fbpsm" podUID="8dc2a4ce-1cc0-4206-a57c-f0513b577cd6"
Dec 16 12:33:17.347778 systemd[1]: Started sshd@29-10.200.20.4:22-10.200.16.10:42386.service - OpenSSH per-connection server daemon (10.200.16.10:42386).
Dec 16 12:33:17.838861 sshd[6000]: Accepted publickey for core from 10.200.16.10 port 42386 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:33:17.839904 sshd-session[6000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:33:17.843830 systemd-logind[1858]: New session 32 of user core.
Dec 16 12:33:17.848694 systemd[1]: Started session-32.scope - Session 32 of User core.
Dec 16 12:33:18.258021 sshd[6003]: Connection closed by 10.200.16.10 port 42386
Dec 16 12:33:18.258704 sshd-session[6000]: pam_unix(sshd:session): session closed for user core
Dec 16 12:33:18.263428 systemd[1]: sshd@29-10.200.20.4:22-10.200.16.10:42386.service: Deactivated successfully.
Dec 16 12:33:18.267148 systemd[1]: session-32.scope: Deactivated successfully.
Dec 16 12:33:18.269526 systemd-logind[1858]: Session 32 logged out. Waiting for processes to exit.
Dec 16 12:33:18.271149 systemd-logind[1858]: Removed session 32.
Dec 16 12:33:18.675835 kubelet[3449]: E1216 12:33:18.675790 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6879cffd7b-88t7g" podUID="8aede75b-84c4-4230-866b-6bdfa406b3b1"
Dec 16 12:33:18.676460 kubelet[3449]: E1216 12:33:18.676432 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-d9x7j" podUID="4dc67b6f-ad58-4f41-a48a-50245539bb0d"
Dec 16 12:33:19.676465 containerd[1921]: time="2025-12-16T12:33:19.676424403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 16 12:33:20.048653 containerd[1921]: time="2025-12-16T12:33:20.048606671Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:33:20.052458 containerd[1921]: time="2025-12-16T12:33:20.052413477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 16 12:33:20.052517 containerd[1921]: time="2025-12-16T12:33:20.052508887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 16 12:33:20.052729 kubelet[3449]: E1216 12:33:20.052674 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 12:33:20.053349 kubelet[3449]: E1216 12:33:20.053043 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 12:33:20.053349 kubelet[3449]: E1216 12:33:20.053151 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7b8547f758-vr28q_calico-system(435c02b4-b196-420c-b959-5f71a5c70015): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:33:20.054908 containerd[1921]: time="2025-12-16T12:33:20.054873111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 16 12:33:20.388326 containerd[1921]: time="2025-12-16T12:33:20.387687236Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:33:20.392216 containerd[1921]: time="2025-12-16T12:33:20.392176444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 16 12:33:20.392521 containerd[1921]: time="2025-12-16T12:33:20.392332232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 16 12:33:20.392738 kubelet[3449]: E1216 12:33:20.392624 3449 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 12:33:20.392738 kubelet[3449]: E1216 12:33:20.392670 3449 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 12:33:20.393122 kubelet[3449]: E1216 12:33:20.392745 3449 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7b8547f758-vr28q_calico-system(435c02b4-b196-420c-b959-5f71a5c70015): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:33:20.393122 kubelet[3449]: E1216 12:33:20.392777 3449 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b8547f758-vr28q" podUID="435c02b4-b196-420c-b959-5f71a5c70015"
Dec 16 12:33:23.348684 systemd[1]: Started sshd@30-10.200.20.4:22-10.200.16.10:37656.service - OpenSSH per-connection server daemon (10.200.16.10:37656).
Dec 16 12:33:23.844019 sshd[6015]: Accepted publickey for core from 10.200.16.10 port 37656 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:33:23.845254 sshd-session[6015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:33:23.849515 systemd-logind[1858]: New session 33 of user core.
Dec 16 12:33:23.855712 systemd[1]: Started session-33.scope - Session 33 of User core.
Dec 16 12:33:24.238231 sshd[6018]: Connection closed by 10.200.16.10 port 37656
Dec 16 12:33:24.238925 sshd-session[6015]: pam_unix(sshd:session): session closed for user core
Dec 16 12:33:24.242186 systemd[1]: sshd@30-10.200.20.4:22-10.200.16.10:37656.service: Deactivated successfully.
Dec 16 12:33:24.244381 systemd[1]: session-33.scope: Deactivated successfully.
Dec 16 12:33:24.245172 systemd-logind[1858]: Session 33 logged out. Waiting for processes to exit.
Dec 16 12:33:24.247214 systemd-logind[1858]: Removed session 33.