Dec 12 17:39:54.120070 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Dec 12 17:39:54.120088 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 12 17:39:54.120094 kernel: KASLR enabled
Dec 12 17:39:54.120098 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Dec 12 17:39:54.120102 kernel: printk: legacy bootconsole [pl11] enabled
Dec 12 17:39:54.120107 kernel: efi: EFI v2.7 by EDK II
Dec 12 17:39:54.120112 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db7d598
Dec 12 17:39:54.120116 kernel: random: crng init done
Dec 12 17:39:54.120120 kernel: secureboot: Secure boot disabled
Dec 12 17:39:54.120124 kernel: ACPI: Early table checksum verification disabled
Dec 12 17:39:54.120128 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Dec 12 17:39:54.120132 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 12 17:39:54.120136 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 12 17:39:54.120140 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Dec 12 17:39:54.120146 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 12 17:39:54.120150 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 12 17:39:54.120154 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 12 17:39:54.120158 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 12 17:39:54.120162 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 12 17:39:54.120167 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 12 17:39:54.120172 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Dec 12 17:39:54.120176 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 12 17:39:54.120180 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Dec 12 17:39:54.120184 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 12 17:39:54.120188 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Dec 12 17:39:54.120192 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Dec 12 17:39:54.120196 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Dec 12 17:39:54.120201 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Dec 12 17:39:54.120205 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Dec 12 17:39:54.120209 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Dec 12 17:39:54.120214 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Dec 12 17:39:54.120218 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Dec 12 17:39:54.120222 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Dec 12 17:39:54.120226 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Dec 12 17:39:54.120230 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Dec 12 17:39:54.120234 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Dec 12 17:39:54.120238 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Dec 12 17:39:54.120243 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Dec 12 17:39:54.120247 kernel: Zone ranges:
Dec 12 17:39:54.120251 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Dec 12 17:39:54.120258 kernel: DMA32 empty
Dec 12 17:39:54.120262 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Dec 12 17:39:54.120266 kernel: Device empty
Dec 12 17:39:54.120271 kernel: Movable zone start for each node
Dec 12 17:39:54.120275 kernel: Early memory node ranges
Dec 12 17:39:54.120279 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Dec 12 17:39:54.120285 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Dec 12 17:39:54.120289 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Dec 12 17:39:54.120293 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Dec 12 17:39:54.120298 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Dec 12 17:39:54.120302 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Dec 12 17:39:54.120306 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Dec 12 17:39:54.120310 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Dec 12 17:39:54.120315 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Dec 12 17:39:54.120319 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Dec 12 17:39:54.120323 kernel: psci: probing for conduit method from ACPI.
Dec 12 17:39:54.120328 kernel: psci: PSCIv1.3 detected in firmware.
Dec 12 17:39:54.120332 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 12 17:39:54.120337 kernel: psci: MIGRATE_INFO_TYPE not supported.
Dec 12 17:39:54.120342 kernel: psci: SMC Calling Convention v1.4
Dec 12 17:39:54.120346 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Dec 12 17:39:54.120350 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Dec 12 17:39:54.120355 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 12 17:39:54.120359 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 12 17:39:54.120363 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 12 17:39:54.120368 kernel: Detected PIPT I-cache on CPU0
Dec 12 17:39:54.120372 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Dec 12 17:39:54.120377 kernel: CPU features: detected: GIC system register CPU interface
Dec 12 17:39:54.120381 kernel: CPU features: detected: Spectre-v4
Dec 12 17:39:54.120385 kernel: CPU features: detected: Spectre-BHB
Dec 12 17:39:54.120390 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 12 17:39:54.120395 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 12 17:39:54.120399 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Dec 12 17:39:54.120403 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 12 17:39:54.120408 kernel: alternatives: applying boot alternatives
Dec 12 17:39:54.120413 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:39:54.120418 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 17:39:54.120422 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 17:39:54.120427 kernel: Fallback order for Node 0: 0
Dec 12 17:39:54.120431 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Dec 12 17:39:54.120436 kernel: Policy zone: Normal
Dec 12 17:39:54.120440 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 17:39:54.120444 kernel: software IO TLB: area num 2.
Dec 12 17:39:54.120449 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB)
Dec 12 17:39:54.120453 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 12 17:39:54.120458 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 17:39:54.120463 kernel: rcu: RCU event tracing is enabled.
Dec 12 17:39:54.120467 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 12 17:39:54.120471 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 17:39:54.120476 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 17:39:54.120491 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 17:39:54.120495 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 12 17:39:54.120500 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 17:39:54.120505 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 12 17:39:54.120509 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 12 17:39:54.120514 kernel: GICv3: 960 SPIs implemented
Dec 12 17:39:54.120518 kernel: GICv3: 0 Extended SPIs implemented
Dec 12 17:39:54.120522 kernel: Root IRQ handler: gic_handle_irq
Dec 12 17:39:54.120527 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Dec 12 17:39:54.120531 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Dec 12 17:39:54.120535 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Dec 12 17:39:54.120540 kernel: ITS: No ITS available, not enabling LPIs
Dec 12 17:39:54.120544 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 17:39:54.120549 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Dec 12 17:39:54.120554 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 12 17:39:54.120558 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Dec 12 17:39:54.120563 kernel: Console: colour dummy device 80x25
Dec 12 17:39:54.120567 kernel: printk: legacy console [tty1] enabled
Dec 12 17:39:54.120572 kernel: ACPI: Core revision 20240827
Dec 12 17:39:54.120577 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Dec 12 17:39:54.120581 kernel: pid_max: default: 32768 minimum: 301
Dec 12 17:39:54.120586 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 17:39:54.120590 kernel: landlock: Up and running.
Dec 12 17:39:54.120595 kernel: SELinux: Initializing.
Dec 12 17:39:54.120600 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:39:54.120605 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:39:54.120609 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Dec 12 17:39:54.120614 kernel: Hyper-V: Host Build 10.0.26102.1141-1-0
Dec 12 17:39:54.120622 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Dec 12 17:39:54.120627 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 17:39:54.120632 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 17:39:54.120636 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 17:39:54.120641 kernel: Remapping and enabling EFI services.
Dec 12 17:39:54.120646 kernel: smp: Bringing up secondary CPUs ...
Dec 12 17:39:54.120650 kernel: Detected PIPT I-cache on CPU1
Dec 12 17:39:54.120656 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Dec 12 17:39:54.120661 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Dec 12 17:39:54.120665 kernel: smp: Brought up 1 node, 2 CPUs
Dec 12 17:39:54.120670 kernel: SMP: Total of 2 processors activated.
Dec 12 17:39:54.120675 kernel: CPU: All CPU(s) started at EL1
Dec 12 17:39:54.120680 kernel: CPU features: detected: 32-bit EL0 Support
Dec 12 17:39:54.120685 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Dec 12 17:39:54.120690 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 12 17:39:54.120695 kernel: CPU features: detected: Common not Private translations
Dec 12 17:39:54.120699 kernel: CPU features: detected: CRC32 instructions
Dec 12 17:39:54.120704 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Dec 12 17:39:54.120709 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 12 17:39:54.120714 kernel: CPU features: detected: LSE atomic instructions
Dec 12 17:39:54.120718 kernel: CPU features: detected: Privileged Access Never
Dec 12 17:39:54.120724 kernel: CPU features: detected: Speculation barrier (SB)
Dec 12 17:39:54.120728 kernel: CPU features: detected: TLB range maintenance instructions
Dec 12 17:39:54.120733 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 12 17:39:54.120738 kernel: CPU features: detected: Scalable Vector Extension
Dec 12 17:39:54.120743 kernel: alternatives: applying system-wide alternatives
Dec 12 17:39:54.120747 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Dec 12 17:39:54.120752 kernel: SVE: maximum available vector length 16 bytes per vector
Dec 12 17:39:54.120757 kernel: SVE: default vector length 16 bytes per vector
Dec 12 17:39:54.120762 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved)
Dec 12 17:39:54.120767 kernel: devtmpfs: initialized
Dec 12 17:39:54.120772 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 17:39:54.120777 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 12 17:39:54.120782 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 12 17:39:54.120786 kernel: 0 pages in range for non-PLT usage
Dec 12 17:39:54.120791 kernel: 508400 pages in range for PLT usage
Dec 12 17:39:54.120796 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 17:39:54.120800 kernel: SMBIOS 3.1.0 present.
Dec 12 17:39:54.120805 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Dec 12 17:39:54.120811 kernel: DMI: Memory slots populated: 2/2
Dec 12 17:39:54.120815 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 17:39:54.120820 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 12 17:39:54.120825 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 12 17:39:54.120830 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 12 17:39:54.120834 kernel: audit: initializing netlink subsys (disabled)
Dec 12 17:39:54.120839 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Dec 12 17:39:54.120844 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 17:39:54.120849 kernel: cpuidle: using governor menu
Dec 12 17:39:54.120854 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 12 17:39:54.120859 kernel: ASID allocator initialised with 32768 entries
Dec 12 17:39:54.120863 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 17:39:54.120868 kernel: Serial: AMBA PL011 UART driver
Dec 12 17:39:54.120873 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 17:39:54.120878 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 17:39:54.120882 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 12 17:39:54.120887 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 12 17:39:54.120892 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 17:39:54.120897 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 17:39:54.120902 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 12 17:39:54.120907 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 12 17:39:54.120911 kernel: ACPI: Added _OSI(Module Device)
Dec 12 17:39:54.120916 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 17:39:54.120921 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 17:39:54.120926 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 17:39:54.120930 kernel: ACPI: Interpreter enabled
Dec 12 17:39:54.120936 kernel: ACPI: Using GIC for interrupt routing
Dec 12 17:39:54.120940 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Dec 12 17:39:54.120945 kernel: printk: legacy console [ttyAMA0] enabled
Dec 12 17:39:54.120950 kernel: printk: legacy bootconsole [pl11] disabled
Dec 12 17:39:54.120955 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Dec 12 17:39:54.120959 kernel: ACPI: CPU0 has been hot-added
Dec 12 17:39:54.120964 kernel: ACPI: CPU1 has been hot-added
Dec 12 17:39:54.120969 kernel: iommu: Default domain type: Translated
Dec 12 17:39:54.120974 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 12 17:39:54.120978 kernel: efivars: Registered efivars operations
Dec 12 17:39:54.120984 kernel: vgaarb: loaded
Dec 12 17:39:54.120988 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 12 17:39:54.120993 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 17:39:54.120998 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 17:39:54.121002 kernel: pnp: PnP ACPI init
Dec 12 17:39:54.121007 kernel: pnp: PnP ACPI: found 0 devices
Dec 12 17:39:54.121012 kernel: NET: Registered PF_INET protocol family
Dec 12 17:39:54.121017 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 17:39:54.121021 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 17:39:54.121027 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 17:39:54.121032 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 17:39:54.121037 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 17:39:54.121041 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 17:39:54.121046 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:39:54.121051 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:39:54.121055 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 17:39:54.121060 kernel: PCI: CLS 0 bytes, default 64
Dec 12 17:39:54.121065 kernel: kvm [1]: HYP mode not available
Dec 12 17:39:54.121070 kernel: Initialise system trusted keyrings
Dec 12 17:39:54.121075 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 17:39:54.121080 kernel: Key type asymmetric registered
Dec 12 17:39:54.121084 kernel: Asymmetric key parser 'x509' registered
Dec 12 17:39:54.121089 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 12 17:39:54.121094 kernel: io scheduler mq-deadline registered
Dec 12 17:39:54.121099 kernel: io scheduler kyber registered
Dec 12 17:39:54.121103 kernel: io scheduler bfq registered
Dec 12 17:39:54.121108 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 17:39:54.121113 kernel: thunder_xcv, ver 1.0
Dec 12 17:39:54.121118 kernel: thunder_bgx, ver 1.0
Dec 12 17:39:54.121123 kernel: nicpf, ver 1.0
Dec 12 17:39:54.121127 kernel: nicvf, ver 1.0
Dec 12 17:39:54.121230 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 12 17:39:54.121280 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:39:53 UTC (1765561193)
Dec 12 17:39:54.121286 kernel: efifb: probing for efifb
Dec 12 17:39:54.121292 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 12 17:39:54.121297 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 12 17:39:54.121302 kernel: efifb: scrolling: redraw
Dec 12 17:39:54.121306 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 12 17:39:54.121311 kernel: Console: switching to colour frame buffer device 128x48
Dec 12 17:39:54.121316 kernel: fb0: EFI VGA frame buffer device
Dec 12 17:39:54.121321 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Dec 12 17:39:54.121326 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 12 17:39:54.121330 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 12 17:39:54.121336 kernel: watchdog: NMI not fully supported
Dec 12 17:39:54.121341 kernel: watchdog: Hard watchdog permanently disabled
Dec 12 17:39:54.121345 kernel: NET: Registered PF_INET6 protocol family
Dec 12 17:39:54.121350 kernel: Segment Routing with IPv6
Dec 12 17:39:54.121355 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 17:39:54.121359 kernel: NET: Registered PF_PACKET protocol family
Dec 12 17:39:54.121364 kernel: Key type dns_resolver registered
Dec 12 17:39:54.121369 kernel: registered taskstats version 1
Dec 12 17:39:54.121373 kernel: Loading compiled-in X.509 certificates
Dec 12 17:39:54.121378 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 12 17:39:54.121384 kernel: Demotion targets for Node 0: null
Dec 12 17:39:54.121388 kernel: Key type .fscrypt registered
Dec 12 17:39:54.121393 kernel: Key type fscrypt-provisioning registered
Dec 12 17:39:54.121397 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 17:39:54.121402 kernel: ima: Allocated hash algorithm: sha1
Dec 12 17:39:54.121407 kernel: ima: No architecture policies found
Dec 12 17:39:54.121412 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 12 17:39:54.121416 kernel: clk: Disabling unused clocks
Dec 12 17:39:54.121421 kernel: PM: genpd: Disabling unused power domains
Dec 12 17:39:54.121427 kernel: Warning: unable to open an initial console.
Dec 12 17:39:54.121432 kernel: Freeing unused kernel memory: 39552K
Dec 12 17:39:54.121436 kernel: Run /init as init process
Dec 12 17:39:54.121441 kernel: with arguments:
Dec 12 17:39:54.121446 kernel: /init
Dec 12 17:39:54.121450 kernel: with environment:
Dec 12 17:39:54.121455 kernel: HOME=/
Dec 12 17:39:54.121459 kernel: TERM=linux
Dec 12 17:39:54.121465 systemd[1]: Successfully made /usr/ read-only.
Dec 12 17:39:54.121472 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 17:39:54.121484 systemd[1]: Detected virtualization microsoft.
Dec 12 17:39:54.121490 systemd[1]: Detected architecture arm64.
Dec 12 17:39:54.121495 systemd[1]: Running in initrd.
Dec 12 17:39:54.121500 systemd[1]: No hostname configured, using default hostname.
Dec 12 17:39:54.121505 systemd[1]: Hostname set to <localhost>.
Dec 12 17:39:54.121510 systemd[1]: Initializing machine ID from random generator.
Dec 12 17:39:54.121516 systemd[1]: Queued start job for default target initrd.target.
Dec 12 17:39:54.121521 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:39:54.121527 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:39:54.121532 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 17:39:54.121537 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 17:39:54.121543 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 17:39:54.121548 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 17:39:54.121555 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 17:39:54.121560 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 17:39:54.121565 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:39:54.121570 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:39:54.121576 systemd[1]: Reached target paths.target - Path Units.
Dec 12 17:39:54.121581 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 17:39:54.121586 systemd[1]: Reached target swap.target - Swaps.
Dec 12 17:39:54.121591 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 17:39:54.121597 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 17:39:54.121602 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 17:39:54.121607 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 17:39:54.121612 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 17:39:54.121618 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:39:54.121623 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:39:54.121628 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:39:54.121633 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 17:39:54.121638 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 17:39:54.121644 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 17:39:54.121649 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 17:39:54.121655 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 17:39:54.121660 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 17:39:54.121665 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 17:39:54.121670 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 17:39:54.121685 systemd-journald[225]: Collecting audit messages is disabled.
Dec 12 17:39:54.121700 systemd-journald[225]: Journal started
Dec 12 17:39:54.121713 systemd-journald[225]: Runtime Journal (/run/log/journal/2b65fc29d8514bfb9d1514ee69250c0f) is 8M, max 78.3M, 70.3M free.
Dec 12 17:39:54.130928 systemd-modules-load[227]: Inserted module 'overlay'
Dec 12 17:39:54.137751 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:39:54.150492 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 17:39:54.150519 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 17:39:54.162168 kernel: Bridge firewalling registered
Dec 12 17:39:54.161565 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 17:39:54.166807 systemd-modules-load[227]: Inserted module 'br_netfilter'
Dec 12 17:39:54.172514 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 17:39:54.180498 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 17:39:54.192912 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:39:54.200472 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:39:54.211621 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 17:39:54.232918 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 17:39:54.246144 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 17:39:54.264748 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 17:39:54.274154 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:39:54.286093 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 17:39:54.291595 systemd-tmpfiles[248]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 17:39:54.293678 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 17:39:54.305875 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:39:54.318221 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 17:39:54.341030 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 17:39:54.348762 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 17:39:54.366508 dracut-cmdline[261]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:39:54.399623 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 17:39:54.425553 systemd-resolved[262]: Positive Trust Anchors:
Dec 12 17:39:54.425564 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 17:39:54.449704 kernel: SCSI subsystem initialized
Dec 12 17:39:54.425583 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 17:39:54.479942 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 17:39:54.430565 systemd-resolved[262]: Defaulting to hostname 'linux'.
Dec 12 17:39:54.431176 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 17:39:54.441737 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 17:39:54.495242 kernel: iscsi: registered transport (tcp)
Dec 12 17:39:54.507402 kernel: iscsi: registered transport (qla4xxx)
Dec 12 17:39:54.507411 kernel: QLogic iSCSI HBA Driver
Dec 12 17:39:54.519679 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 17:39:54.534667 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:39:54.539806 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 17:39:54.587737 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 17:39:54.594589 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 17:39:54.668496 kernel: raid6: neonx8 gen() 18543 MB/s
Dec 12 17:39:54.686486 kernel: raid6: neonx4 gen() 18547 MB/s
Dec 12 17:39:54.705489 kernel: raid6: neonx2 gen() 17069 MB/s
Dec 12 17:39:54.725487 kernel: raid6: neonx1 gen() 15036 MB/s
Dec 12 17:39:54.744487 kernel: raid6: int64x8 gen() 10545 MB/s
Dec 12 17:39:54.763485 kernel: raid6: int64x4 gen() 10609 MB/s
Dec 12 17:39:54.785500 kernel: raid6: int64x2 gen() 8963 MB/s
Dec 12 17:39:54.805408 kernel: raid6: int64x1 gen() 6886 MB/s
Dec 12 17:39:54.805419 kernel: raid6: using algorithm neonx4 gen() 18547 MB/s
Dec 12 17:39:54.828838 kernel: raid6: .... xor() 15145 MB/s, rmw enabled
Dec 12 17:39:54.828845 kernel: raid6: using neon recovery algorithm
Dec 12 17:39:54.839078 kernel: xor: measuring software checksum speed
Dec 12 17:39:54.839088 kernel: 8regs : 28617 MB/sec
Dec 12 17:39:54.842168 kernel: 32regs : 28814 MB/sec
Dec 12 17:39:54.845122 kernel: arm64_neon : 37631 MB/sec
Dec 12 17:39:54.848268 kernel: xor: using function: arm64_neon (37631 MB/sec)
Dec 12 17:39:54.887509 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 17:39:54.892168 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 17:39:54.903594 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 17:39:54.928164 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Dec 12 17:39:54.933963 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 17:39:54.941246 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 17:39:54.974700 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation
Dec 12 17:39:54.996643 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 17:39:55.001694 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 17:39:55.053578 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 17:39:55.065174 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 17:39:55.132509 kernel: hv_vmbus: Vmbus version:5.3
Dec 12 17:39:55.134686 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 17:39:55.137715 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:39:55.158573 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:39:55.197043 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 12 17:39:55.197071 kernel: hv_vmbus: registering driver hv_netvsc
Dec 12 17:39:55.197078 kernel: hv_vmbus: registering driver hv_storvsc
Dec 12 17:39:55.197089 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 12 17:39:55.197101 kernel: hv_vmbus: registering driver hid_hyperv
Dec 12 17:39:55.197107 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Dec 12 17:39:55.197113 kernel: scsi host1: storvsc_host_t
Dec 12 17:39:55.183878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:39:55.248033 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Dec 12 17:39:55.248161 kernel: scsi host0: storvsc_host_t
Dec 12 17:39:55.248336 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 12 17:39:55.248345 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Dec 12 17:39:55.248361 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Dec 12 17:39:55.248368 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Dec 12 17:39:55.220705 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 12 17:39:55.263007 kernel: hv_netvsc 002248bf-a1bc-0022-48bf-a1bc002248bf eth0: VF slot 1 added
Dec 12 17:39:55.263141 kernel: PTP clock support registered
Dec 12 17:39:55.263205 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 17:39:55.263341 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:39:55.281883 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:39:55.316124 kernel: hv_utils: Registering HyperV Utility Driver
Dec 12 17:39:55.316152 kernel: hv_vmbus: registering driver hv_pci
Dec 12 17:39:55.316161 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Dec 12 17:39:55.316291 kernel: hv_vmbus: registering driver hv_utils
Dec 12 17:39:55.325970 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Dec 12 17:39:55.326093 kernel: hv_utils: Heartbeat IC version 3.0
Dec 12 17:39:55.326102 kernel: hv_utils: Shutdown IC version 3.2
Dec 12 17:39:55.335650 kernel: hv_pci ff48c758-9415-4281-8dc4-699d44586208: PCI VMBus probing: Using version 0x10004
Dec 12 17:39:55.340452 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 12 17:39:55.340585 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Dec 12 17:39:55.340651 kernel: hv_utils: TimeSync IC version 4.0
Dec 12 17:39:55.349325 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Dec 12 17:39:55.192267 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#133 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Dec 12 17:39:55.201231 systemd-journald[225]: Time jumped backwards, rotating.
Dec 12 17:39:55.201266 kernel: hv_pci ff48c758-9415-4281-8dc4-699d44586208: PCI host bridge to bus 9415:00
Dec 12 17:39:55.201353 kernel: pci_bus 9415:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Dec 12 17:39:55.201424 kernel: pci_bus 9415:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 12 17:39:55.192210 systemd-resolved[262]: Clock change detected. Flushing caches.
Dec 12 17:39:55.239335 kernel: pci 9415:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Dec 12 17:39:55.239372 kernel: pci 9415:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Dec 12 17:39:55.239389 kernel: pci 9415:00:02.0: enabling Extended Tags
Dec 12 17:39:55.239398 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#140 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Dec 12 17:39:55.262228 kernel: pci 9415:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 9415:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Dec 12 17:39:55.262472 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:39:55.283954 kernel: pci_bus 9415:00: busn_res: [bus 00-ff] end is updated to 00
Dec 12 17:39:55.284063 kernel: pci 9415:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Dec 12 17:39:55.284157 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 17:39:55.290169 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 12 17:39:55.298732 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Dec 12 17:39:55.298858 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 12 17:39:55.302189 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Dec 12 17:39:55.327182 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#185 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 12 17:39:55.348202 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#93 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Dec 12 17:39:55.364106 kernel: mlx5_core 9415:00:02.0: enabling device (0000 -> 0002)
Dec 12 17:39:55.373236 kernel: mlx5_core 9415:00:02.0: PTM is not supported by PCIe
Dec 12 17:39:55.373327 kernel: mlx5_core 9415:00:02.0: firmware version: 16.30.5006
Dec 12 17:39:55.541343 kernel: hv_netvsc 002248bf-a1bc-0022-48bf-a1bc002248bf eth0: VF registering: eth1
Dec 12 17:39:55.541521 kernel: mlx5_core 9415:00:02.0 eth1: joined to eth0
Dec 12 17:39:55.547752 kernel: mlx5_core 9415:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Dec 12 17:39:55.557312 kernel: mlx5_core 9415:00:02.0 enP37909s1: renamed from eth1
Dec 12 17:39:56.235717 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Dec 12 17:39:56.400221 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Dec 12 17:39:56.485031 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Dec 12 17:39:56.577496 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Dec 12 17:39:56.584584 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Dec 12 17:39:56.596545 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 17:39:56.609252 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 17:39:56.619020 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 17:39:56.629742 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 17:39:56.639388 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 17:39:56.650833 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 17:39:56.678191 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#181 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Dec 12 17:39:56.678251 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 17:39:56.694214 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 17:39:57.705660 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#99 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Dec 12 17:39:57.720913 disk-uuid[665]: The operation has completed successfully.
Dec 12 17:39:57.725836 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 12 17:39:57.794505 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 17:39:57.794596 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 17:39:57.817896 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 12 17:39:57.837253 sh[826]: Success
Dec 12 17:39:57.904620 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 17:39:57.904658 kernel: device-mapper: uevent: version 1.0.3
Dec 12 17:39:57.913052 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 17:39:57.920206 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 12 17:39:58.469242 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 17:39:58.477852 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 12 17:39:58.492165 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 17:39:58.520182 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (844)
Dec 12 17:39:58.520206 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248
Dec 12 17:39:58.524436 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:39:59.162716 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 17:39:59.162794 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 17:39:59.242082 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 12 17:39:59.246124 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 17:39:59.254102 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 17:39:59.254698 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 17:39:59.279547 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 17:39:59.310214 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (871)
Dec 12 17:39:59.321290 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:39:59.321329 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:39:59.367316 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 17:39:59.367358 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 17:39:59.375059 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 17:39:59.386318 kernel: BTRFS info (device sda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:39:59.386499 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 17:39:59.391670 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 17:39:59.414139 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 17:39:59.437248 systemd-networkd[1011]: lo: Link UP
Dec 12 17:39:59.437257 systemd-networkd[1011]: lo: Gained carrier
Dec 12 17:39:59.437929 systemd-networkd[1011]: Enumeration completed
Dec 12 17:39:59.437999 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 17:39:59.444947 systemd-networkd[1011]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:39:59.444951 systemd-networkd[1011]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 17:39:59.445402 systemd[1]: Reached target network.target - Network.
Dec 12 17:39:59.519192 kernel: mlx5_core 9415:00:02.0 enP37909s1: Link up
Dec 12 17:39:59.548846 systemd-networkd[1011]: enP37909s1: Link UP
Dec 12 17:39:59.552151 kernel: hv_netvsc 002248bf-a1bc-0022-48bf-a1bc002248bf eth0: Data path switched to VF: enP37909s1
Dec 12 17:39:59.548906 systemd-networkd[1011]: eth0: Link UP
Dec 12 17:39:59.548970 systemd-networkd[1011]: eth0: Gained carrier
Dec 12 17:39:59.548979 systemd-networkd[1011]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:39:59.568321 systemd-networkd[1011]: enP37909s1: Gained carrier
Dec 12 17:39:59.578208 systemd-networkd[1011]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16
Dec 12 17:40:01.449306 systemd-networkd[1011]: eth0: Gained IPv6LL
Dec 12 17:40:01.965112 ignition[1014]: Ignition 2.22.0
Dec 12 17:40:01.965205 ignition[1014]: Stage: fetch-offline
Dec 12 17:40:01.969501 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 17:40:01.965316 ignition[1014]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:40:01.978142 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 12 17:40:01.965322 ignition[1014]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 12 17:40:01.965453 ignition[1014]: parsed url from cmdline: ""
Dec 12 17:40:01.965456 ignition[1014]: no config URL provided
Dec 12 17:40:01.965460 ignition[1014]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 17:40:01.965467 ignition[1014]: no config at "/usr/lib/ignition/user.ign"
Dec 12 17:40:01.965471 ignition[1014]: failed to fetch config: resource requires networking
Dec 12 17:40:01.965595 ignition[1014]: Ignition finished successfully
Dec 12 17:40:02.014268 ignition[1025]: Ignition 2.22.0
Dec 12 17:40:02.014273 ignition[1025]: Stage: fetch
Dec 12 17:40:02.014490 ignition[1025]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:40:02.014497 ignition[1025]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 12 17:40:02.014561 ignition[1025]: parsed url from cmdline: ""
Dec 12 17:40:02.014564 ignition[1025]: no config URL provided
Dec 12 17:40:02.014567 ignition[1025]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 17:40:02.014574 ignition[1025]: no config at "/usr/lib/ignition/user.ign"
Dec 12 17:40:02.014588 ignition[1025]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Dec 12 17:40:02.120527 ignition[1025]: GET result: OK
Dec 12 17:40:02.123063 ignition[1025]: config has been read from IMDS userdata
Dec 12 17:40:02.123087 ignition[1025]: parsing config with SHA512: fc133b240fa70a65c19c43ae1d0f0fab1b8a5f225def89184bee68f2e39f86317f4ae051f6af70babcf089a76f0ec07c99e2d285223f0854f745ad70216820f1
Dec 12 17:40:02.126115 unknown[1025]: fetched base config from "system"
Dec 12 17:40:02.126426 ignition[1025]: fetch: fetch complete
Dec 12 17:40:02.126120 unknown[1025]: fetched base config from "system"
Dec 12 17:40:02.126429 ignition[1025]: fetch: fetch passed
Dec 12 17:40:02.126123 unknown[1025]: fetched user config from "azure"
Dec 12 17:40:02.126469 ignition[1025]: Ignition finished successfully
Dec 12 17:40:02.128663 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 12 17:40:02.137200 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 17:40:02.178552 ignition[1031]: Ignition 2.22.0
Dec 12 17:40:02.178567 ignition[1031]: Stage: kargs
Dec 12 17:40:02.178708 ignition[1031]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:40:02.183795 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 17:40:02.178714 ignition[1031]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 12 17:40:02.189639 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 17:40:02.179223 ignition[1031]: kargs: kargs passed
Dec 12 17:40:02.179263 ignition[1031]: Ignition finished successfully
Dec 12 17:40:02.223541 ignition[1037]: Ignition 2.22.0
Dec 12 17:40:02.223555 ignition[1037]: Stage: disks
Dec 12 17:40:02.223694 ignition[1037]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:40:02.227339 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 17:40:02.223700 ignition[1037]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 12 17:40:02.233435 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 17:40:02.224171 ignition[1037]: disks: disks passed
Dec 12 17:40:02.241789 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 17:40:02.224239 ignition[1037]: Ignition finished successfully
Dec 12 17:40:02.251633 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 17:40:02.260428 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 17:40:02.269564 systemd[1]: Reached target basic.target - Basic System.
Dec 12 17:40:02.277157 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 17:40:02.424007 systemd-fsck[1046]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Dec 12 17:40:02.432189 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 17:40:02.441144 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 17:40:02.877202 kernel: EXT4-fs (sda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none.
Dec 12 17:40:02.876841 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 17:40:02.880782 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 17:40:02.921888 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 17:40:02.935602 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 17:40:02.943551 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 12 17:40:02.949430 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 17:40:02.949453 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 17:40:02.961919 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 17:40:03.007826 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1061)
Dec 12 17:40:03.007846 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:40:03.007853 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:40:02.976229 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 17:40:03.023533 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 17:40:03.023559 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 17:40:03.024615 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 17:40:03.919837 coreos-metadata[1063]: Dec 12 17:40:03.919 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 12 17:40:03.926457 coreos-metadata[1063]: Dec 12 17:40:03.926 INFO Fetch successful
Dec 12 17:40:03.926457 coreos-metadata[1063]: Dec 12 17:40:03.926 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Dec 12 17:40:03.939136 coreos-metadata[1063]: Dec 12 17:40:03.935 INFO Fetch successful
Dec 12 17:40:03.969392 coreos-metadata[1063]: Dec 12 17:40:03.969 INFO wrote hostname ci-4459.2.2-a-c1c6b7e9cf to /sysroot/etc/hostname
Dec 12 17:40:03.976631 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 12 17:40:04.529820 initrd-setup-root[1091]: cut: /sysroot/etc/passwd: No such file or directory
Dec 12 17:40:04.602979 initrd-setup-root[1098]: cut: /sysroot/etc/group: No such file or directory
Dec 12 17:40:04.643545 initrd-setup-root[1105]: cut: /sysroot/etc/shadow: No such file or directory
Dec 12 17:40:04.649883 initrd-setup-root[1112]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 12 17:40:06.426644 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 12 17:40:06.433637 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 17:40:06.450660 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 17:40:06.462704 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 17:40:06.470617 kernel: BTRFS info (device sda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:40:06.491967 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 17:40:06.502164 ignition[1181]: INFO : Ignition 2.22.0
Dec 12 17:40:06.502164 ignition[1181]: INFO : Stage: mount
Dec 12 17:40:06.502164 ignition[1181]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:40:06.502164 ignition[1181]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 12 17:40:06.502164 ignition[1181]: INFO : mount: mount passed
Dec 12 17:40:06.502164 ignition[1181]: INFO : Ignition finished successfully
Dec 12 17:40:06.502410 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 17:40:06.511112 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 17:40:06.533272 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 17:40:06.568755 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1193)
Dec 12 17:40:06.568787 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:40:06.573586 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:40:06.583639 kernel: BTRFS info (device sda6): turning on async discard
Dec 12 17:40:06.583679 kernel: BTRFS info (device sda6): enabling free space tree
Dec 12 17:40:06.584918 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 17:40:06.611593 ignition[1211]: INFO : Ignition 2.22.0 Dec 12 17:40:06.611593 ignition[1211]: INFO : Stage: files Dec 12 17:40:06.617713 ignition[1211]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:40:06.617713 ignition[1211]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 17:40:06.617713 ignition[1211]: DEBUG : files: compiled without relabeling support, skipping Dec 12 17:40:06.645597 ignition[1211]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 17:40:06.645597 ignition[1211]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 17:40:06.722961 ignition[1211]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 17:40:06.729377 ignition[1211]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 17:40:06.735402 ignition[1211]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 17:40:06.729482 unknown[1211]: wrote ssh authorized keys file for user: core Dec 12 17:40:06.917274 ignition[1211]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 12 17:40:06.926339 ignition[1211]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 12 17:40:06.946484 ignition[1211]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 17:40:07.002364 ignition[1211]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 12 17:40:07.002364 ignition[1211]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 17:40:07.019437 ignition[1211]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 17:40:07.019437 ignition[1211]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:40:07.019437 ignition[1211]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:40:07.019437 ignition[1211]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:40:07.019437 ignition[1211]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:40:07.019437 ignition[1211]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:40:07.019437 ignition[1211]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:40:07.074114 ignition[1211]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:40:07.074114 ignition[1211]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:40:07.074114 ignition[1211]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:40:07.074114 ignition[1211]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:40:07.074114 ignition[1211]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:40:07.074114 ignition[1211]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Dec 12 17:40:07.651559 ignition[1211]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 17:40:07.910844 ignition[1211]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:40:07.910844 ignition[1211]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 17:40:07.978207 ignition[1211]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:40:07.987431 ignition[1211]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:40:07.987431 ignition[1211]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 17:40:07.987431 ignition[1211]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 12 17:40:07.987431 ignition[1211]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 17:40:07.987431 ignition[1211]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:40:07.987431 ignition[1211]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:40:07.987431 ignition[1211]: INFO : files: files passed Dec 12 17:40:07.987431 ignition[1211]: INFO : Ignition finished successfully Dec 12 17:40:07.987650 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 17:40:08.000968 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 17:40:08.038627 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 17:40:08.054589 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 17:40:08.054657 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 17:40:08.088265 initrd-setup-root-after-ignition[1239]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:40:08.088265 initrd-setup-root-after-ignition[1239]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:40:08.100789 initrd-setup-root-after-ignition[1243]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:40:08.095020 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:40:08.106635 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 17:40:08.118196 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 17:40:08.157564 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 17:40:08.157657 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
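The files stage above is driven by an Ignition config supplied at provisioning time; the config itself never appears in the journal. A minimal Butane sketch (hypothetical, with a placeholder SSH key) that would produce a similar set of operations, writing the Helm tarball and update.conf, linking the kubernetes sysext, and enabling prepare-helm.service:

  # Illustrative Butane source; compile with: butane --strict example.bu > example.ign
  cat <<'EOF' > example.bu
  variant: flatcar
  version: 1.0.0
  passwd:
    users:
      - name: core
        ssh_authorized_keys:
          - ssh-ed25519 AAAA... placeholder@example   # placeholder key
  storage:
    files:
      - path: /opt/helm-v3.17.3-linux-arm64.tar.gz
        contents:
          source: https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz
      - path: /etc/flatcar/update.conf
        contents:
          inline: REBOOT_STRATEGY=reboot
    links:
      - path: /etc/extensions/kubernetes.raw
        target: /opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw
  systemd:
    units:
      - name: prepare-helm.service
        enabled: true   # the real config also supplies the unit body
  EOF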
Dec 12 17:40:08.167547 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 17:40:08.177021 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 17:40:08.185383 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 17:40:08.185895 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 17:40:08.223231 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:40:08.229701 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 17:40:08.255180 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:40:08.260476 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:40:08.269800 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 17:40:08.278324 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 17:40:08.278410 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:40:08.291052 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 17:40:08.295411 systemd[1]: Stopped target basic.target - Basic System. Dec 12 17:40:08.303900 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 17:40:08.312881 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:40:08.321442 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 17:40:08.330571 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:40:08.339818 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 17:40:08.348493 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:40:08.358454 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 17:40:08.366736 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 17:40:08.375958 systemd[1]: Stopped target swap.target - Swaps. Dec 12 17:40:08.383556 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 17:40:08.383653 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:40:08.394993 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:40:08.402785 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:40:08.411943 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 17:40:08.416347 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:40:08.421790 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 17:40:08.421874 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 17:40:08.435184 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 17:40:08.435277 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:40:08.441288 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 17:40:08.441357 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 17:40:08.452620 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 12 17:40:08.452684 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Dec 12 17:40:08.467319 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 17:40:08.476336 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 17:40:08.489790 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 17:40:08.549266 ignition[1263]: INFO : Ignition 2.22.0 Dec 12 17:40:08.549266 ignition[1263]: INFO : Stage: umount Dec 12 17:40:08.549266 ignition[1263]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:40:08.549266 ignition[1263]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 12 17:40:08.549266 ignition[1263]: INFO : umount: umount passed Dec 12 17:40:08.549266 ignition[1263]: INFO : Ignition finished successfully Dec 12 17:40:08.489898 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:40:08.499384 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 17:40:08.499456 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:40:08.528369 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 17:40:08.528434 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 17:40:08.539855 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 17:40:08.539918 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 17:40:08.549986 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 17:40:08.550061 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 17:40:08.557615 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 17:40:08.557658 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 17:40:08.564673 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 12 17:40:08.564707 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 12 17:40:08.573313 systemd[1]: Stopped target network.target - Network. Dec 12 17:40:08.582140 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 17:40:08.582178 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:40:08.590448 systemd[1]: Stopped target paths.target - Path Units. Dec 12 17:40:08.597738 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 17:40:08.597947 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:40:08.611729 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 17:40:08.620181 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 17:40:08.628289 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 17:40:08.628322 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:40:08.637587 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 17:40:08.637612 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:40:08.645923 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 17:40:08.645965 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 17:40:08.653522 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 17:40:08.653551 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 17:40:08.661989 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 17:40:08.670072 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Dec 12 17:40:08.679361 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 17:40:08.679761 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 17:40:08.679837 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 17:40:08.687120 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 17:40:08.687224 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 17:40:08.697227 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 17:40:08.697319 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 17:40:08.878521 kernel: hv_netvsc 002248bf-a1bc-0022-48bf-a1bc002248bf eth0: Data path switched from VF: enP37909s1 Dec 12 17:40:08.709890 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 12 17:40:08.710079 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 17:40:08.710194 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 17:40:08.723747 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 12 17:40:08.724557 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 17:40:08.732224 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 17:40:08.732260 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:40:08.741282 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 17:40:08.757849 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 17:40:08.757912 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:40:08.766387 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:40:08.766432 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:40:08.777432 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 17:40:08.777467 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 17:40:08.781819 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 17:40:08.781848 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:40:08.794002 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:40:08.798985 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 12 17:40:08.799032 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 12 17:40:08.822668 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 17:40:08.822822 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:40:08.830782 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 17:40:08.830812 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 17:40:08.838870 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 17:40:08.838891 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:40:08.846479 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 17:40:08.846514 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:40:08.857219 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 17:40:08.857257 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Dec 12 17:40:08.869578 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 17:40:08.869616 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:40:08.888323 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 17:40:08.906901 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 17:40:08.906954 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:40:08.923872 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 17:40:08.923911 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:40:08.933742 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 12 17:40:08.933778 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:40:08.942847 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 17:40:08.942876 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:40:08.949031 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:40:08.949061 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:40:08.964305 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 12 17:40:08.964343 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Dec 12 17:40:08.964364 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 12 17:40:08.964386 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 17:40:09.184056 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). Dec 12 17:40:08.964600 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 17:40:08.964676 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 17:40:08.973046 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 17:40:08.973103 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 17:40:08.983099 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 17:40:08.991975 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 17:40:09.036588 systemd[1]: Switching root. 
Dec 12 17:40:09.211565 systemd-journald[225]: Journal stopped Dec 12 17:40:17.062030 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 17:40:17.062049 kernel: SELinux: policy capability open_perms=1 Dec 12 17:40:17.062056 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 17:40:17.062062 kernel: SELinux: policy capability always_check_network=0 Dec 12 17:40:17.062067 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 17:40:17.062073 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 17:40:17.062079 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 17:40:17.062085 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 17:40:17.062090 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 17:40:17.062095 kernel: audit: type=1403 audit(1765561211.431:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 12 17:40:17.062102 systemd[1]: Successfully loaded SELinux policy in 315.756ms. Dec 12 17:40:17.062109 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.335ms. Dec 12 17:40:17.062116 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:40:17.062124 systemd[1]: Detected virtualization microsoft. Dec 12 17:40:17.062130 systemd[1]: Detected architecture arm64. Dec 12 17:40:17.062136 systemd[1]: Detected first boot. Dec 12 17:40:17.062143 systemd[1]: Hostname set to <ci-4459.2.2-a-c1c6b7e9cf>. Dec 12 17:40:17.062149 systemd[1]: Initializing machine ID from random generator. Dec 12 17:40:17.062155 zram_generator::config[1307]: No configuration found. Dec 12 17:40:17.062161 kernel: NET: Registered PF_VSOCK protocol family Dec 12 17:40:17.062167 systemd[1]: Populated /etc with preset unit settings. Dec 12 17:40:17.062186 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 12 17:40:17.062192 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 17:40:17.062199 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 17:40:17.062205 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 17:40:17.062211 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 17:40:17.062218 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 17:40:17.062224 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 17:40:17.062230 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 17:40:17.062236 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 17:40:17.062243 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 17:40:17.062250 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 17:40:17.062255 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 17:40:17.062262 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:40:17.062269 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
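The SELinux policy capabilities the kernel prints above remain queryable after boot through selinuxfs; an illustrative check, assuming selinuxfs is mounted at its usual location:

  # Each capability from the log is exposed as a file containing 0 or 1
  cat /sys/fs/selinux/policy_capabilities/network_peer_controls
  getenforce   # overall enforcement mode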
Dec 12 17:40:17.062275 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 17:40:17.062281 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 17:40:17.062287 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 17:40:17.062294 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:40:17.062300 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 12 17:40:17.062308 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:40:17.062314 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:40:17.062320 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 17:40:17.062326 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 17:40:17.062332 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 17:40:17.062338 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 17:40:17.062345 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:40:17.062351 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:40:17.062357 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:40:17.062364 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:40:17.062369 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 17:40:17.062376 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 17:40:17.062383 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 17:40:17.062389 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:40:17.062396 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:40:17.062402 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:40:17.062408 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 17:40:17.062415 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 17:40:17.062421 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 17:40:17.062428 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 17:40:17.062434 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 17:40:17.062440 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 17:40:17.062447 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 17:40:17.062453 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 17:40:17.062460 systemd[1]: Reached target machines.target - Containers. Dec 12 17:40:17.062466 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 17:40:17.062472 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:40:17.062479 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:40:17.062485 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Dec 12 17:40:17.062491 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:40:17.062498 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:40:17.062504 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:40:17.062510 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:40:17.062516 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:40:17.062522 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 17:40:17.062529 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 17:40:17.062536 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 17:40:17.062542 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 17:40:17.062548 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 17:40:17.062555 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:40:17.062561 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:40:17.062567 kernel: fuse: init (API version 7.41) Dec 12 17:40:17.062573 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:40:17.062580 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:40:17.062587 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 17:40:17.062605 systemd-journald[1387]: Collecting audit messages is disabled. Dec 12 17:40:17.062619 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 17:40:17.062626 systemd-journald[1387]: Journal started Dec 12 17:40:17.062641 systemd-journald[1387]: Runtime Journal (/run/log/journal/32f128e13a5c4b02ad31bff89c78ef0a) is 8M, max 78.3M, 70.3M free. Dec 12 17:40:16.160518 systemd[1]: Queued start job for default target multi-user.target. Dec 12 17:40:16.167725 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 12 17:40:16.168111 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 17:40:16.168409 systemd[1]: systemd-journald.service: Consumed 2.603s CPU time. Dec 12 17:40:17.069945 kernel: loop: module loaded Dec 12 17:40:17.089419 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:40:17.098697 systemd[1]: verity-setup.service: Deactivated successfully. Dec 12 17:40:17.098728 systemd[1]: Stopped verity-setup.service. Dec 12 17:40:17.109184 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:40:17.114449 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 17:40:17.119202 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 17:40:17.124652 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 17:40:17.129266 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 17:40:17.134509 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 17:40:17.139729 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
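The runtime journal sized above lives under /run/log/journal until the flush to persistent storage recorded later in this boot. Once the system is up, these same messages can be replayed per unit; an illustrative invocation:

  # Replay this boot's journald messages
  journalctl -b -u systemd-journald --no-pager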
Dec 12 17:40:17.148409 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 17:40:17.150195 kernel: ACPI: bus type drm_connector registered Dec 12 17:40:17.155129 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:40:17.161018 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:40:17.161148 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 17:40:17.166784 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:40:17.166903 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:40:17.172343 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:40:17.172458 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:40:17.177518 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:40:17.177639 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:40:17.183593 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:40:17.183710 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:40:17.188925 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:40:17.189047 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:40:17.194968 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:40:17.200637 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:40:17.206381 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 17:40:17.211873 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 17:40:17.224673 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:40:17.232281 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 17:40:17.244256 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 17:40:17.249488 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 17:40:17.249515 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:40:17.255355 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 17:40:17.261437 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 17:40:17.266563 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:40:17.275849 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 17:40:17.281800 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 17:40:17.287364 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:40:17.289295 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 17:40:17.294539 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:40:17.295230 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
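The modprobe@*.service entries above (configfs, dm_mod, drm, efi_pstore, fuse, loop) are instances of a single systemd template whose instance suffix names the kernel module to load; the adjacent "fuse: init" and "loop: module loaded" kernel lines show the loads taking effect. Illustrative inspection on a running host:

  systemctl cat modprobe@loop.service         # show the shared template
  sudo systemctl start modprobe@fuse.service  # the suffix selects the module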
Dec 12 17:40:17.302291 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 17:40:17.314977 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 17:40:17.323324 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:40:17.328957 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 17:40:17.334944 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 17:40:17.342195 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 17:40:17.349052 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 17:40:17.354874 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 17:40:17.361575 systemd-journald[1387]: Time spent on flushing to /var/log/journal/32f128e13a5c4b02ad31bff89c78ef0a is 45.401ms for 941 entries. Dec 12 17:40:17.361575 systemd-journald[1387]: System Journal (/var/log/journal/32f128e13a5c4b02ad31bff89c78ef0a) is 11.8M, max 2.6G, 2.6G free. Dec 12 17:40:17.483441 systemd-journald[1387]: Received client request to flush runtime journal. Dec 12 17:40:17.483480 systemd-journald[1387]: /var/log/journal/32f128e13a5c4b02ad31bff89c78ef0a/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Dec 12 17:40:17.483496 systemd-journald[1387]: Rotating system journal. Dec 12 17:40:17.483512 kernel: loop0: detected capacity change from 0 to 211168 Dec 12 17:40:17.465279 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:40:17.484442 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 17:40:17.484969 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 17:40:17.491453 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 17:40:17.543161 systemd-tmpfiles[1447]: ACLs are not supported, ignoring. Dec 12 17:40:17.543170 systemd-tmpfiles[1447]: ACLs are not supported, ignoring. Dec 12 17:40:17.545205 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 17:40:17.545633 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:40:17.552525 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 17:40:17.617212 kernel: loop1: detected capacity change from 0 to 119840 Dec 12 17:40:17.779587 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 17:40:17.785642 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:40:17.800519 systemd-tmpfiles[1466]: ACLs are not supported, ignoring. Dec 12 17:40:17.800533 systemd-tmpfiles[1466]: ACLs are not supported, ignoring. Dec 12 17:40:17.803279 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:40:18.272940 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 17:40:18.280085 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:40:18.308590 systemd-udevd[1470]: Using default interface naming scheme 'v255'. 
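The systemd-tmpfiles warnings above come from tmpfiles.d entries declaring the same path more than once; the later declaration for a path is ignored, as are ACL attributes on this filesystem. A hypothetical fragment in the format those files use (Type Path Mode User Group Age), with made-up user/group values:

  # Both lines target /var/lib/nfs/sm; the second would draw the
  # "Duplicate line" warning seen above
  d /var/lib/nfs/sm 0700 rpcuser rpcuser -
  d /var/lib/nfs/sm 0755 root    root    -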
Dec 12 17:40:18.459195 kernel: loop2: detected capacity change from 0 to 100632 Dec 12 17:40:18.690874 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:40:18.702847 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:40:18.732285 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 12 17:40:18.852193 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 17:40:18.852265 kernel: hv_vmbus: registering driver hv_balloon Dec 12 17:40:18.853329 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 17:40:18.864760 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 12 17:40:18.864820 kernel: hv_balloon: Memory hot add disabled on ARM64 Dec 12 17:40:18.900187 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#142 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 12 17:40:18.929202 kernel: hv_vmbus: registering driver hyperv_fb Dec 12 17:40:18.941110 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 12 17:40:18.941166 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 12 17:40:18.939297 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:40:18.949892 kernel: Console: switching to colour dummy device 80x25 Dec 12 17:40:18.952322 kernel: Console: switching to colour frame buffer device 128x48 Dec 12 17:40:18.956670 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 17:40:19.055603 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:40:19.055802 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:40:19.062377 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 17:40:19.063260 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:40:19.139375 systemd-networkd[1491]: lo: Link UP Dec 12 17:40:19.139386 systemd-networkd[1491]: lo: Gained carrier Dec 12 17:40:19.140304 systemd-networkd[1491]: Enumeration completed Dec 12 17:40:19.140424 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:40:19.140523 systemd-networkd[1491]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:40:19.140531 systemd-networkd[1491]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:40:19.147321 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 17:40:19.154980 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 17:40:19.165193 kernel: MACsec IEEE 802.1AE Dec 12 17:40:19.205193 kernel: mlx5_core 9415:00:02.0 enP37909s1: Link up Dec 12 17:40:19.227261 kernel: hv_netvsc 002248bf-a1bc-0022-48bf-a1bc002248bf eth0: Data path switched to VF: enP37909s1 Dec 12 17:40:19.227312 systemd-networkd[1491]: enP37909s1: Link UP Dec 12 17:40:19.227957 systemd-networkd[1491]: eth0: Link UP Dec 12 17:40:19.227965 systemd-networkd[1491]: eth0: Gained carrier Dec 12 17:40:19.227978 systemd-networkd[1491]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
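eth0 is matched here by Flatcar's lowest-priority catch-all network unit, and the warning notes that the match is by wildcard interface name; the DHCPv4 lease that follows comes from that unit. A sketch of what such a file contains (illustrative; the shipped zz-default.network may carry further options):

  [Match]
  Name=*

  [Network]
  DHCP=yes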
Dec 12 17:40:19.232546 systemd-networkd[1491]: enP37909s1: Gained carrier Dec 12 17:40:19.238382 systemd-networkd[1491]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 12 17:40:19.278805 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 17:40:19.295474 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 12 17:40:19.308495 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 17:40:19.380225 kernel: loop3: detected capacity change from 0 to 27936 Dec 12 17:40:19.398361 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 17:40:19.867261 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:40:20.194196 kernel: loop4: detected capacity change from 0 to 211168 Dec 12 17:40:20.216259 kernel: loop5: detected capacity change from 0 to 119840 Dec 12 17:40:20.232191 kernel: loop6: detected capacity change from 0 to 100632 Dec 12 17:40:20.245192 kernel: loop7: detected capacity change from 0 to 27936 Dec 12 17:40:20.259092 (sd-merge)[1619]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Dec 12 17:40:20.259481 (sd-merge)[1619]: Merged extensions into '/usr'. Dec 12 17:40:20.262486 systemd[1]: Reload requested from client PID 1445 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 17:40:20.262582 systemd[1]: Reloading... Dec 12 17:40:20.313197 zram_generator::config[1648]: No configuration found. Dec 12 17:40:20.503837 systemd[1]: Reloading finished in 240 ms. Dec 12 17:40:20.519200 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 17:40:20.535021 systemd[1]: Starting ensure-sysext.service... Dec 12 17:40:20.541683 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:40:20.554436 systemd[1]: Reload requested from client PID 1703 ('systemctl') (unit ensure-sysext.service)... Dec 12 17:40:20.554526 systemd[1]: Reloading... Dec 12 17:40:20.605243 zram_generator::config[1741]: No configuration found. Dec 12 17:40:20.624529 systemd-tmpfiles[1704]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 17:40:20.624554 systemd-tmpfiles[1704]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 17:40:20.625287 systemd-tmpfiles[1704]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 17:40:20.625429 systemd-tmpfiles[1704]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 12 17:40:20.625843 systemd-tmpfiles[1704]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 12 17:40:20.625978 systemd-tmpfiles[1704]: ACLs are not supported, ignoring. Dec 12 17:40:20.626006 systemd-tmpfiles[1704]: ACLs are not supported, ignoring. Dec 12 17:40:20.664263 systemd-tmpfiles[1704]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:40:20.664379 systemd-tmpfiles[1704]: Skipping /boot Dec 12 17:40:20.669321 systemd-tmpfiles[1704]: Detected autofs mount point /boot during canonicalization of boot. 
Dec 12 17:40:20.669408 systemd-tmpfiles[1704]: Skipping /boot Dec 12 17:40:20.713295 systemd-networkd[1491]: eth0: Gained IPv6LL Dec 12 17:40:20.751128 systemd[1]: Reloading finished in 196 ms. Dec 12 17:40:20.766186 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 17:40:20.780012 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:40:20.792872 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:40:20.800892 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 17:40:20.807346 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 17:40:20.815624 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:40:20.822733 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 17:40:20.835655 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:40:20.836720 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:40:20.851118 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:40:20.858790 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:40:20.868417 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:40:20.875567 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:40:20.875663 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:40:20.875770 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 17:40:20.885404 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:40:20.885545 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:40:20.891391 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:40:20.891513 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:40:20.896404 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:40:20.896508 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:40:20.900972 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:40:20.901086 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:40:20.909947 systemd[1]: Finished ensure-sysext.service. Dec 12 17:40:20.918218 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 17:40:20.924456 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:40:20.924506 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:40:20.959201 systemd-resolved[1796]: Positive Trust Anchors: Dec 12 17:40:20.959709 systemd-resolved[1796]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:40:20.959733 systemd-resolved[1796]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:40:20.963349 systemd-resolved[1796]: Using system hostname 'ci-4459.2.2-a-c1c6b7e9cf'. Dec 12 17:40:20.964596 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:40:20.970529 systemd[1]: Reached target network.target - Network. Dec 12 17:40:20.974576 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:40:20.979318 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:40:21.002186 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 17:40:21.159042 augenrules[1829]: No rules Dec 12 17:40:21.160094 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:40:21.160293 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:40:22.406283 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 17:40:22.411744 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 17:40:28.591201 ldconfig[1440]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 17:40:28.605716 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 17:40:28.612116 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 17:40:28.628956 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 17:40:28.634348 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:40:28.638960 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 17:40:28.644163 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 17:40:28.649580 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 17:40:28.654045 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 17:40:28.659229 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 17:40:28.664129 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 17:40:28.664160 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:40:28.667863 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:40:28.708116 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 17:40:28.714565 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 17:40:28.719741 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
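The positive trust anchor listed above is the root zone's KSK-2017 DS record, which systemd-resolved ships built in. Site-local anchors can be supplied in the same resource-record form via dnssec-trust-anchors.d; a hypothetical override file reusing the record from the log:

  # /etc/dnssec-trust-anchors.d/root.positive (illustrative path and name)
  . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d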
Dec 12 17:40:28.725139 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 17:40:28.730738 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 17:40:28.736761 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 17:40:28.741257 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 17:40:28.746832 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 17:40:28.751417 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:40:28.755432 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:40:28.759441 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:40:28.759464 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:40:28.761203 systemd[1]: Starting chronyd.service - NTP client/server... Dec 12 17:40:28.773263 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 17:40:28.780686 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 17:40:28.785739 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 17:40:28.793793 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 17:40:28.802064 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 17:40:28.816319 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 17:40:28.820840 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 17:40:28.822301 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Dec 12 17:40:28.827964 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Dec 12 17:40:28.828745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:40:28.835287 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 17:40:28.840426 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:40:28.846280 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 17:40:28.853294 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 17:40:28.853564 jq[1849]: false Dec 12 17:40:28.859285 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 17:40:28.866323 extend-filesystems[1850]: Found /dev/sda6 Dec 12 17:40:28.872742 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 17:40:28.878030 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 17:40:28.879457 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 17:40:28.880091 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 17:40:28.888306 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 17:40:28.894870 KVP[1851]: KVP starting; pid is:1851 Dec 12 17:40:28.898487 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Dec 12 17:40:28.900627 chronyd[1841]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 12 17:40:28.903978 jq[1870]: true Dec 12 17:40:28.904342 KVP[1851]: KVP LIC Version: 3.1 Dec 12 17:40:28.907186 kernel: hv_utils: KVP IC version 4.0 Dec 12 17:40:28.908457 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 17:40:28.908609 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 17:40:28.911297 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 17:40:28.911431 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 17:40:28.920290 extend-filesystems[1850]: Found /dev/sda9 Dec 12 17:40:28.929503 extend-filesystems[1850]: Checking size of /dev/sda9 Dec 12 17:40:28.942520 jq[1877]: true Dec 12 17:40:28.964682 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 17:40:28.976236 chronyd[1841]: Timezone right/UTC failed leap second check, ignoring Dec 12 17:40:28.976361 chronyd[1841]: Loaded seccomp filter (level 2) Dec 12 17:40:28.976504 systemd[1]: Started chronyd.service - NTP client/server. Dec 12 17:40:29.012639 systemd-logind[1864]: New seat seat0. Dec 12 17:40:29.013260 systemd-logind[1864]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 12 17:40:29.013396 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 17:40:29.020235 update_engine[1868]: I20251212 17:40:29.020150 1868 main.cc:92] Flatcar Update Engine starting Dec 12 17:40:29.030351 extend-filesystems[1850]: Old size kept for /dev/sda9 Dec 12 17:40:29.030138 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 17:40:29.030350 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 17:40:29.070352 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 17:40:29.074385 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 17:40:29.080472 (ntainerd)[1929]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 12 17:40:29.083039 tar[1875]: linux-arm64/LICENSE Dec 12 17:40:29.083281 tar[1875]: linux-arm64/helm Dec 12 17:40:29.084383 sshd_keygen[1867]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:40:29.095750 bash[1903]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:40:29.097672 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 17:40:29.105874 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 12 17:40:29.129500 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:40:29.139366 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:40:29.148309 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 12 17:40:29.186127 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 17:40:29.186300 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:40:29.197338 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 17:40:29.224460 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 12 17:40:29.246586 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:40:29.264648 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Dec 12 17:40:29.270325 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 12 17:40:29.277310 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 17:40:29.366170 dbus-daemon[1844]: [system] SELinux support is enabled Dec 12 17:40:29.367258 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 17:40:29.377902 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 17:40:29.378162 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 17:40:29.378881 dbus-daemon[1844]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 12 17:40:29.385540 update_engine[1868]: I20251212 17:40:29.385399 1868 update_check_scheduler.cc:74] Next update check in 7m34s Dec 12 17:40:29.387738 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 17:40:29.387764 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 17:40:29.394335 systemd[1]: Started update-engine.service - Update Engine. Dec 12 17:40:29.402372 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 17:40:29.459224 tar[1875]: linux-arm64/README.md Dec 12 17:40:29.473233 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 17:40:29.480707 coreos-metadata[1843]: Dec 12 17:40:29.480 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 12 17:40:29.483699 coreos-metadata[1843]: Dec 12 17:40:29.483 INFO Fetch successful Dec 12 17:40:29.483847 coreos-metadata[1843]: Dec 12 17:40:29.483 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 12 17:40:29.488049 coreos-metadata[1843]: Dec 12 17:40:29.488 INFO Fetch successful Dec 12 17:40:29.488559 coreos-metadata[1843]: Dec 12 17:40:29.488 INFO Fetching http://168.63.129.16/machine/99409c4c-ce26-4d59-92b0-75e17326dffb/98c053f4%2Dd082%2D4564%2D900f%2Da44f01da9de9.%5Fci%2D4459.2.2%2Da%2Dc1c6b7e9cf?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 12 17:40:29.490000 coreos-metadata[1843]: Dec 12 17:40:29.489 INFO Fetch successful Dec 12 17:40:29.490118 coreos-metadata[1843]: Dec 12 17:40:29.490 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 12 17:40:29.499869 coreos-metadata[1843]: Dec 12 17:40:29.499 INFO Fetch successful Dec 12 17:40:29.528891 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 17:40:29.534152 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 17:40:29.634635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
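update_engine and locksmithd both read /etc/flatcar/update.conf, the file the Ignition files stage wrote earlier; locksmithd's startup line below reports the resulting reboot strategy. A minimal sketch of that file, assuming the stock stable channel:

  # /etc/flatcar/update.conf (sketch)
  # GROUP is the release channel; assumed here
  GROUP=stable
  # matches strategy="reboot" in the locksmithd line below
  REBOOT_STRATEGY=reboot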
Dec 12 17:40:29.640272 (kubelet)[2025]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:40:29.699492 locksmithd[2011]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 17:40:29.972014 kubelet[2025]: E1212 17:40:29.971966 2025 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:40:29.974189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:40:29.974295 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:40:29.974560 systemd[1]: kubelet.service: Consumed 543ms CPU time, 258.4M memory peak. Dec 12 17:40:30.566202 containerd[1929]: time="2025-12-12T17:40:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 17:40:30.567044 containerd[1929]: time="2025-12-12T17:40:30.566851332Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 17:40:30.575202 containerd[1929]: time="2025-12-12T17:40:30.574547212Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.496µs" Dec 12 17:40:30.575202 containerd[1929]: time="2025-12-12T17:40:30.574573452Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 17:40:30.575202 containerd[1929]: time="2025-12-12T17:40:30.574587068Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 17:40:30.575202 containerd[1929]: time="2025-12-12T17:40:30.574706860Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 17:40:30.575202 containerd[1929]: time="2025-12-12T17:40:30.574718388Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 17:40:30.575202 containerd[1929]: time="2025-12-12T17:40:30.574734308Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:40:30.575202 containerd[1929]: time="2025-12-12T17:40:30.574774044Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:40:30.575202 containerd[1929]: time="2025-12-12T17:40:30.574780956Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:40:30.575202 containerd[1929]: time="2025-12-12T17:40:30.574930396Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:40:30.575202 containerd[1929]: time="2025-12-12T17:40:30.574942540Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:40:30.575202 containerd[1929]: time="2025-12-12T17:40:30.574949708Z" level=info 
msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:40:30.575202 containerd[1929]: time="2025-12-12T17:40:30.574955556Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 17:40:30.575395 containerd[1929]: time="2025-12-12T17:40:30.575016412Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 17:40:30.575395 containerd[1929]: time="2025-12-12T17:40:30.575158700Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:40:30.575460 containerd[1929]: time="2025-12-12T17:40:30.575442828Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:40:30.575526 containerd[1929]: time="2025-12-12T17:40:30.575512900Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 17:40:30.575601 containerd[1929]: time="2025-12-12T17:40:30.575590428Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 17:40:30.575844 containerd[1929]: time="2025-12-12T17:40:30.575827940Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 17:40:30.575970 containerd[1929]: time="2025-12-12T17:40:30.575955468Z" level=info msg="metadata content store policy set" policy=shared Dec 12 17:40:30.591732 containerd[1929]: time="2025-12-12T17:40:30.591711524Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 17:40:30.591850 containerd[1929]: time="2025-12-12T17:40:30.591835420Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 17:40:30.592017 containerd[1929]: time="2025-12-12T17:40:30.591991988Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 17:40:30.592017 containerd[1929]: time="2025-12-12T17:40:30.592017828Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 17:40:30.592075 containerd[1929]: time="2025-12-12T17:40:30.592027700Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 17:40:30.592075 containerd[1929]: time="2025-12-12T17:40:30.592034740Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 17:40:30.592075 containerd[1929]: time="2025-12-12T17:40:30.592042948Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 17:40:30.592075 containerd[1929]: time="2025-12-12T17:40:30.592054092Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 17:40:30.592075 containerd[1929]: time="2025-12-12T17:40:30.592061412Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 17:40:30.592075 containerd[1929]: time="2025-12-12T17:40:30.592067652Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 17:40:30.592075 containerd[1929]: 
time="2025-12-12T17:40:30.592076660Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 17:40:30.592153 containerd[1929]: time="2025-12-12T17:40:30.592084988Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 17:40:30.592222 containerd[1929]: time="2025-12-12T17:40:30.592205108Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 17:40:30.592301 containerd[1929]: time="2025-12-12T17:40:30.592224988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 17:40:30.592301 containerd[1929]: time="2025-12-12T17:40:30.592235116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 17:40:30.592301 containerd[1929]: time="2025-12-12T17:40:30.592242028Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 17:40:30.592301 containerd[1929]: time="2025-12-12T17:40:30.592252828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 17:40:30.592301 containerd[1929]: time="2025-12-12T17:40:30.592259748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 17:40:30.592301 containerd[1929]: time="2025-12-12T17:40:30.592267212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 17:40:30.592301 containerd[1929]: time="2025-12-12T17:40:30.592273364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 17:40:30.592301 containerd[1929]: time="2025-12-12T17:40:30.592287044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 17:40:30.592301 containerd[1929]: time="2025-12-12T17:40:30.592293740Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 17:40:30.592301 containerd[1929]: time="2025-12-12T17:40:30.592300036Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 17:40:30.592539 containerd[1929]: time="2025-12-12T17:40:30.592340708Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 17:40:30.592539 containerd[1929]: time="2025-12-12T17:40:30.592351724Z" level=info msg="Start snapshots syncer" Dec 12 17:40:30.592539 containerd[1929]: time="2025-12-12T17:40:30.592369340Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 17:40:30.592591 containerd[1929]: time="2025-12-12T17:40:30.592527820Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 17:40:30.592591 containerd[1929]: time="2025-12-12T17:40:30.592565172Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 17:40:30.592742 containerd[1929]: time="2025-12-12T17:40:30.592597692Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 17:40:30.592742 containerd[1929]: time="2025-12-12T17:40:30.592688668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 17:40:30.592742 containerd[1929]: time="2025-12-12T17:40:30.592703644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 17:40:30.592742 containerd[1929]: time="2025-12-12T17:40:30.592714228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 17:40:30.592742 containerd[1929]: time="2025-12-12T17:40:30.592720180Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 17:40:30.592742 containerd[1929]: time="2025-12-12T17:40:30.592727388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 17:40:30.592742 containerd[1929]: time="2025-12-12T17:40:30.592733684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 17:40:30.592742 containerd[1929]: time="2025-12-12T17:40:30.592740436Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 17:40:30.592915 containerd[1929]: time="2025-12-12T17:40:30.592758140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 17:40:30.592915 containerd[1929]: 
time="2025-12-12T17:40:30.592768452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 17:40:30.592915 containerd[1929]: time="2025-12-12T17:40:30.592775516Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:40:30.592915 containerd[1929]: time="2025-12-12T17:40:30.592799948Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:40:30.592915 containerd[1929]: time="2025-12-12T17:40:30.592808596Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:40:30.592915 containerd[1929]: time="2025-12-12T17:40:30.592813860Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:40:30.592915 containerd[1929]: time="2025-12-12T17:40:30.592822076Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:40:30.592915 containerd[1929]: time="2025-12-12T17:40:30.592826556Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:40:30.592915 containerd[1929]: time="2025-12-12T17:40:30.592831900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:40:30.592915 containerd[1929]: time="2025-12-12T17:40:30.592837932Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:40:30.592915 containerd[1929]: time="2025-12-12T17:40:30.592851452Z" level=info msg="runtime interface created" Dec 12 17:40:30.592915 containerd[1929]: time="2025-12-12T17:40:30.592855500Z" level=info msg="created NRI interface" Dec 12 17:40:30.592915 containerd[1929]: time="2025-12-12T17:40:30.592862508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 17:40:30.592915 containerd[1929]: time="2025-12-12T17:40:30.592870388Z" level=info msg="Connect containerd service" Dec 12 17:40:30.592915 containerd[1929]: time="2025-12-12T17:40:30.592886124Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:40:30.593515 containerd[1929]: time="2025-12-12T17:40:30.593486932Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:40:31.178492 containerd[1929]: time="2025-12-12T17:40:31.178418644Z" level=info msg="Start subscribing containerd event" Dec 12 17:40:31.178790 containerd[1929]: time="2025-12-12T17:40:31.178479260Z" level=info msg="Start recovering state" Dec 12 17:40:31.178790 containerd[1929]: time="2025-12-12T17:40:31.178570436Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 12 17:40:31.178790 containerd[1929]: time="2025-12-12T17:40:31.178734580Z" level=info msg="Start event monitor" Dec 12 17:40:31.178790 containerd[1929]: time="2025-12-12T17:40:31.178748460Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:40:31.178790 containerd[1929]: time="2025-12-12T17:40:31.178757220Z" level=info msg="Start streaming server" Dec 12 17:40:31.178790 containerd[1929]: time="2025-12-12T17:40:31.178763668Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:40:31.178790 containerd[1929]: time="2025-12-12T17:40:31.178769196Z" level=info msg="runtime interface starting up..." Dec 12 17:40:31.178790 containerd[1929]: time="2025-12-12T17:40:31.178773652Z" level=info msg="starting plugins..." Dec 12 17:40:31.179079 containerd[1929]: time="2025-12-12T17:40:31.178928628Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:40:31.179079 containerd[1929]: time="2025-12-12T17:40:31.178764676Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 17:40:31.179079 containerd[1929]: time="2025-12-12T17:40:31.179063996Z" level=info msg="containerd successfully booted in 0.615020s" Dec 12 17:40:31.179164 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 17:40:31.185761 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 17:40:31.195237 systemd[1]: Startup finished in 1.730s (kernel) + 17.633s (initrd) + 20.078s (userspace) = 39.442s. Dec 12 17:40:31.693654 login[2007]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:40:31.694948 login[2008]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:40:31.700604 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 17:40:31.701482 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 17:40:31.707066 systemd-logind[1864]: New session 1 of user core. Dec 12 17:40:31.709958 systemd-logind[1864]: New session 2 of user core. Dec 12 17:40:31.717545 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:40:31.719619 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 17:40:31.726662 (systemd)[2061]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:40:31.728288 systemd-logind[1864]: New session c1 of user core. Dec 12 17:40:31.880755 systemd[2061]: Queued start job for default target default.target. Dec 12 17:40:31.888873 systemd[2061]: Created slice app.slice - User Application Slice. Dec 12 17:40:31.888894 systemd[2061]: Reached target paths.target - Paths. Dec 12 17:40:31.888987 systemd[2061]: Reached target timers.target - Timers. Dec 12 17:40:31.890079 systemd[2061]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:40:31.897209 systemd[2061]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 17:40:31.897335 systemd[2061]: Reached target sockets.target - Sockets. Dec 12 17:40:31.897412 systemd[2061]: Reached target basic.target - Basic System. Dec 12 17:40:31.897557 systemd[2061]: Reached target default.target - Main User Target. Dec 12 17:40:31.897638 systemd[2061]: Startup finished in 165ms. Dec 12 17:40:31.897729 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 17:40:31.903272 systemd[1]: Started session-1.scope - Session 1 of User core. 
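The containerd error above ("no network config found in /etc/cni/net.d") is expected on a first boot before any CNI plugin has installed a network config; the CRI plugin's conf syncer picks one up once it appears. A minimal bridge configuration of the kind that directory would hold, with a hypothetical file name, network name, and subnet (this file is not part of the log):

    /etc/cni/net.d/10-bridge.conflist (hypothetical):
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }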
Dec 12 17:40:31.903750 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 17:40:32.542441 waagent[1999]: 2025-12-12T17:40:32.542364Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 12 17:40:32.547112 waagent[1999]: 2025-12-12T17:40:32.547070Z INFO Daemon Daemon OS: flatcar 4459.2.2 Dec 12 17:40:32.550800 waagent[1999]: 2025-12-12T17:40:32.550770Z INFO Daemon Daemon Python: 3.11.13 Dec 12 17:40:32.554526 waagent[1999]: 2025-12-12T17:40:32.554477Z INFO Daemon Daemon Run daemon Dec 12 17:40:32.557940 waagent[1999]: 2025-12-12T17:40:32.557857Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.2' Dec 12 17:40:32.564857 waagent[1999]: 2025-12-12T17:40:32.564826Z INFO Daemon Daemon Using waagent for provisioning Dec 12 17:40:32.569408 waagent[1999]: 2025-12-12T17:40:32.569376Z INFO Daemon Daemon Activate resource disk Dec 12 17:40:32.573338 waagent[1999]: 2025-12-12T17:40:32.573310Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 12 17:40:32.581983 waagent[1999]: 2025-12-12T17:40:32.581946Z INFO Daemon Daemon Found device: None Dec 12 17:40:32.585749 waagent[1999]: 2025-12-12T17:40:32.585718Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 12 17:40:32.593344 waagent[1999]: 2025-12-12T17:40:32.593314Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 12 17:40:32.603136 waagent[1999]: 2025-12-12T17:40:32.603097Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 12 17:40:32.607679 waagent[1999]: 2025-12-12T17:40:32.607647Z INFO Daemon Daemon Running default provisioning handler Dec 12 17:40:32.617206 waagent[1999]: 2025-12-12T17:40:32.617147Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Dec 12 17:40:32.628063 waagent[1999]: 2025-12-12T17:40:32.628023Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 12 17:40:32.635502 waagent[1999]: 2025-12-12T17:40:32.635471Z INFO Daemon Daemon cloud-init is enabled: False Dec 12 17:40:32.639478 waagent[1999]: 2025-12-12T17:40:32.639453Z INFO Daemon Daemon Copying ovf-env.xml Dec 12 17:40:32.728366 waagent[1999]: 2025-12-12T17:40:32.728237Z INFO Daemon Daemon Successfully mounted dvd Dec 12 17:40:32.773878 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 12 17:40:32.776179 waagent[1999]: 2025-12-12T17:40:32.776130Z INFO Daemon Daemon Detect protocol endpoint Dec 12 17:40:32.780567 waagent[1999]: 2025-12-12T17:40:32.780534Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 12 17:40:32.785219 waagent[1999]: 2025-12-12T17:40:32.785192Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 12 17:40:32.790677 waagent[1999]: 2025-12-12T17:40:32.790653Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 12 17:40:32.795776 waagent[1999]: 2025-12-12T17:40:32.795720Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 12 17:40:32.800333 waagent[1999]: 2025-12-12T17:40:32.800307Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 12 17:40:32.888823 waagent[1999]: 2025-12-12T17:40:32.888781Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 12 17:40:32.893958 waagent[1999]: 2025-12-12T17:40:32.893935Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 12 17:40:32.898211 waagent[1999]: 2025-12-12T17:40:32.898186Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 12 17:40:33.111259 waagent[1999]: 2025-12-12T17:40:33.110427Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 12 17:40:33.116167 waagent[1999]: 2025-12-12T17:40:33.116131Z INFO Daemon Daemon Forcing an update of the goal state. Dec 12 17:40:33.123891 waagent[1999]: 2025-12-12T17:40:33.123854Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 12 17:40:33.175695 waagent[1999]: 2025-12-12T17:40:33.175661Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Dec 12 17:40:33.180408 waagent[1999]: 2025-12-12T17:40:33.180374Z INFO Daemon Dec 12 17:40:33.182811 waagent[1999]: 2025-12-12T17:40:33.182780Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 7c0346ff-f9fe-41fc-a247-49def4996d9c eTag: 14835736591340270404 source: Fabric] Dec 12 17:40:33.192158 waagent[1999]: 2025-12-12T17:40:33.192125Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 12 17:40:33.197780 waagent[1999]: 2025-12-12T17:40:33.197749Z INFO Daemon Dec 12 17:40:33.200206 waagent[1999]: 2025-12-12T17:40:33.200179Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 12 17:40:33.210743 waagent[1999]: 2025-12-12T17:40:33.210713Z INFO Daemon Daemon Downloading artifacts profile blob Dec 12 17:40:33.343928 waagent[1999]: 2025-12-12T17:40:33.343872Z INFO Daemon Downloaded certificate {'thumbprint': '1B85F6BC87B0E83856CF099FD2CAF67B4D5C0C7C', 'hasPrivateKey': True} Dec 12 17:40:33.352118 waagent[1999]: 2025-12-12T17:40:33.352079Z INFO Daemon Fetch goal state completed Dec 12 17:40:33.395314 waagent[1999]: 2025-12-12T17:40:33.395231Z INFO Daemon Daemon Starting provisioning Dec 12 17:40:33.399288 waagent[1999]: 2025-12-12T17:40:33.399252Z INFO Daemon Daemon Handle ovf-env.xml. Dec 12 17:40:33.403215 waagent[1999]: 2025-12-12T17:40:33.403188Z INFO Daemon Daemon Set hostname [ci-4459.2.2-a-c1c6b7e9cf] Dec 12 17:40:33.448670 waagent[1999]: 2025-12-12T17:40:33.448632Z INFO Daemon Daemon Publish hostname [ci-4459.2.2-a-c1c6b7e9cf] Dec 12 17:40:33.453719 waagent[1999]: 2025-12-12T17:40:33.453685Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 12 17:40:33.458449 waagent[1999]: 2025-12-12T17:40:33.458418Z INFO Daemon Daemon Primary interface is [eth0] Dec 12 17:40:33.467729 systemd-networkd[1491]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:40:33.467947 systemd-networkd[1491]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
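systemd-networkd notes above that the catch-all zz-default.network unit matched eth0 "based on potentially unpredictable interface name". A dedicated unit pinned to a stable attribute avoids that caveat; a minimal sketch (the file name is hypothetical, and the MAC is the one this log reports for eth0 later on):

    /etc/systemd/network/10-eth0.network (hypothetical):
    [Match]
    MACAddress=00:22:48:bf:a1:bc

    [Network]
    DHCP=yes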
Dec 12 17:40:33.467996 systemd-networkd[1491]: eth0: DHCP lease lost Dec 12 17:40:33.468678 waagent[1999]: 2025-12-12T17:40:33.468640Z INFO Daemon Daemon Create user account if not exists Dec 12 17:40:33.473265 waagent[1999]: 2025-12-12T17:40:33.473232Z INFO Daemon Daemon User core already exists, skip useradd Dec 12 17:40:33.477559 waagent[1999]: 2025-12-12T17:40:33.477519Z INFO Daemon Daemon Configure sudoer Dec 12 17:40:33.484746 waagent[1999]: 2025-12-12T17:40:33.484705Z INFO Daemon Daemon Configure sshd Dec 12 17:40:33.492208 systemd-networkd[1491]: eth0: DHCPv4 address 10.200.20.14/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 12 17:40:33.493421 waagent[1999]: 2025-12-12T17:40:33.493380Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 12 17:40:33.503539 waagent[1999]: 2025-12-12T17:40:33.503508Z INFO Daemon Daemon Deploy ssh public key. Dec 12 17:40:34.589825 waagent[1999]: 2025-12-12T17:40:34.589771Z INFO Daemon Daemon Provisioning complete Dec 12 17:40:34.604264 waagent[1999]: 2025-12-12T17:40:34.604228Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 12 17:40:34.609865 waagent[1999]: 2025-12-12T17:40:34.609827Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 12 17:40:34.617694 waagent[1999]: 2025-12-12T17:40:34.617666Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 12 17:40:34.717266 waagent[2112]: 2025-12-12T17:40:34.717212Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 12 17:40:34.718216 waagent[2112]: 2025-12-12T17:40:34.717629Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.2 Dec 12 17:40:34.718216 waagent[2112]: 2025-12-12T17:40:34.717684Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 12 17:40:34.718216 waagent[2112]: 2025-12-12T17:40:34.717718Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Dec 12 17:40:34.789752 waagent[2112]: 2025-12-12T17:40:34.789696Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 12 17:40:34.789883 waagent[2112]: 2025-12-12T17:40:34.789855Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 12 17:40:34.789922 waagent[2112]: 2025-12-12T17:40:34.789904Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 12 17:40:34.795377 waagent[2112]: 2025-12-12T17:40:34.795332Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 12 17:40:34.800226 waagent[2112]: 2025-12-12T17:40:34.800193Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Dec 12 17:40:34.800578 waagent[2112]: 2025-12-12T17:40:34.800546Z INFO ExtHandler Dec 12 17:40:34.800629 waagent[2112]: 2025-12-12T17:40:34.800613Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 84b3fb7c-7913-4f14-9781-b33dee5b6a22 eTag: 14835736591340270404 source: Fabric] Dec 12 17:40:34.800850 waagent[2112]: 2025-12-12T17:40:34.800823Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 12 17:40:34.801277 waagent[2112]: 2025-12-12T17:40:34.801246Z INFO ExtHandler Dec 12 17:40:34.801317 waagent[2112]: 2025-12-12T17:40:34.801301Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 12 17:40:34.804476 waagent[2112]: 2025-12-12T17:40:34.804451Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 12 17:40:34.855164 waagent[2112]: 2025-12-12T17:40:34.855066Z INFO ExtHandler Downloaded certificate {'thumbprint': '1B85F6BC87B0E83856CF099FD2CAF67B4D5C0C7C', 'hasPrivateKey': True} Dec 12 17:40:34.855492 waagent[2112]: 2025-12-12T17:40:34.855454Z INFO ExtHandler Fetch goal state completed Dec 12 17:40:34.868241 waagent[2112]: 2025-12-12T17:40:34.868196Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Dec 12 17:40:34.871402 waagent[2112]: 2025-12-12T17:40:34.871359Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2112 Dec 12 17:40:34.871499 waagent[2112]: 2025-12-12T17:40:34.871474Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 12 17:40:34.871735 waagent[2112]: 2025-12-12T17:40:34.871707Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 12 17:40:34.872802 waagent[2112]: 2025-12-12T17:40:34.872767Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] Dec 12 17:40:34.873120 waagent[2112]: 2025-12-12T17:40:34.873089Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 12 17:40:34.873273 waagent[2112]: 2025-12-12T17:40:34.873245Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 12 17:40:34.873694 waagent[2112]: 2025-12-12T17:40:34.873664Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 12 17:40:34.950056 waagent[2112]: 2025-12-12T17:40:34.949759Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 12 17:40:34.950056 waagent[2112]: 2025-12-12T17:40:34.949893Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 12 17:40:34.953987 waagent[2112]: 2025-12-12T17:40:34.953968Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 12 17:40:34.958360 systemd[1]: Reload requested from client PID 2127 ('systemctl') (unit waagent.service)... Dec 12 17:40:34.958372 systemd[1]: Reloading... Dec 12 17:40:35.015193 zram_generator::config[2163]: No configuration found. Dec 12 17:40:35.170952 systemd[1]: Reloading finished in 212 ms. Dec 12 17:40:35.188198 waagent[2112]: 2025-12-12T17:40:35.187188Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 12 17:40:35.188198 waagent[2112]: 2025-12-12T17:40:35.187316Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 12 17:40:35.678457 waagent[2112]: 2025-12-12T17:40:35.678380Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 12 17:40:35.678716 waagent[2112]: 2025-12-12T17:40:35.678683Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 12 17:40:35.679344 waagent[2112]: 2025-12-12T17:40:35.679301Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 12 17:40:35.679665 waagent[2112]: 2025-12-12T17:40:35.679579Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 12 17:40:35.680410 waagent[2112]: 2025-12-12T17:40:35.679827Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 12 17:40:35.680410 waagent[2112]: 2025-12-12T17:40:35.679897Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 12 17:40:35.680410 waagent[2112]: 2025-12-12T17:40:35.680050Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 12 17:40:35.680410 waagent[2112]: 2025-12-12T17:40:35.680170Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 12 17:40:35.680410 waagent[2112]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 12 17:40:35.680410 waagent[2112]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 12 17:40:35.680410 waagent[2112]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 12 17:40:35.680410 waagent[2112]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 12 17:40:35.680410 waagent[2112]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 12 17:40:35.680410 waagent[2112]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 12 17:40:35.680677 waagent[2112]: 2025-12-12T17:40:35.680641Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 12 17:40:35.680732 waagent[2112]: 2025-12-12T17:40:35.680688Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 12 17:40:35.681074 waagent[2112]: 2025-12-12T17:40:35.681042Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 12 17:40:35.681114 waagent[2112]: 2025-12-12T17:40:35.681082Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 12 17:40:35.681277 waagent[2112]: 2025-12-12T17:40:35.681249Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 12 17:40:35.681628 waagent[2112]: 2025-12-12T17:40:35.681603Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 12 17:40:35.681725 waagent[2112]: 2025-12-12T17:40:35.681706Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 12 17:40:35.681881 waagent[2112]: 2025-12-12T17:40:35.681851Z INFO EnvHandler ExtHandler Configure routes Dec 12 17:40:35.685078 waagent[2112]: 2025-12-12T17:40:35.685041Z INFO EnvHandler ExtHandler Gateway:None Dec 12 17:40:35.685621 waagent[2112]: 2025-12-12T17:40:35.685590Z INFO EnvHandler ExtHandler Routes:None Dec 12 17:40:35.687366 waagent[2112]: 2025-12-12T17:40:35.687330Z INFO ExtHandler ExtHandler Dec 12 17:40:35.687413 waagent[2112]: 2025-12-12T17:40:35.687392Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: c0351e08-e9e0-458e-acbd-1df60d75f5b4 correlation 0e733a95-686a-46d1-b489-d0d449efd05d created: 2025-12-12T17:39:02.659660Z] Dec 12 17:40:35.687660 waagent[2112]: 2025-12-12T17:40:35.687628Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
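The AutoUpdate flags reported as False above and the 6-second goal state period are driven by the agent's configuration file. A hedged sketch of the corresponding /etc/waagent.conf lines: the first two keys are standard WALinuxAgent options matching the logged values, while Extensions.GoalStatePeriod is an assumption about which key controls the polling interval:

    AutoUpdate.Enabled=n
    AutoUpdate.UpdateToLatestVersion=n
    Extensions.GoalStatePeriod=6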
Dec 12 17:40:35.688040 waagent[2112]: 2025-12-12T17:40:35.688014Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Dec 12 17:40:35.718143 waagent[2112]: 2025-12-12T17:40:35.717801Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 12 17:40:35.718143 waagent[2112]: Try `iptables -h' or 'iptables --help' for more information.) Dec 12 17:40:35.718143 waagent[2112]: 2025-12-12T17:40:35.718079Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 75635F3F-9D4A-447E-86E5-4BE9171576FC;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 12 17:40:35.820983 waagent[2112]: 2025-12-12T17:40:35.820939Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 12 17:40:35.820983 waagent[2112]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 12 17:40:35.820983 waagent[2112]: pkts bytes target prot opt in out source destination Dec 12 17:40:35.820983 waagent[2112]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 12 17:40:35.820983 waagent[2112]: pkts bytes target prot opt in out source destination Dec 12 17:40:35.820983 waagent[2112]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 12 17:40:35.820983 waagent[2112]: pkts bytes target prot opt in out source destination Dec 12 17:40:35.820983 waagent[2112]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 12 17:40:35.820983 waagent[2112]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 12 17:40:35.820983 waagent[2112]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 12 17:40:35.823996 waagent[2112]: 2025-12-12T17:40:35.823965Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 12 17:40:35.823996 waagent[2112]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 12 17:40:35.823996 waagent[2112]: pkts bytes target prot opt in out source destination Dec 12 17:40:35.823996 waagent[2112]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 12 17:40:35.823996 waagent[2112]: pkts bytes target prot opt in out source destination Dec 12 17:40:35.823996 waagent[2112]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 12 17:40:35.823996 waagent[2112]: pkts bytes target prot opt in out source destination Dec 12 17:40:35.823996 waagent[2112]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 12 17:40:35.823996 waagent[2112]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 12 17:40:35.823996 waagent[2112]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 12 17:40:35.824387 waagent[2112]: 2025-12-12T17:40:35.824364Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 12 17:40:35.825261 waagent[2112]: 2025-12-12T17:40:35.825234Z INFO MonitorHandler ExtHandler Network interfaces: Dec 12 17:40:35.825261 waagent[2112]: Executing ['ip', '-a', '-o', 'link']: Dec 12 17:40:35.825261 waagent[2112]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 12 17:40:35.825261 waagent[2112]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bf:a1:bc brd ff:ff:ff:ff:ff:ff Dec 12 17:40:35.825261 waagent[2112]: 3: enP37909s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ 
link/ether 00:22:48:bf:a1:bc brd ff:ff:ff:ff:ff:ff\ altname enP37909p0s2 Dec 12 17:40:35.825261 waagent[2112]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 12 17:40:35.825261 waagent[2112]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 12 17:40:35.825261 waagent[2112]: 2: eth0 inet 10.200.20.14/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 12 17:40:35.825261 waagent[2112]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 12 17:40:35.825261 waagent[2112]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 12 17:40:35.825261 waagent[2112]: 2: eth0 inet6 fe80::222:48ff:febf:a1bc/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 12 17:40:40.208255 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 17:40:40.209456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:40:40.308856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:40:40.318504 (kubelet)[2262]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:40:40.456760 kubelet[2262]: E1212 17:40:40.456718 2262 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:40:40.459396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:40:40.459503 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:40:40.459761 systemd[1]: kubelet.service: Consumed 107ms CPU time, 105.3M memory peak. Dec 12 17:40:49.129187 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:40:49.131366 systemd[1]: Started sshd@0-10.200.20.14:22-10.200.16.10:53090.service - OpenSSH per-connection server daemon (10.200.16.10:53090). Dec 12 17:40:49.898456 sshd[2270]: Accepted publickey for core from 10.200.16.10 port 53090 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4 Dec 12 17:40:49.899494 sshd-session[2270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:40:49.902951 systemd-logind[1864]: New session 3 of user core. Dec 12 17:40:49.914434 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 17:40:50.345512 systemd[1]: Started sshd@1-10.200.20.14:22-10.200.16.10:33506.service - OpenSSH per-connection server daemon (10.200.16.10:33506). Dec 12 17:40:50.708157 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 17:40:50.710021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:40:50.835893 sshd[2276]: Accepted publickey for core from 10.200.16.10 port 33506 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4 Dec 12 17:40:50.837004 sshd-session[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:40:50.844616 systemd-logind[1864]: New session 4 of user core. Dec 12 17:40:50.850282 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 17:40:51.028323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
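The earlier "Failed to get firewall packets" warning comes from the agent combining a list and a zero operation in a single invocation: the nf_tables build of iptables rejects the numeric flag when `-L OUTPUT` is mixed with `--zero OUTPUT -nxv`. Splitting it into two calls expresses the same intent (a sketch, not the agent's actual fix):

    iptables -w -t security -L OUTPUT -nxv   # list the chain with packet/byte counters
    iptables -w -t security -Z OUTPUT        # then zero the counters for that chain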
Dec 12 17:40:51.030833 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:40:51.053410 kubelet[2288]: E1212 17:40:51.053371 2288 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:40:51.055363 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:40:51.055540 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:40:51.057259 systemd[1]: kubelet.service: Consumed 99ms CPU time, 104.8M memory peak. Dec 12 17:40:51.184586 sshd[2282]: Connection closed by 10.200.16.10 port 33506 Dec 12 17:40:51.185206 sshd-session[2276]: pam_unix(sshd:session): session closed for user core Dec 12 17:40:51.187753 systemd[1]: sshd@1-10.200.20.14:22-10.200.16.10:33506.service: Deactivated successfully. Dec 12 17:40:51.189014 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 17:40:51.189744 systemd-logind[1864]: Session 4 logged out. Waiting for processes to exit. Dec 12 17:40:51.190796 systemd-logind[1864]: Removed session 4. Dec 12 17:40:51.270324 systemd[1]: Started sshd@2-10.200.20.14:22-10.200.16.10:33516.service - OpenSSH per-connection server daemon (10.200.16.10:33516). Dec 12 17:40:51.757837 sshd[2300]: Accepted publickey for core from 10.200.16.10 port 33516 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4 Dec 12 17:40:51.758890 sshd-session[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:40:51.762232 systemd-logind[1864]: New session 5 of user core. Dec 12 17:40:51.773280 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 17:40:52.104667 sshd[2303]: Connection closed by 10.200.16.10 port 33516 Dec 12 17:40:52.105051 sshd-session[2300]: pam_unix(sshd:session): session closed for user core Dec 12 17:40:52.107899 systemd[1]: sshd@2-10.200.20.14:22-10.200.16.10:33516.service: Deactivated successfully. Dec 12 17:40:52.109072 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 17:40:52.110232 systemd-logind[1864]: Session 5 logged out. Waiting for processes to exit. Dec 12 17:40:52.111128 systemd-logind[1864]: Removed session 5. Dec 12 17:40:52.195096 systemd[1]: Started sshd@3-10.200.20.14:22-10.200.16.10:33526.service - OpenSSH per-connection server daemon (10.200.16.10:33526). Dec 12 17:40:52.688013 sshd[2309]: Accepted publickey for core from 10.200.16.10 port 33526 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4 Dec 12 17:40:52.688984 sshd-session[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:40:52.692370 systemd-logind[1864]: New session 6 of user core. Dec 12 17:40:52.699511 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 17:40:52.769283 chronyd[1841]: Selected source PHC0 Dec 12 17:40:53.041400 sshd[2312]: Connection closed by 10.200.16.10 port 33526 Dec 12 17:40:53.041879 sshd-session[2309]: pam_unix(sshd:session): session closed for user core Dec 12 17:40:53.044714 systemd[1]: sshd@3-10.200.20.14:22-10.200.16.10:33526.service: Deactivated successfully. Dec 12 17:40:53.045906 systemd[1]: session-6.scope: Deactivated successfully. 
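chronyd's "Selected source PHC0" entry above means it is now synchronizing to a PTP hardware clock, which Hyper-V/Azure guests expose as a /dev/ptp device. Such a source comes from a refclock directive in chrony.conf; a representative sketch using chrony's documented PHC refclock syntax (device path and polling options illustrative):

    refclock PHC /dev/ptp0 poll 3 dpoll -2 offset 0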
Dec 12 17:40:53.046823 systemd-logind[1864]: Session 6 logged out. Waiting for processes to exit. Dec 12 17:40:53.047777 systemd-logind[1864]: Removed session 6. Dec 12 17:40:53.128230 systemd[1]: Started sshd@4-10.200.20.14:22-10.200.16.10:33540.service - OpenSSH per-connection server daemon (10.200.16.10:33540). Dec 12 17:40:53.628921 sshd[2318]: Accepted publickey for core from 10.200.16.10 port 33540 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4 Dec 12 17:40:53.630007 sshd-session[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:40:53.633444 systemd-logind[1864]: New session 7 of user core. Dec 12 17:40:53.640453 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 17:40:54.196211 sudo[2322]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 17:40:54.196432 sudo[2322]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:40:54.244431 sudo[2322]: pam_unix(sudo:session): session closed for user root Dec 12 17:40:54.320621 sshd[2321]: Connection closed by 10.200.16.10 port 33540 Dec 12 17:40:54.321159 sshd-session[2318]: pam_unix(sshd:session): session closed for user core Dec 12 17:40:54.324099 systemd[1]: sshd@4-10.200.20.14:22-10.200.16.10:33540.service: Deactivated successfully. Dec 12 17:40:54.325613 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 17:40:54.326252 systemd-logind[1864]: Session 7 logged out. Waiting for processes to exit. Dec 12 17:40:54.327352 systemd-logind[1864]: Removed session 7. Dec 12 17:40:54.407289 systemd[1]: Started sshd@5-10.200.20.14:22-10.200.16.10:33548.service - OpenSSH per-connection server daemon (10.200.16.10:33548). Dec 12 17:40:54.901873 sshd[2328]: Accepted publickey for core from 10.200.16.10 port 33548 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4 Dec 12 17:40:54.902938 sshd-session[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:40:54.906269 systemd-logind[1864]: New session 8 of user core. Dec 12 17:40:54.917276 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 17:40:55.176162 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 17:40:55.176404 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:40:55.183103 sudo[2333]: pam_unix(sudo:session): session closed for user root Dec 12 17:40:55.186497 sudo[2332]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 17:40:55.186683 sudo[2332]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:40:55.193006 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:40:55.225180 augenrules[2355]: No rules Dec 12 17:40:55.226148 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:40:55.226320 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:40:55.229347 sudo[2332]: pam_unix(sudo:session): session closed for user root Dec 12 17:40:55.307633 sshd[2331]: Connection closed by 10.200.16.10 port 33548 Dec 12 17:40:55.308195 sshd-session[2328]: pam_unix(sshd:session): session closed for user core Dec 12 17:40:55.310547 systemd-logind[1864]: Session 8 logged out. Waiting for processes to exit. Dec 12 17:40:55.311673 systemd[1]: sshd@5-10.200.20.14:22-10.200.16.10:33548.service: Deactivated successfully. 
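The recurring kubelet crash loop in this log (restart counters 1 through 3, roughly every ten seconds) has a single cause: /var/lib/kubelet/config.yaml does not exist yet, because nothing like `kubeadm init` or `kubeadm join` has generated it. For illustration only, a minimal hand-written KubeletConfiguration of the kind that file contains; this is a hypothetical sketch, not what kubeadm would emit:

    /var/lib/kubelet/config.yaml (hypothetical):
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock

Once a real config exists, the scheduled restart job would start kubelet successfully instead of exiting with status 1.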
Dec 12 17:40:55.313109 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 17:40:55.315056 systemd-logind[1864]: Removed session 8. Dec 12 17:40:55.396378 systemd[1]: Started sshd@6-10.200.20.14:22-10.200.16.10:33564.service - OpenSSH per-connection server daemon (10.200.16.10:33564). Dec 12 17:40:55.890474 sshd[2364]: Accepted publickey for core from 10.200.16.10 port 33564 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4 Dec 12 17:40:55.891542 sshd-session[2364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:40:55.895094 systemd-logind[1864]: New session 9 of user core. Dec 12 17:40:55.903302 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 17:40:56.165597 sudo[2368]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 17:40:56.166316 sudo[2368]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:40:58.981869 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 17:40:58.992541 (dockerd)[2386]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 17:41:00.423481 dockerd[2386]: time="2025-12-12T17:41:00.423436005Z" level=info msg="Starting up" Dec 12 17:41:00.424367 dockerd[2386]: time="2025-12-12T17:41:00.424349157Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 17:41:00.432482 dockerd[2386]: time="2025-12-12T17:41:00.432408101Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 17:41:00.494536 dockerd[2386]: time="2025-12-12T17:41:00.494477581Z" level=info msg="Loading containers: start." Dec 12 17:41:00.542419 kernel: Initializing XFRM netlink socket Dec 12 17:41:00.979766 systemd-networkd[1491]: docker0: Link UP Dec 12 17:41:01.001751 dockerd[2386]: time="2025-12-12T17:41:01.001712897Z" level=info msg="Loading containers: done." Dec 12 17:41:01.019267 dockerd[2386]: time="2025-12-12T17:41:01.019235660Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 17:41:01.019381 dockerd[2386]: time="2025-12-12T17:41:01.019293731Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 17:41:01.019381 dockerd[2386]: time="2025-12-12T17:41:01.019364076Z" level=info msg="Initializing buildkit" Dec 12 17:41:01.063831 dockerd[2386]: time="2025-12-12T17:41:01.063791131Z" level=info msg="Completed buildkit initialization" Dec 12 17:41:01.067733 dockerd[2386]: time="2025-12-12T17:41:01.067630527Z" level=info msg="Daemon has completed initialization" Dec 12 17:41:01.067733 dockerd[2386]: time="2025-12-12T17:41:01.067670817Z" level=info msg="API listen on /run/docker.sock" Dec 12 17:41:01.069125 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 17:41:01.070013 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 12 17:41:01.071702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:41:01.205090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
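dockerd's overlay2 warning above means image diffs fall back to a slower userspace walk because the kernel's overlayfs was built with redirect_dir enabled. The effective overlayfs settings can be inspected through its module parameters (a sketch; these are standard sysfs locations):

    cat /sys/module/overlay/parameters/redirect_dir
    cat /sys/module/overlay/parameters/metacopy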
Dec 12 17:41:01.207663 (kubelet)[2597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:41:01.273107 kubelet[2597]: E1212 17:41:01.273020 2597 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:41:01.275511 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:41:01.275710 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:41:01.276135 systemd[1]: kubelet.service: Consumed 106ms CPU time, 105.4M memory peak. Dec 12 17:41:01.912712 containerd[1929]: time="2025-12-12T17:41:01.912633870Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 12 17:41:02.779968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1133345294.mount: Deactivated successfully. Dec 12 17:41:04.181214 containerd[1929]: time="2025-12-12T17:41:04.181023840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:04.184008 containerd[1929]: time="2025-12-12T17:41:04.183980349Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387281" Dec 12 17:41:04.188357 containerd[1929]: time="2025-12-12T17:41:04.188329564Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:04.193211 containerd[1929]: time="2025-12-12T17:41:04.193188799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:04.194001 containerd[1929]: time="2025-12-12T17:41:04.193975306Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 2.28130794s" Dec 12 17:41:04.194019 containerd[1929]: time="2025-12-12T17:41:04.194009067Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Dec 12 17:41:04.195274 containerd[1929]: time="2025-12-12T17:41:04.195248496Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 12 17:41:05.655876 containerd[1929]: time="2025-12-12T17:41:05.655820656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:05.659280 containerd[1929]: time="2025-12-12T17:41:05.659247473Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553081" Dec 12 17:41:05.663142 containerd[1929]: time="2025-12-12T17:41:05.663101612Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:05.667386 containerd[1929]: time="2025-12-12T17:41:05.667348329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:05.668209 containerd[1929]: time="2025-12-12T17:41:05.667949535Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.472675703s" Dec 12 17:41:05.668209 containerd[1929]: time="2025-12-12T17:41:05.667978656Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Dec 12 17:41:05.668411 containerd[1929]: time="2025-12-12T17:41:05.668394178Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 12 17:41:06.964637 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 12 17:41:06.981021 containerd[1929]: time="2025-12-12T17:41:06.980975661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:06.983818 containerd[1929]: time="2025-12-12T17:41:06.983652812Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298067" Dec 12 17:41:06.988902 containerd[1929]: time="2025-12-12T17:41:06.988873768Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:06.993513 containerd[1929]: time="2025-12-12T17:41:06.993480757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:06.994127 containerd[1929]: time="2025-12-12T17:41:06.994101020Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.325682729s" Dec 12 17:41:06.994227 containerd[1929]: time="2025-12-12T17:41:06.994214526Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Dec 12 17:41:06.994816 containerd[1929]: time="2025-12-12T17:41:06.994797516Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 12 17:41:08.522350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3115897211.mount: Deactivated successfully. 
Dec 12 17:41:08.792005 containerd[1929]: time="2025-12-12T17:41:08.791502348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:08.794127 containerd[1929]: time="2025-12-12T17:41:08.794102655Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258673" Dec 12 17:41:08.797223 containerd[1929]: time="2025-12-12T17:41:08.797198638Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:08.800862 containerd[1929]: time="2025-12-12T17:41:08.800839163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:08.801198 containerd[1929]: time="2025-12-12T17:41:08.801055376Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.80616869s" Dec 12 17:41:08.801198 containerd[1929]: time="2025-12-12T17:41:08.801089865Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Dec 12 17:41:08.801773 containerd[1929]: time="2025-12-12T17:41:08.801592094Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 12 17:41:09.549283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2225075737.mount: Deactivated successfully. 
Dec 12 17:41:10.588703 containerd[1929]: time="2025-12-12T17:41:10.588433126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:10.593259 containerd[1929]: time="2025-12-12T17:41:10.593225592Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Dec 12 17:41:10.597211 containerd[1929]: time="2025-12-12T17:41:10.596583182Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:10.600664 containerd[1929]: time="2025-12-12T17:41:10.600630469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:10.601338 containerd[1929]: time="2025-12-12T17:41:10.601311638Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.799697072s" Dec 12 17:41:10.601338 containerd[1929]: time="2025-12-12T17:41:10.601339127Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Dec 12 17:41:10.602257 containerd[1929]: time="2025-12-12T17:41:10.602239070Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 17:41:11.279645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount808343337.mount: Deactivated successfully. Dec 12 17:41:11.280622 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 12 17:41:11.282351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
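Worked example: each "Pulled image ... in ..." entry above reports the image size in bytes and the wall time for the pull, so dividing the two gives the effective pull rate. The figures below are copied from the log (the pause and etcd pulls that follow can be added the same way once they complete).

```python
# (size_bytes, seconds) pairs taken verbatim from the pull messages above.
pulls = {
    "kube-apiserver:v1.33.7":          (27383880, 2.28130794),
    "kube-controller-manager:v1.33.7": (25137562, 1.472675703),
    "kube-scheduler:v1.33.7":          (19882566, 1.325682729),
    "kube-proxy:v1.33.7":              (28257692, 1.80616869),
    "coredns:v1.12.0":                 (19148915, 1.799697072),
}

for image, (size_bytes, seconds) in pulls.items():
    mib_per_s = size_bytes / seconds / (1024 * 1024)
    print(f"{image:35s} {mib_per_s:6.1f} MiB/s")
```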
Dec 12 17:41:11.298776 containerd[1929]: time="2025-12-12T17:41:11.298736292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:41:11.302420 containerd[1929]: time="2025-12-12T17:41:11.302394361Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Dec 12 17:41:11.309162 containerd[1929]: time="2025-12-12T17:41:11.309113893Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:41:11.319719 containerd[1929]: time="2025-12-12T17:41:11.319647258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:41:11.324498 containerd[1929]: time="2025-12-12T17:41:11.323754683Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 721.420739ms" Dec 12 17:41:11.324498 containerd[1929]: time="2025-12-12T17:41:11.323783235Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 12 17:41:11.325384 containerd[1929]: time="2025-12-12T17:41:11.325360652Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 12 17:41:11.381989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:41:11.384560 (kubelet)[2747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:41:11.513185 kubelet[2747]: E1212 17:41:11.513123 2747 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:41:11.515346 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:41:11.515548 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:41:11.516063 systemd[1]: kubelet.service: Consumed 104ms CPU time, 106.3M memory peak. Dec 12 17:41:13.073351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3256971949.mount: Deactivated successfully. Dec 12 17:41:14.762325 update_engine[1868]: I20251212 17:41:14.762204 1868 update_attempter.cc:509] Updating boot flags... 
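Note on the kubelet crash loop: this restart fails exactly like the earlier one at 17:41:01, and both exits are caused by a single missing file, /var/lib/kubelet/config.yaml. A hypothetical check (not from the log) that reproduces the precondition; on a node bootstrapped the way the later entries suggest, the file appears once the node is configured, after which the systemd restart loop resolves on its own.

```python
import os

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

def kubelet_can_start(path: str = KUBELET_CONFIG) -> bool:
    """Mirror the precondition behind the status=1/FAILURE exits above."""
    if not os.path.isfile(path):
        # Same errno text that kubelet surfaces in the log.
        print(f"open {path}: no such file or directory")
        return False
    return True

if __name__ == "__main__":
    kubelet_can_start()
```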
Dec 12 17:41:15.690751 containerd[1929]: time="2025-12-12T17:41:15.690684132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:15.694918 containerd[1929]: time="2025-12-12T17:41:15.694868479Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013651" Dec 12 17:41:15.698112 containerd[1929]: time="2025-12-12T17:41:15.698074152Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:15.703463 containerd[1929]: time="2025-12-12T17:41:15.703373944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:15.704122 containerd[1929]: time="2025-12-12T17:41:15.704091050Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 4.37870075s" Dec 12 17:41:15.704844 containerd[1929]: time="2025-12-12T17:41:15.704224854Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Dec 12 17:41:19.302124 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:41:19.302689 systemd[1]: kubelet.service: Consumed 104ms CPU time, 106.3M memory peak. Dec 12 17:41:19.304310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:41:19.322998 systemd[1]: Reload requested from client PID 2952 ('systemctl') (unit session-9.scope)... Dec 12 17:41:19.323005 systemd[1]: Reloading... Dec 12 17:41:19.403204 zram_generator::config[3001]: No configuration found. Dec 12 17:41:19.555633 systemd[1]: Reloading finished in 232 ms. Dec 12 17:41:19.620508 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 17:41:19.620565 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 17:41:19.622261 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:41:19.622303 systemd[1]: kubelet.service: Consumed 73ms CPU time, 95.2M memory peak. Dec 12 17:41:19.624583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:41:19.827251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:41:19.829760 (kubelet)[3066]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:41:19.955161 kubelet[3066]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:41:19.955161 kubelet[3066]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:41:19.955161 kubelet[3066]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:41:19.955468 kubelet[3066]: I1212 17:41:19.955215 3066 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:41:20.799502 kubelet[3066]: I1212 17:41:20.799460 3066 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 17:41:20.799502 kubelet[3066]: I1212 17:41:20.799491 3066 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:41:20.799698 kubelet[3066]: I1212 17:41:20.799671 3066 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:41:20.824194 kubelet[3066]: E1212 17:41:20.823631 3066 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 17:41:20.824653 kubelet[3066]: I1212 17:41:20.824631 3066 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:41:20.830533 kubelet[3066]: I1212 17:41:20.830508 3066 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:41:20.832900 kubelet[3066]: I1212 17:41:20.832884 3066 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 17:41:20.833966 kubelet[3066]: I1212 17:41:20.833938 3066 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:41:20.834075 kubelet[3066]: I1212 17:41:20.833966 3066 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-c1c6b7e9cf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:41:20.834141 kubelet[3066]: I1212 17:41:20.834079 3066 
topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:41:20.834141 kubelet[3066]: I1212 17:41:20.834086 3066 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 17:41:20.834682 kubelet[3066]: I1212 17:41:20.834664 3066 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:41:20.836961 kubelet[3066]: I1212 17:41:20.836945 3066 kubelet.go:480] "Attempting to sync node with API server" Dec 12 17:41:20.836994 kubelet[3066]: I1212 17:41:20.836964 3066 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:41:20.836994 kubelet[3066]: I1212 17:41:20.836985 3066 kubelet.go:386] "Adding apiserver pod source" Dec 12 17:41:20.838138 kubelet[3066]: I1212 17:41:20.837840 3066 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:41:20.841744 kubelet[3066]: E1212 17:41:20.841722 3066 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-c1c6b7e9cf&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 17:41:20.842526 kubelet[3066]: I1212 17:41:20.842509 3066 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 17:41:20.842997 kubelet[3066]: I1212 17:41:20.842918 3066 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:41:20.843151 kubelet[3066]: W1212 17:41:20.843139 3066 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
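The nodeConfig=... blob logged by container_manager_linux.go just above is plain JSON, so its defaults can be pulled out mechanically. A sketch that extracts the hard-eviction thresholds; only a fragment of the logged object is reproduced here.

```python
import json

# Fragment copied from the nodeConfig dump above (three of the five thresholds).
node_config = json.loads("""
{"HardEvictionThresholds":[
 {"Signal":"memory.available","Operator":"LessThan",
  "Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"nodefs.available","Operator":"LessThan",
  "Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},
 {"Signal":"imagefs.available","Operator":"LessThan",
  "Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}]}
""")

for t in node_config["HardEvictionThresholds"]:
    v = t["Value"]
    # A threshold is either an absolute quantity ("100Mi") or a percentage.
    threshold = v["Quantity"] if v["Quantity"] else f"{v['Percentage']:.0%}"
    print(f'{t["Signal"]:20s} {t["Operator"]} {threshold}')
```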
Dec 12 17:41:20.843830 kubelet[3066]: E1212 17:41:20.843791 3066 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 17:41:20.845225 kubelet[3066]: I1212 17:41:20.845206 3066 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:41:20.845286 kubelet[3066]: I1212 17:41:20.845243 3066 server.go:1289] "Started kubelet" Dec 12 17:41:20.845674 kubelet[3066]: I1212 17:41:20.845335 3066 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:41:20.846109 kubelet[3066]: I1212 17:41:20.846097 3066 server.go:317] "Adding debug handlers to kubelet server" Dec 12 17:41:20.847906 kubelet[3066]: I1212 17:41:20.847404 3066 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:41:20.847906 kubelet[3066]: I1212 17:41:20.847654 3066 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:41:20.848550 kubelet[3066]: E1212 17:41:20.847733 3066 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-a-c1c6b7e9cf.188088a16df68bce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-a-c1c6b7e9cf,UID:ci-4459.2.2-a-c1c6b7e9cf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-a-c1c6b7e9cf,},FirstTimestamp:2025-12-12 17:41:20.845220814 +0000 UTC m=+1.012714623,LastTimestamp:2025-12-12 17:41:20.845220814 +0000 UTC m=+1.012714623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-a-c1c6b7e9cf,}" Dec 12 17:41:20.850418 kubelet[3066]: I1212 17:41:20.850391 3066 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:41:20.850850 kubelet[3066]: I1212 17:41:20.850830 3066 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:41:20.854724 kubelet[3066]: E1212 17:41:20.854698 3066 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:41:20.857593 kubelet[3066]: E1212 17:41:20.857568 3066 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-c1c6b7e9cf\" not found" Dec 12 17:41:20.858438 kubelet[3066]: I1212 17:41:20.858416 3066 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:41:20.858496 kubelet[3066]: I1212 17:41:20.858481 3066 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:41:20.858527 kubelet[3066]: I1212 17:41:20.858515 3066 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:41:20.859109 kubelet[3066]: E1212 17:41:20.859055 3066 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:41:20.859191 kubelet[3066]: E1212 17:41:20.859107 3066 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-c1c6b7e9cf?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="200ms" Dec 12 17:41:20.859320 kubelet[3066]: I1212 17:41:20.859298 3066 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:41:20.860322 kubelet[3066]: I1212 17:41:20.859355 3066 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:41:20.861005 kubelet[3066]: I1212 17:41:20.860989 3066 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:41:20.888065 kubelet[3066]: I1212 17:41:20.887885 3066 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:41:20.888065 kubelet[3066]: I1212 17:41:20.887900 3066 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:41:20.888065 kubelet[3066]: I1212 17:41:20.887914 3066 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:41:20.958319 kubelet[3066]: E1212 17:41:20.958290 3066 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-c1c6b7e9cf\" not found" Dec 12 17:41:21.058761 kubelet[3066]: E1212 17:41:21.058649 3066 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-c1c6b7e9cf\" not found" Dec 12 17:41:21.060078 kubelet[3066]: E1212 17:41:21.060051 3066 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-c1c6b7e9cf?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="400ms" Dec 12 17:41:21.159298 kubelet[3066]: E1212 17:41:21.159271 3066 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-c1c6b7e9cf\" not found" Dec 12 17:41:21.225853 kubelet[3066]: E1212 17:41:21.225760 3066 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-a-c1c6b7e9cf.188088a16df68bce default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-a-c1c6b7e9cf,UID:ci-4459.2.2-a-c1c6b7e9cf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-a-c1c6b7e9cf,},FirstTimestamp:2025-12-12 17:41:20.845220814 +0000 UTC m=+1.012714623,LastTimestamp:2025-12-12 17:41:20.845220814 +0000 UTC m=+1.012714623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-a-c1c6b7e9cf,}" Dec 12 17:41:21.260124 kubelet[3066]: E1212 17:41:21.260101 3066 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-c1c6b7e9cf\" not found" Dec 12 17:41:21.303009 kubelet[3066]: I1212 17:41:21.302992 3066 policy_none.go:49] "None policy: Start" Dec 12 17:41:21.303009 kubelet[3066]: I1212 17:41:21.303015 3066 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:41:21.303092 kubelet[3066]: I1212 17:41:21.303026 3066 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:41:21.343028 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 17:41:21.353557 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 17:41:21.356401 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 17:41:21.360710 kubelet[3066]: E1212 17:41:21.360687 3066 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-c1c6b7e9cf\" not found" Dec 12 17:41:21.361939 kubelet[3066]: E1212 17:41:21.361904 3066 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:41:21.362141 kubelet[3066]: I1212 17:41:21.362117 3066 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:41:21.362186 kubelet[3066]: I1212 17:41:21.362136 3066 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:41:21.362914 kubelet[3066]: I1212 17:41:21.362896 3066 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 17:41:21.364600 kubelet[3066]: I1212 17:41:21.364415 3066 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 17:41:21.364600 kubelet[3066]: I1212 17:41:21.364430 3066 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 17:41:21.364600 kubelet[3066]: I1212 17:41:21.364446 3066 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
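An illustrative probe (not part of the log): every "connection refused" error above targets the same endpoint, 10.200.20.14:6443. That is the API server the kubelet is trying to reach before the kube-apiserver static pod below has started; once that container is running, these dials begin to succeed.

```python
import socket

def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """TCP-connect probe; 'connection refused' and timeouts both return False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print(is_open("10.200.20.14", 6443))
```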
Dec 12 17:41:21.364600 kubelet[3066]: I1212 17:41:21.364451 3066 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 17:41:21.364600 kubelet[3066]: E1212 17:41:21.364482 3066 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 12 17:41:21.366283 kubelet[3066]: I1212 17:41:21.366169 3066 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:41:21.367385 kubelet[3066]: E1212 17:41:21.367147 3066 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 17:41:21.369066 kubelet[3066]: E1212 17:41:21.369040 3066 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 17:41:21.369171 kubelet[3066]: E1212 17:41:21.369079 3066 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-a-c1c6b7e9cf\" not found" Dec 12 17:41:21.460588 kubelet[3066]: E1212 17:41:21.460562 3066 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-c1c6b7e9cf?timeout=10s\": dial tcp 10.200.20.14:6443: connect: connection refused" interval="800ms" Dec 12 17:41:21.465069 kubelet[3066]: I1212 17:41:21.464858 3066 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:21.465157 kubelet[3066]: E1212 17:41:21.465089 3066 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:21.477208 systemd[1]: Created slice kubepods-burstable-podc036006f0ce0a667629595341559a1bc.slice - libcontainer container kubepods-burstable-podc036006f0ce0a667629595341559a1bc.slice. Dec 12 17:41:21.486007 kubelet[3066]: E1212 17:41:21.485920 3066 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-c1c6b7e9cf\" not found" node="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:21.489519 systemd[1]: Created slice kubepods-burstable-pod55e6174d4c1834fa571ac2ebf46e65d6.slice - libcontainer container kubepods-burstable-pod55e6174d4c1834fa571ac2ebf46e65d6.slice. Dec 12 17:41:21.490874 kubelet[3066]: E1212 17:41:21.490849 3066 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-c1c6b7e9cf\" not found" node="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:21.497591 systemd[1]: Created slice kubepods-burstable-pod243dd2f0c819b2d59bcd714f49b18249.slice - libcontainer container kubepods-burstable-pod243dd2f0c819b2d59bcd714f49b18249.slice. 
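The lease-controller retries above step their interval from 200ms to 400ms to 800ms, a doubling backoff. The doubling is taken from those three logged values; the cap and step count in this sketch are illustrative assumptions, since the log only shows the first steps.

```python
def backoff_intervals(initial_s=0.2, factor=2.0, cap_s=7.0, steps=6):
    """Yield the doubling retry intervals, capped at cap_s (cap is assumed)."""
    interval = initial_s
    for _ in range(steps):
        yield min(interval, cap_s)
        interval *= factor

# 0.2s -> 0.4s -> 0.8s match the logged intervals; later values are projected.
print([f"{i:g}s" for i in backoff_intervals()])
```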
Dec 12 17:41:21.499029 kubelet[3066]: E1212 17:41:21.499008 3066 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-c1c6b7e9cf\" not found" node="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:21.563362 kubelet[3066]: I1212 17:41:21.563318 3066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c036006f0ce0a667629595341559a1bc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf\" (UID: \"c036006f0ce0a667629595341559a1bc\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:21.563362 kubelet[3066]: I1212 17:41:21.563347 3066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55e6174d4c1834fa571ac2ebf46e65d6-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf\" (UID: \"55e6174d4c1834fa571ac2ebf46e65d6\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:21.563362 kubelet[3066]: I1212 17:41:21.563365 3066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55e6174d4c1834fa571ac2ebf46e65d6-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf\" (UID: \"55e6174d4c1834fa571ac2ebf46e65d6\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:21.563475 kubelet[3066]: I1212 17:41:21.563379 3066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c036006f0ce0a667629595341559a1bc-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf\" (UID: \"c036006f0ce0a667629595341559a1bc\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:21.563475 kubelet[3066]: I1212 17:41:21.563388 3066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c036006f0ce0a667629595341559a1bc-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf\" (UID: \"c036006f0ce0a667629595341559a1bc\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:21.563475 kubelet[3066]: I1212 17:41:21.563397 3066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55e6174d4c1834fa571ac2ebf46e65d6-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf\" (UID: \"55e6174d4c1834fa571ac2ebf46e65d6\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:21.563475 kubelet[3066]: I1212 17:41:21.563406 3066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55e6174d4c1834fa571ac2ebf46e65d6-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf\" (UID: \"55e6174d4c1834fa571ac2ebf46e65d6\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:21.563475 kubelet[3066]: I1212 17:41:21.563416 3066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55e6174d4c1834fa571ac2ebf46e65d6-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf\" (UID: \"55e6174d4c1834fa571ac2ebf46e65d6\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:21.563550 kubelet[3066]: I1212 17:41:21.563425 3066 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/243dd2f0c819b2d59bcd714f49b18249-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-c1c6b7e9cf\" (UID: \"243dd2f0c819b2d59bcd714f49b18249\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:21.667256 kubelet[3066]: I1212 17:41:21.667146 3066 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:21.668003 kubelet[3066]: E1212 17:41:21.667977 3066 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:21.787315 containerd[1929]: time="2025-12-12T17:41:21.787278246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf,Uid:c036006f0ce0a667629595341559a1bc,Namespace:kube-system,Attempt:0,}" Dec 12 17:41:21.792476 containerd[1929]: time="2025-12-12T17:41:21.792433141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf,Uid:55e6174d4c1834fa571ac2ebf46e65d6,Namespace:kube-system,Attempt:0,}" Dec 12 17:41:21.800193 containerd[1929]: time="2025-12-12T17:41:21.800075333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-c1c6b7e9cf,Uid:243dd2f0c819b2d59bcd714f49b18249,Namespace:kube-system,Attempt:0,}" Dec 12 17:41:21.865088 containerd[1929]: time="2025-12-12T17:41:21.865025731Z" level=info msg="connecting to shim 8b3c8b8980a8a7cb8464a5ceb1132cac922539077d92294d2471ec3909bdec71" address="unix:///run/containerd/s/25ac59661bef527c4d48b218af73283cf9be5cd8ebc32c2ea5fb7e8f8f728ddf" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:41:21.886311 systemd[1]: Started cri-containerd-8b3c8b8980a8a7cb8464a5ceb1132cac922539077d92294d2471ec3909bdec71.scope - libcontainer container 8b3c8b8980a8a7cb8464a5ceb1132cac922539077d92294d2471ec3909bdec71. Dec 12 17:41:21.897082 containerd[1929]: time="2025-12-12T17:41:21.896954762Z" level=info msg="connecting to shim c782661f4eccb63701594a298f0c7f9417d5a08ac0388eb1981084b99044dc70" address="unix:///run/containerd/s/e561dfbf5d877845abf9d062ef0f1b33f6a3209ee4e2855ad866975f614bcf1e" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:41:21.898242 containerd[1929]: time="2025-12-12T17:41:21.898218767Z" level=info msg="connecting to shim ed32dab0f0075a1b6b926bb9d35e9b4ba316f24b55d770df5556b1c49298586f" address="unix:///run/containerd/s/9406db263a26ec1427e34fcb86165e5d95af9ded1fadcc7c2d9670c6c36b5a23" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:41:21.926326 systemd[1]: Started cri-containerd-c782661f4eccb63701594a298f0c7f9417d5a08ac0388eb1981084b99044dc70.scope - libcontainer container c782661f4eccb63701594a298f0c7f9417d5a08ac0388eb1981084b99044dc70. Dec 12 17:41:21.930617 systemd[1]: Started cri-containerd-ed32dab0f0075a1b6b926bb9d35e9b4ba316f24b55d770df5556b1c49298586f.scope - libcontainer container ed32dab0f0075a1b6b926bb9d35e9b4ba316f24b55d770df5556b1c49298586f. 
Dec 12 17:41:21.945995 containerd[1929]: time="2025-12-12T17:41:21.945925408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf,Uid:c036006f0ce0a667629595341559a1bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b3c8b8980a8a7cb8464a5ceb1132cac922539077d92294d2471ec3909bdec71\"" Dec 12 17:41:21.957569 containerd[1929]: time="2025-12-12T17:41:21.957503155Z" level=info msg="CreateContainer within sandbox \"8b3c8b8980a8a7cb8464a5ceb1132cac922539077d92294d2471ec3909bdec71\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:41:21.980534 containerd[1929]: time="2025-12-12T17:41:21.980511988Z" level=info msg="Container a00d5b60288d2a0440e72c83ab483a87d1cab363cb4a8658cbd904ed2b693782: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:41:21.983768 containerd[1929]: time="2025-12-12T17:41:21.983743823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-c1c6b7e9cf,Uid:243dd2f0c819b2d59bcd714f49b18249,Namespace:kube-system,Attempt:0,} returns sandbox id \"c782661f4eccb63701594a298f0c7f9417d5a08ac0388eb1981084b99044dc70\"" Dec 12 17:41:21.990853 containerd[1929]: time="2025-12-12T17:41:21.990820489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf,Uid:55e6174d4c1834fa571ac2ebf46e65d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed32dab0f0075a1b6b926bb9d35e9b4ba316f24b55d770df5556b1c49298586f\"" Dec 12 17:41:21.991958 containerd[1929]: time="2025-12-12T17:41:21.991929987Z" level=info msg="CreateContainer within sandbox \"c782661f4eccb63701594a298f0c7f9417d5a08ac0388eb1981084b99044dc70\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 17:41:22.004592 containerd[1929]: time="2025-12-12T17:41:22.004558341Z" level=info msg="CreateContainer within sandbox \"8b3c8b8980a8a7cb8464a5ceb1132cac922539077d92294d2471ec3909bdec71\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a00d5b60288d2a0440e72c83ab483a87d1cab363cb4a8658cbd904ed2b693782\"" Dec 12 17:41:22.005022 containerd[1929]: time="2025-12-12T17:41:22.005000224Z" level=info msg="StartContainer for \"a00d5b60288d2a0440e72c83ab483a87d1cab363cb4a8658cbd904ed2b693782\"" Dec 12 17:41:22.005728 containerd[1929]: time="2025-12-12T17:41:22.005701808Z" level=info msg="connecting to shim a00d5b60288d2a0440e72c83ab483a87d1cab363cb4a8658cbd904ed2b693782" address="unix:///run/containerd/s/25ac59661bef527c4d48b218af73283cf9be5cd8ebc32c2ea5fb7e8f8f728ddf" protocol=ttrpc version=3 Dec 12 17:41:22.009792 containerd[1929]: time="2025-12-12T17:41:22.009552232Z" level=info msg="CreateContainer within sandbox \"ed32dab0f0075a1b6b926bb9d35e9b4ba316f24b55d770df5556b1c49298586f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:41:22.022292 systemd[1]: Started cri-containerd-a00d5b60288d2a0440e72c83ab483a87d1cab363cb4a8658cbd904ed2b693782.scope - libcontainer container a00d5b60288d2a0440e72c83ab483a87d1cab363cb4a8658cbd904ed2b693782. 
Dec 12 17:41:22.027054 containerd[1929]: time="2025-12-12T17:41:22.027027170Z" level=info msg="Container 70744f8a1fe0dcf93cdbe7285deff414d6ece20316eebefb542a299f4adf4f04: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:41:22.033908 containerd[1929]: time="2025-12-12T17:41:22.033880784Z" level=info msg="Container 71f4295f35ba47dde84e295f5669f5aea6c826926f53dbf3d2e3c9e80fdca49b: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:41:22.056207 containerd[1929]: time="2025-12-12T17:41:22.056035510Z" level=info msg="CreateContainer within sandbox \"ed32dab0f0075a1b6b926bb9d35e9b4ba316f24b55d770df5556b1c49298586f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"71f4295f35ba47dde84e295f5669f5aea6c826926f53dbf3d2e3c9e80fdca49b\"" Dec 12 17:41:22.060449 containerd[1929]: time="2025-12-12T17:41:22.060339201Z" level=info msg="StartContainer for \"71f4295f35ba47dde84e295f5669f5aea6c826926f53dbf3d2e3c9e80fdca49b\"" Dec 12 17:41:22.062431 containerd[1929]: time="2025-12-12T17:41:22.062153482Z" level=info msg="connecting to shim 71f4295f35ba47dde84e295f5669f5aea6c826926f53dbf3d2e3c9e80fdca49b" address="unix:///run/containerd/s/9406db263a26ec1427e34fcb86165e5d95af9ded1fadcc7c2d9670c6c36b5a23" protocol=ttrpc version=3 Dec 12 17:41:22.068794 containerd[1929]: time="2025-12-12T17:41:22.068768499Z" level=info msg="StartContainer for \"a00d5b60288d2a0440e72c83ab483a87d1cab363cb4a8658cbd904ed2b693782\" returns successfully" Dec 12 17:41:22.070674 containerd[1929]: time="2025-12-12T17:41:22.070637814Z" level=info msg="CreateContainer within sandbox \"c782661f4eccb63701594a298f0c7f9417d5a08ac0388eb1981084b99044dc70\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"70744f8a1fe0dcf93cdbe7285deff414d6ece20316eebefb542a299f4adf4f04\"" Dec 12 17:41:22.071867 kubelet[3066]: I1212 17:41:22.071843 3066 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:22.072144 kubelet[3066]: E1212 17:41:22.072114 3066 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.14:6443/api/v1/nodes\": dial tcp 10.200.20.14:6443: connect: connection refused" node="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:22.073520 containerd[1929]: time="2025-12-12T17:41:22.073484303Z" level=info msg="StartContainer for \"70744f8a1fe0dcf93cdbe7285deff414d6ece20316eebefb542a299f4adf4f04\"" Dec 12 17:41:22.075519 containerd[1929]: time="2025-12-12T17:41:22.075406443Z" level=info msg="connecting to shim 70744f8a1fe0dcf93cdbe7285deff414d6ece20316eebefb542a299f4adf4f04" address="unix:///run/containerd/s/e561dfbf5d877845abf9d062ef0f1b33f6a3209ee4e2855ad866975f614bcf1e" protocol=ttrpc version=3 Dec 12 17:41:22.096376 systemd[1]: Started cri-containerd-71f4295f35ba47dde84e295f5669f5aea6c826926f53dbf3d2e3c9e80fdca49b.scope - libcontainer container 71f4295f35ba47dde84e295f5669f5aea6c826926f53dbf3d2e3c9e80fdca49b. Dec 12 17:41:22.099399 systemd[1]: Started cri-containerd-70744f8a1fe0dcf93cdbe7285deff414d6ece20316eebefb542a299f4adf4f04.scope - libcontainer container 70744f8a1fe0dcf93cdbe7285deff414d6ece20316eebefb542a299f4adf4f04. 
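The containerd messages above pair each sandbox or container with a shim socket ("connecting to shim <id> address=unix:///run/containerd/s/..."). Collecting those pairs shows that a container created inside a sandbox reuses the sandbox's shim address: the kube-apiserver sandbox 8b3c8b89... and its container a00d5b60... both dial the same /run/containerd/s/25ac5966... socket. A sketch using two of the logged lines:

```python
import re

SHIM = re.compile(
    r'connecting to shim (?P<id>[0-9a-f]{64}) address="(?P<addr>unix://[^"]+)"'
)

# Messages copied from the log above (sandbox first, then its container).
lines = [
    'connecting to shim 8b3c8b8980a8a7cb8464a5ceb1132cac922539077d92294d2471ec3909bdec71 address="unix:///run/containerd/s/25ac59661bef527c4d48b218af73283cf9be5cd8ebc32c2ea5fb7e8f8f728ddf"',
    'connecting to shim a00d5b60288d2a0440e72c83ab483a87d1cab363cb4a8658cbd904ed2b693782 address="unix:///run/containerd/s/25ac59661bef527c4d48b218af73283cf9be5cd8ebc32c2ea5fb7e8f8f728ddf"',
]

for line in lines:
    m = SHIM.search(line)
    if m:
        print(m["id"][:12], "->", m["addr"])  # both ids map to the same socket
```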
Dec 12 17:41:22.155352 containerd[1929]: time="2025-12-12T17:41:22.154307083Z" level=info msg="StartContainer for \"71f4295f35ba47dde84e295f5669f5aea6c826926f53dbf3d2e3c9e80fdca49b\" returns successfully" Dec 12 17:41:22.170435 containerd[1929]: time="2025-12-12T17:41:22.170321075Z" level=info msg="StartContainer for \"70744f8a1fe0dcf93cdbe7285deff414d6ece20316eebefb542a299f4adf4f04\" returns successfully" Dec 12 17:41:22.374320 kubelet[3066]: E1212 17:41:22.374293 3066 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-c1c6b7e9cf\" not found" node="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:22.377287 kubelet[3066]: E1212 17:41:22.377268 3066 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-c1c6b7e9cf\" not found" node="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:22.378256 kubelet[3066]: E1212 17:41:22.378240 3066 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-c1c6b7e9cf\" not found" node="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:22.874858 kubelet[3066]: I1212 17:41:22.874818 3066 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:23.296035 kubelet[3066]: E1212 17:41:23.295998 3066 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.2-a-c1c6b7e9cf\" not found" node="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:23.321445 kubelet[3066]: I1212 17:41:23.321422 3066 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:23.359217 kubelet[3066]: I1212 17:41:23.359193 3066 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:23.378096 kubelet[3066]: I1212 17:41:23.378076 3066 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:23.379249 kubelet[3066]: I1212 17:41:23.378336 3066 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:23.386209 kubelet[3066]: E1212 17:41:23.386184 3066 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-c1c6b7e9cf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:23.386401 kubelet[3066]: E1212 17:41:23.386384 3066 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:23.412401 kubelet[3066]: E1212 17:41:23.412375 3066 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:23.412401 kubelet[3066]: I1212 17:41:23.412398 3066 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:23.413868 kubelet[3066]: E1212 17:41:23.413848 3066 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:23.413868 kubelet[3066]: I1212 17:41:23.413864 3066 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:23.416097 kubelet[3066]: E1212 17:41:23.416072 3066 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-c1c6b7e9cf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:23.844465 kubelet[3066]: I1212 17:41:23.844433 3066 apiserver.go:52] "Watching apiserver" Dec 12 17:41:23.858745 kubelet[3066]: I1212 17:41:23.858723 3066 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:41:24.380020 kubelet[3066]: I1212 17:41:24.379984 3066 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:24.380612 kubelet[3066]: I1212 17:41:24.380142 3066 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:24.393761 kubelet[3066]: I1212 17:41:24.393739 3066 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 17:41:24.394672 kubelet[3066]: I1212 17:41:24.394649 3066 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 17:41:25.376362 systemd[1]: Reload requested from client PID 3349 ('systemctl') (unit session-9.scope)... Dec 12 17:41:25.376379 systemd[1]: Reloading... Dec 12 17:41:25.451734 zram_generator::config[3396]: No configuration found. Dec 12 17:41:25.610722 systemd[1]: Reloading finished in 234 ms. Dec 12 17:41:25.643722 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:41:25.660836 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:41:25.661057 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:41:25.661114 systemd[1]: kubelet.service: Consumed 1.060s CPU time, 127.9M memory peak. Dec 12 17:41:25.662994 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:41:25.754733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:41:25.761447 (kubelet)[3460]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:41:25.798250 kubelet[3460]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:41:25.798250 kubelet[3460]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:41:25.798250 kubelet[3460]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 12 17:41:25.798494 kubelet[3460]: I1212 17:41:25.798291 3460 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:41:25.804429 kubelet[3460]: I1212 17:41:25.804402 3460 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 17:41:25.804429 kubelet[3460]: I1212 17:41:25.804425 3460 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:41:25.804584 kubelet[3460]: I1212 17:41:25.804564 3460 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:41:25.806370 kubelet[3460]: I1212 17:41:25.806347 3460 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 17:41:25.808203 kubelet[3460]: I1212 17:41:25.807954 3460 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:41:25.814036 kubelet[3460]: I1212 17:41:25.814021 3460 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:41:25.817120 kubelet[3460]: I1212 17:41:25.817081 3460 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 17:41:25.817482 kubelet[3460]: I1212 17:41:25.817465 3460 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:41:25.817643 kubelet[3460]: I1212 17:41:25.817545 3460 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-c1c6b7e9cf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:41:25.817838 kubelet[3460]: I1212 17:41:25.817768 3460 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:41:25.817838 kubelet[3460]: I1212 17:41:25.817783 3460 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 17:41:25.817838 kubelet[3460]: I1212 17:41:25.817818 3460 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:41:25.818218 kubelet[3460]: 
I1212 17:41:25.818204 3460 kubelet.go:480] "Attempting to sync node with API server" Dec 12 17:41:25.818630 kubelet[3460]: I1212 17:41:25.818460 3460 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:41:25.818630 kubelet[3460]: I1212 17:41:25.818495 3460 kubelet.go:386] "Adding apiserver pod source" Dec 12 17:41:25.818630 kubelet[3460]: I1212 17:41:25.818503 3460 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:41:25.820410 kubelet[3460]: I1212 17:41:25.820389 3460 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 17:41:25.820410 kubelet[3460]: I1212 17:41:25.820730 3460 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:41:25.823850 kubelet[3460]: I1212 17:41:25.823831 3460 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:41:25.823911 kubelet[3460]: I1212 17:41:25.823865 3460 server.go:1289] "Started kubelet" Dec 12 17:41:25.823982 kubelet[3460]: I1212 17:41:25.823963 3460 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:41:25.824135 kubelet[3460]: I1212 17:41:25.824099 3460 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:41:25.824331 kubelet[3460]: I1212 17:41:25.824315 3460 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:41:25.824793 kubelet[3460]: I1212 17:41:25.824730 3460 server.go:317] "Adding debug handlers to kubelet server" Dec 12 17:41:25.826638 kubelet[3460]: I1212 17:41:25.826103 3460 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:41:25.827404 kubelet[3460]: I1212 17:41:25.827148 3460 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:41:25.841564 kubelet[3460]: I1212 17:41:25.841548 3460 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:41:25.842106 kubelet[3460]: E1212 17:41:25.841781 3460 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-c1c6b7e9cf\" not found" Dec 12 17:41:25.844677 kubelet[3460]: I1212 17:41:25.844660 3460 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:41:25.845232 kubelet[3460]: I1212 17:41:25.845112 3460 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:41:25.848823 kubelet[3460]: I1212 17:41:25.848678 3460 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 17:41:25.851231 kubelet[3460]: I1212 17:41:25.850282 3460 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 17:41:25.851529 kubelet[3460]: I1212 17:41:25.851322 3460 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 17:41:25.851529 kubelet[3460]: I1212 17:41:25.851343 3460 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 17:41:25.851529 kubelet[3460]: I1212 17:41:25.851348 3460 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 17:41:25.851529 kubelet[3460]: E1212 17:41:25.851380 3460 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:41:25.851529 kubelet[3460]: I1212 17:41:25.851455 3460 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:41:25.851529 kubelet[3460]: I1212 17:41:25.851525 3460 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:41:25.858521 kubelet[3460]: I1212 17:41:25.858506 3460 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:41:25.860690 kubelet[3460]: E1212 17:41:25.860668 3460 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:41:25.906110 kubelet[3460]: I1212 17:41:25.905408 3460 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:41:25.906110 kubelet[3460]: I1212 17:41:25.905423 3460 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:41:25.906110 kubelet[3460]: I1212 17:41:25.905441 3460 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:41:25.906485 kubelet[3460]: I1212 17:41:25.906340 3460 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 17:41:25.907298 kubelet[3460]: I1212 17:41:25.906566 3460 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 17:41:25.907298 kubelet[3460]: I1212 17:41:25.906598 3460 policy_none.go:49] "None policy: Start" Dec 12 17:41:25.907298 kubelet[3460]: I1212 17:41:25.906606 3460 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:41:25.907298 kubelet[3460]: I1212 17:41:25.906618 3460 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:41:25.907298 kubelet[3460]: I1212 17:41:25.906707 3460 state_mem.go:75] "Updated machine memory state" Dec 12 17:41:25.913380 kubelet[3460]: E1212 17:41:25.913366 3460 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:41:25.914979 kubelet[3460]: I1212 17:41:25.914967 3460 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:41:25.915500 kubelet[3460]: I1212 17:41:25.915443 3460 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:41:25.915929 kubelet[3460]: I1212 17:41:25.915911 3460 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:41:25.919312 kubelet[3460]: E1212 17:41:25.919071 3460 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:41:25.952469 kubelet[3460]: I1212 17:41:25.952443 3460 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:25.953008 kubelet[3460]: I1212 17:41:25.952668 3460 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:25.953260 kubelet[3460]: I1212 17:41:25.952767 3460 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:25.965705 kubelet[3460]: I1212 17:41:25.965686 3460 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 17:41:25.965829 kubelet[3460]: E1212 17:41:25.965818 3460 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:25.966000 kubelet[3460]: I1212 17:41:25.965769 3460 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 17:41:25.966374 kubelet[3460]: I1212 17:41:25.966351 3460 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 17:41:25.966445 kubelet[3460]: E1212 17:41:25.966386 3460 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-c1c6b7e9cf\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:26.018485 kubelet[3460]: I1212 17:41:26.018455 3460 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:26.039864 kubelet[3460]: I1212 17:41:26.039768 3460 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:26.039984 kubelet[3460]: I1212 17:41:26.039975 3460 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:26.048632 kubelet[3460]: I1212 17:41:26.048547 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c036006f0ce0a667629595341559a1bc-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf\" (UID: \"c036006f0ce0a667629595341559a1bc\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:26.048978 kubelet[3460]: I1212 17:41:26.048852 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c036006f0ce0a667629595341559a1bc-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf\" (UID: \"c036006f0ce0a667629595341559a1bc\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:26.048978 kubelet[3460]: I1212 17:41:26.048879 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55e6174d4c1834fa571ac2ebf46e65d6-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf\" (UID: \"55e6174d4c1834fa571ac2ebf46e65d6\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 
17:41:26.050029 kubelet[3460]: I1212 17:41:26.049594 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55e6174d4c1834fa571ac2ebf46e65d6-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf\" (UID: \"55e6174d4c1834fa571ac2ebf46e65d6\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:26.050029 kubelet[3460]: I1212 17:41:26.049626 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55e6174d4c1834fa571ac2ebf46e65d6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf\" (UID: \"55e6174d4c1834fa571ac2ebf46e65d6\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:26.050029 kubelet[3460]: I1212 17:41:26.049640 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55e6174d4c1834fa571ac2ebf46e65d6-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf\" (UID: \"55e6174d4c1834fa571ac2ebf46e65d6\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:26.050029 kubelet[3460]: I1212 17:41:26.049655 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55e6174d4c1834fa571ac2ebf46e65d6-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf\" (UID: \"55e6174d4c1834fa571ac2ebf46e65d6\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:26.050029 kubelet[3460]: I1212 17:41:26.049678 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/243dd2f0c819b2d59bcd714f49b18249-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-c1c6b7e9cf\" (UID: \"243dd2f0c819b2d59bcd714f49b18249\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:26.050158 kubelet[3460]: I1212 17:41:26.049689 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c036006f0ce0a667629595341559a1bc-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf\" (UID: \"c036006f0ce0a667629595341559a1bc\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:26.819433 kubelet[3460]: I1212 17:41:26.819399 3460 apiserver.go:52] "Watching apiserver" Dec 12 17:41:26.845425 kubelet[3460]: I1212 17:41:26.845391 3460 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:41:26.891310 kubelet[3460]: I1212 17:41:26.890876 3460 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:26.898974 kubelet[3460]: I1212 17:41:26.898241 3460 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 12 17:41:26.899213 kubelet[3460]: E1212 17:41:26.899074 3460 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:41:26.915863 kubelet[3460]: I1212 17:41:26.915783 
3460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-c1c6b7e9cf" podStartSLOduration=1.915773532 podStartE2EDuration="1.915773532s" podCreationTimestamp="2025-12-12 17:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:41:26.906215245 +0000 UTC m=+1.141632916" watchObservedRunningTime="2025-12-12 17:41:26.915773532 +0000 UTC m=+1.151191163" Dec 12 17:41:26.916097 kubelet[3460]: I1212 17:41:26.916064 3460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.2-a-c1c6b7e9cf" podStartSLOduration=2.916056458 podStartE2EDuration="2.916056458s" podCreationTimestamp="2025-12-12 17:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:41:26.915653794 +0000 UTC m=+1.151071425" watchObservedRunningTime="2025-12-12 17:41:26.916056458 +0000 UTC m=+1.151474089" Dec 12 17:41:26.936543 kubelet[3460]: I1212 17:41:26.936498 3460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-a-c1c6b7e9cf" podStartSLOduration=2.936488571 podStartE2EDuration="2.936488571s" podCreationTimestamp="2025-12-12 17:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:41:26.926126915 +0000 UTC m=+1.161544570" watchObservedRunningTime="2025-12-12 17:41:26.936488571 +0000 UTC m=+1.171906210" Dec 12 17:41:30.554703 kubelet[3460]: I1212 17:41:30.554582 3460 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 17:41:30.555111 containerd[1929]: time="2025-12-12T17:41:30.554851651Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 17:41:30.555436 kubelet[3460]: I1212 17:41:30.555410 3460 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 17:41:31.215592 systemd[1]: Created slice kubepods-besteffort-pod55e440a5_54d6_41fe_9c5a_03729c0e1b0d.slice - libcontainer container kubepods-besteffort-pod55e440a5_54d6_41fe_9c5a_03729c0e1b0d.slice. 
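Each pod_startup_latency_tracker entry above carries both podCreationTimestamp and watchObservedRunningTime; for these static pods (firstStartedPulling is the zero time, so no pull interval is subtracted) podStartSLOduration is simply their difference. A small Go check of that arithmetic against the kube-controller-manager entry, re-deriving the logged value rather than reproducing kubelet code:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"

	// Values copied from the kube-controller-manager entry above.
	created, err := time.Parse(layout, "2025-12-12 17:41:25 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// time.Parse accepts fractional seconds even though the layout omits them.
	observed, err := time.Parse(layout, "2025-12-12 17:41:26.915773532 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints 1.915773532s, matching the logged podStartSLOduration.
	fmt.Println(observed.Sub(created))
}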
Dec 12 17:41:31.283097 kubelet[3460]: I1212 17:41:31.283069 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/55e440a5-54d6-41fe-9c5a-03729c0e1b0d-kube-proxy\") pod \"kube-proxy-mcmt4\" (UID: \"55e440a5-54d6-41fe-9c5a-03729c0e1b0d\") " pod="kube-system/kube-proxy-mcmt4" Dec 12 17:41:31.283097 kubelet[3460]: I1212 17:41:31.283099 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55e440a5-54d6-41fe-9c5a-03729c0e1b0d-xtables-lock\") pod \"kube-proxy-mcmt4\" (UID: \"55e440a5-54d6-41fe-9c5a-03729c0e1b0d\") " pod="kube-system/kube-proxy-mcmt4" Dec 12 17:41:31.283244 kubelet[3460]: I1212 17:41:31.283114 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55e440a5-54d6-41fe-9c5a-03729c0e1b0d-lib-modules\") pod \"kube-proxy-mcmt4\" (UID: \"55e440a5-54d6-41fe-9c5a-03729c0e1b0d\") " pod="kube-system/kube-proxy-mcmt4" Dec 12 17:41:31.283244 kubelet[3460]: I1212 17:41:31.283125 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx2p7\" (UniqueName: \"kubernetes.io/projected/55e440a5-54d6-41fe-9c5a-03729c0e1b0d-kube-api-access-wx2p7\") pod \"kube-proxy-mcmt4\" (UID: \"55e440a5-54d6-41fe-9c5a-03729c0e1b0d\") " pod="kube-system/kube-proxy-mcmt4" Dec 12 17:41:31.523786 containerd[1929]: time="2025-12-12T17:41:31.523386058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mcmt4,Uid:55e440a5-54d6-41fe-9c5a-03729c0e1b0d,Namespace:kube-system,Attempt:0,}" Dec 12 17:41:31.567776 containerd[1929]: time="2025-12-12T17:41:31.567714155Z" level=info msg="connecting to shim 485aa344376773e13b4235e987d14843d33621ca6aa867f4cf44c1637d5c54d1" address="unix:///run/containerd/s/0a40d9916e3cb6aba5543ab0e403d4c60cf4bc2fac45df165a87a0d41e336466" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:41:31.592339 systemd[1]: Started cri-containerd-485aa344376773e13b4235e987d14843d33621ca6aa867f4cf44c1637d5c54d1.scope - libcontainer container 485aa344376773e13b4235e987d14843d33621ca6aa867f4cf44c1637d5c54d1. Dec 12 17:41:31.619510 containerd[1929]: time="2025-12-12T17:41:31.619480531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mcmt4,Uid:55e440a5-54d6-41fe-9c5a-03729c0e1b0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"485aa344376773e13b4235e987d14843d33621ca6aa867f4cf44c1637d5c54d1\"" Dec 12 17:41:31.628570 containerd[1929]: time="2025-12-12T17:41:31.628537303Z" level=info msg="CreateContainer within sandbox \"485aa344376773e13b4235e987d14843d33621ca6aa867f4cf44c1637d5c54d1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 17:41:31.668748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3061263619.mount: Deactivated successfully. 
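The shim address in the "connecting to shim" entry above is a unix:// URL. As a rough illustration only: strip the scheme and the socket can be dialed with the standard library. Real containerd speaks ttrpc over this connection (protocol=ttrpc version=3 in the log), which this sketch does not attempt:

package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

func main() {
	// Shim address copied from the "connecting to shim" entry above.
	addr := "unix:///run/containerd/s/0a40d9916e3cb6aba5543ab0e403d4c60cf4bc2fac45df165a87a0d41e336466"

	path := strings.TrimPrefix(addr, "unix://")
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		// Expected anywhere but on the node itself: the socket is node-local.
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to shim socket at", path)
}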
Dec 12 17:41:31.670975 containerd[1929]: time="2025-12-12T17:41:31.669669558Z" level=info msg="Container 0db702a7ec0ba3f66d8a09789d0ee9087021604af2ce87c22875683a7b7e50e3: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:41:31.692416 containerd[1929]: time="2025-12-12T17:41:31.692377115Z" level=info msg="CreateContainer within sandbox \"485aa344376773e13b4235e987d14843d33621ca6aa867f4cf44c1637d5c54d1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0db702a7ec0ba3f66d8a09789d0ee9087021604af2ce87c22875683a7b7e50e3\"" Dec 12 17:41:31.692856 containerd[1929]: time="2025-12-12T17:41:31.692829798Z" level=info msg="StartContainer for \"0db702a7ec0ba3f66d8a09789d0ee9087021604af2ce87c22875683a7b7e50e3\"" Dec 12 17:41:31.694332 containerd[1929]: time="2025-12-12T17:41:31.694306752Z" level=info msg="connecting to shim 0db702a7ec0ba3f66d8a09789d0ee9087021604af2ce87c22875683a7b7e50e3" address="unix:///run/containerd/s/0a40d9916e3cb6aba5543ab0e403d4c60cf4bc2fac45df165a87a0d41e336466" protocol=ttrpc version=3 Dec 12 17:41:31.694899 systemd[1]: Created slice kubepods-besteffort-pod1ef1c77f_b47b_4d73_8c89_60769253b172.slice - libcontainer container kubepods-besteffort-pod1ef1c77f_b47b_4d73_8c89_60769253b172.slice. Dec 12 17:41:31.712352 systemd[1]: Started cri-containerd-0db702a7ec0ba3f66d8a09789d0ee9087021604af2ce87c22875683a7b7e50e3.scope - libcontainer container 0db702a7ec0ba3f66d8a09789d0ee9087021604af2ce87c22875683a7b7e50e3. Dec 12 17:41:31.762636 containerd[1929]: time="2025-12-12T17:41:31.762584492Z" level=info msg="StartContainer for \"0db702a7ec0ba3f66d8a09789d0ee9087021604af2ce87c22875683a7b7e50e3\" returns successfully" Dec 12 17:41:31.785318 kubelet[3460]: I1212 17:41:31.784972 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhgzq\" (UniqueName: \"kubernetes.io/projected/1ef1c77f-b47b-4d73-8c89-60769253b172-kube-api-access-hhgzq\") pod \"tigera-operator-7dcd859c48-gqf62\" (UID: \"1ef1c77f-b47b-4d73-8c89-60769253b172\") " pod="tigera-operator/tigera-operator-7dcd859c48-gqf62" Dec 12 17:41:31.785318 kubelet[3460]: I1212 17:41:31.785008 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1ef1c77f-b47b-4d73-8c89-60769253b172-var-lib-calico\") pod \"tigera-operator-7dcd859c48-gqf62\" (UID: \"1ef1c77f-b47b-4d73-8c89-60769253b172\") " pod="tigera-operator/tigera-operator-7dcd859c48-gqf62" Dec 12 17:41:31.911104 kubelet[3460]: I1212 17:41:31.910998 3460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mcmt4" podStartSLOduration=0.910984106 podStartE2EDuration="910.984106ms" podCreationTimestamp="2025-12-12 17:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:41:31.910926753 +0000 UTC m=+6.146344384" watchObservedRunningTime="2025-12-12 17:41:31.910984106 +0000 UTC m=+6.146401745" Dec 12 17:41:31.999344 containerd[1929]: time="2025-12-12T17:41:31.999305349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-gqf62,Uid:1ef1c77f-b47b-4d73-8c89-60769253b172,Namespace:tigera-operator,Attempt:0,}" Dec 12 17:41:32.040350 containerd[1929]: time="2025-12-12T17:41:32.040225758Z" level=info msg="connecting to shim 9a32a849f8ca9f604170f5c4f78dd05a0f50989f9ed24f630b5fbe4bed7f0c80" 
address="unix:///run/containerd/s/d2d1a51998cc6b9dfab12afe148a1df006222d69566dc48cd6ab36cdd4f6ae52" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:41:32.059304 systemd[1]: Started cri-containerd-9a32a849f8ca9f604170f5c4f78dd05a0f50989f9ed24f630b5fbe4bed7f0c80.scope - libcontainer container 9a32a849f8ca9f604170f5c4f78dd05a0f50989f9ed24f630b5fbe4bed7f0c80. Dec 12 17:41:32.086865 containerd[1929]: time="2025-12-12T17:41:32.086831765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-gqf62,Uid:1ef1c77f-b47b-4d73-8c89-60769253b172,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9a32a849f8ca9f604170f5c4f78dd05a0f50989f9ed24f630b5fbe4bed7f0c80\"" Dec 12 17:41:32.088827 containerd[1929]: time="2025-12-12T17:41:32.088716649Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 17:41:33.965817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3203263146.mount: Deactivated successfully. Dec 12 17:41:35.557209 containerd[1929]: time="2025-12-12T17:41:35.557121148Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:35.562578 containerd[1929]: time="2025-12-12T17:41:35.562546989Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Dec 12 17:41:35.565817 containerd[1929]: time="2025-12-12T17:41:35.565777410Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:35.570417 containerd[1929]: time="2025-12-12T17:41:35.570379616Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:35.570999 containerd[1929]: time="2025-12-12T17:41:35.570652591Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 3.481913492s" Dec 12 17:41:35.570999 containerd[1929]: time="2025-12-12T17:41:35.570678343Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 12 17:41:35.578981 containerd[1929]: time="2025-12-12T17:41:35.578951308Z" level=info msg="CreateContainer within sandbox \"9a32a849f8ca9f604170f5c4f78dd05a0f50989f9ed24f630b5fbe4bed7f0c80\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 17:41:35.598344 containerd[1929]: time="2025-12-12T17:41:35.598314841Z" level=info msg="Container 49c89fecffb1d845eae94b70f0343399a69c18c4a462988aff7f25bfd8eb8bb0: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:41:35.619334 containerd[1929]: time="2025-12-12T17:41:35.619305757Z" level=info msg="CreateContainer within sandbox \"9a32a849f8ca9f604170f5c4f78dd05a0f50989f9ed24f630b5fbe4bed7f0c80\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"49c89fecffb1d845eae94b70f0343399a69c18c4a462988aff7f25bfd8eb8bb0\"" Dec 12 17:41:35.620726 containerd[1929]: time="2025-12-12T17:41:35.620690694Z" level=info msg="StartContainer for 
\"49c89fecffb1d845eae94b70f0343399a69c18c4a462988aff7f25bfd8eb8bb0\"" Dec 12 17:41:35.621487 containerd[1929]: time="2025-12-12T17:41:35.621427199Z" level=info msg="connecting to shim 49c89fecffb1d845eae94b70f0343399a69c18c4a462988aff7f25bfd8eb8bb0" address="unix:///run/containerd/s/d2d1a51998cc6b9dfab12afe148a1df006222d69566dc48cd6ab36cdd4f6ae52" protocol=ttrpc version=3 Dec 12 17:41:35.638287 systemd[1]: Started cri-containerd-49c89fecffb1d845eae94b70f0343399a69c18c4a462988aff7f25bfd8eb8bb0.scope - libcontainer container 49c89fecffb1d845eae94b70f0343399a69c18c4a462988aff7f25bfd8eb8bb0. Dec 12 17:41:35.666254 containerd[1929]: time="2025-12-12T17:41:35.666218234Z" level=info msg="StartContainer for \"49c89fecffb1d845eae94b70f0343399a69c18c4a462988aff7f25bfd8eb8bb0\" returns successfully" Dec 12 17:41:35.924256 kubelet[3460]: I1212 17:41:35.924016 3460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-gqf62" podStartSLOduration=1.440669046 podStartE2EDuration="4.924001532s" podCreationTimestamp="2025-12-12 17:41:31 +0000 UTC" firstStartedPulling="2025-12-12 17:41:32.088271751 +0000 UTC m=+6.323689390" lastFinishedPulling="2025-12-12 17:41:35.571604245 +0000 UTC m=+9.807021876" observedRunningTime="2025-12-12 17:41:35.923909898 +0000 UTC m=+10.159327529" watchObservedRunningTime="2025-12-12 17:41:35.924001532 +0000 UTC m=+10.159419163" Dec 12 17:41:40.619070 sudo[2368]: pam_unix(sudo:session): session closed for user root Dec 12 17:41:40.697650 sshd[2367]: Connection closed by 10.200.16.10 port 33564 Dec 12 17:41:40.698367 sshd-session[2364]: pam_unix(sshd:session): session closed for user core Dec 12 17:41:40.702881 systemd[1]: sshd@6-10.200.20.14:22-10.200.16.10:33564.service: Deactivated successfully. Dec 12 17:41:40.704594 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 17:41:40.704982 systemd[1]: session-9.scope: Consumed 4.408s CPU time, 221.9M memory peak. Dec 12 17:41:40.707013 systemd-logind[1864]: Session 9 logged out. Waiting for processes to exit. Dec 12 17:41:40.708000 systemd-logind[1864]: Removed session 9. Dec 12 17:41:45.335445 systemd[1]: Created slice kubepods-besteffort-pode4b8f5e3_3f86_462e_9bac_bcb5ecc7be06.slice - libcontainer container kubepods-besteffort-pode4b8f5e3_3f86_462e_9bac_bcb5ecc7be06.slice. 
Dec 12 17:41:45.360954 kubelet[3460]: I1212 17:41:45.360889 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kkwp\" (UniqueName: \"kubernetes.io/projected/e4b8f5e3-3f86-462e-9bac-bcb5ecc7be06-kube-api-access-5kkwp\") pod \"calico-typha-59766dc6fc-l8zd5\" (UID: \"e4b8f5e3-3f86-462e-9bac-bcb5ecc7be06\") " pod="calico-system/calico-typha-59766dc6fc-l8zd5" Dec 12 17:41:45.360954 kubelet[3460]: I1212 17:41:45.360922 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4b8f5e3-3f86-462e-9bac-bcb5ecc7be06-tigera-ca-bundle\") pod \"calico-typha-59766dc6fc-l8zd5\" (UID: \"e4b8f5e3-3f86-462e-9bac-bcb5ecc7be06\") " pod="calico-system/calico-typha-59766dc6fc-l8zd5" Dec 12 17:41:45.360954 kubelet[3460]: I1212 17:41:45.360935 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e4b8f5e3-3f86-462e-9bac-bcb5ecc7be06-typha-certs\") pod \"calico-typha-59766dc6fc-l8zd5\" (UID: \"e4b8f5e3-3f86-462e-9bac-bcb5ecc7be06\") " pod="calico-system/calico-typha-59766dc6fc-l8zd5" Dec 12 17:41:45.536585 systemd[1]: Created slice kubepods-besteffort-pod09c18757_e717_4683_aa08_ca2b10d389e6.slice - libcontainer container kubepods-besteffort-pod09c18757_e717_4683_aa08_ca2b10d389e6.slice. Dec 12 17:41:45.563016 kubelet[3460]: I1212 17:41:45.562791 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kns4z\" (UniqueName: \"kubernetes.io/projected/09c18757-e717-4683-aa08-ca2b10d389e6-kube-api-access-kns4z\") pod \"calico-node-d78cr\" (UID: \"09c18757-e717-4683-aa08-ca2b10d389e6\") " pod="calico-system/calico-node-d78cr" Dec 12 17:41:45.563016 kubelet[3460]: I1212 17:41:45.562826 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/09c18757-e717-4683-aa08-ca2b10d389e6-flexvol-driver-host\") pod \"calico-node-d78cr\" (UID: \"09c18757-e717-4683-aa08-ca2b10d389e6\") " pod="calico-system/calico-node-d78cr" Dec 12 17:41:45.563016 kubelet[3460]: I1212 17:41:45.562839 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/09c18757-e717-4683-aa08-ca2b10d389e6-var-run-calico\") pod \"calico-node-d78cr\" (UID: \"09c18757-e717-4683-aa08-ca2b10d389e6\") " pod="calico-system/calico-node-d78cr" Dec 12 17:41:45.563016 kubelet[3460]: I1212 17:41:45.562850 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09c18757-e717-4683-aa08-ca2b10d389e6-xtables-lock\") pod \"calico-node-d78cr\" (UID: \"09c18757-e717-4683-aa08-ca2b10d389e6\") " pod="calico-system/calico-node-d78cr" Dec 12 17:41:45.563016 kubelet[3460]: I1212 17:41:45.562860 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/09c18757-e717-4683-aa08-ca2b10d389e6-var-lib-calico\") pod \"calico-node-d78cr\" (UID: \"09c18757-e717-4683-aa08-ca2b10d389e6\") " pod="calico-system/calico-node-d78cr" Dec 12 17:41:45.563246 kubelet[3460]: I1212 17:41:45.562870 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/09c18757-e717-4683-aa08-ca2b10d389e6-cni-bin-dir\") pod \"calico-node-d78cr\" (UID: \"09c18757-e717-4683-aa08-ca2b10d389e6\") " pod="calico-system/calico-node-d78cr" Dec 12 17:41:45.563246 kubelet[3460]: I1212 17:41:45.562880 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/09c18757-e717-4683-aa08-ca2b10d389e6-cni-log-dir\") pod \"calico-node-d78cr\" (UID: \"09c18757-e717-4683-aa08-ca2b10d389e6\") " pod="calico-system/calico-node-d78cr" Dec 12 17:41:45.563246 kubelet[3460]: I1212 17:41:45.562901 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/09c18757-e717-4683-aa08-ca2b10d389e6-cni-net-dir\") pod \"calico-node-d78cr\" (UID: \"09c18757-e717-4683-aa08-ca2b10d389e6\") " pod="calico-system/calico-node-d78cr" Dec 12 17:41:45.563246 kubelet[3460]: I1212 17:41:45.562911 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09c18757-e717-4683-aa08-ca2b10d389e6-lib-modules\") pod \"calico-node-d78cr\" (UID: \"09c18757-e717-4683-aa08-ca2b10d389e6\") " pod="calico-system/calico-node-d78cr" Dec 12 17:41:45.563246 kubelet[3460]: I1212 17:41:45.562919 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/09c18757-e717-4683-aa08-ca2b10d389e6-policysync\") pod \"calico-node-d78cr\" (UID: \"09c18757-e717-4683-aa08-ca2b10d389e6\") " pod="calico-system/calico-node-d78cr" Dec 12 17:41:45.563321 kubelet[3460]: I1212 17:41:45.562927 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09c18757-e717-4683-aa08-ca2b10d389e6-tigera-ca-bundle\") pod \"calico-node-d78cr\" (UID: \"09c18757-e717-4683-aa08-ca2b10d389e6\") " pod="calico-system/calico-node-d78cr" Dec 12 17:41:45.563321 kubelet[3460]: I1212 17:41:45.562946 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/09c18757-e717-4683-aa08-ca2b10d389e6-node-certs\") pod \"calico-node-d78cr\" (UID: \"09c18757-e717-4683-aa08-ca2b10d389e6\") " pod="calico-system/calico-node-d78cr" Dec 12 17:41:45.643454 containerd[1929]: time="2025-12-12T17:41:45.643023559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59766dc6fc-l8zd5,Uid:e4b8f5e3-3f86-462e-9bac-bcb5ecc7be06,Namespace:calico-system,Attempt:0,}" Dec 12 17:41:45.664185 kubelet[3460]: E1212 17:41:45.664130 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:45.664185 kubelet[3460]: W1212 17:41:45.664148 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:45.664406 kubelet[3460]: E1212 17:41:45.664170 3460 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:45.664504 kubelet[3460]: E1212 17:41:45.664487 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:45.664504 kubelet[3460]: W1212 17:41:45.664502 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:45.664568 kubelet[3460]: E1212 17:41:45.664514 3460 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:45.664756 kubelet[3460]: E1212 17:41:45.664740 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:45.664870 kubelet[3460]: W1212 17:41:45.664850 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:45.664906 kubelet[3460]: E1212 17:41:45.664871 3460 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:45.665374 kubelet[3460]: E1212 17:41:45.665356 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:45.665550 kubelet[3460]: W1212 17:41:45.665371 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:45.665550 kubelet[3460]: E1212 17:41:45.665387 3460 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:45.665550 kubelet[3460]: E1212 17:41:45.665518 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:45.665550 kubelet[3460]: W1212 17:41:45.665525 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:45.665550 kubelet[3460]: E1212 17:41:45.665534 3460 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:45.665642 kubelet[3460]: E1212 17:41:45.665624 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:45.665642 kubelet[3460]: W1212 17:41:45.665629 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:45.665642 kubelet[3460]: E1212 17:41:45.665635 3460 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:45.682818 kubelet[3460]: E1212 17:41:45.682801 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:45.682818 kubelet[3460]: W1212 17:41:45.682814 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:45.682882 kubelet[3460]: E1212 17:41:45.682825 3460 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:45.696375 containerd[1929]: time="2025-12-12T17:41:45.696318680Z" level=info msg="connecting to shim 357b1fc3c6ab97a98683eccf40544cf7a703a7bdc503dd5f771713e5017ad318" address="unix:///run/containerd/s/248c8b4ce58490eaeb24c06c5c85290ff4355b29784ab6cc923e91d39dd26857" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:41:45.718487 systemd[1]: Started cri-containerd-357b1fc3c6ab97a98683eccf40544cf7a703a7bdc503dd5f771713e5017ad318.scope - libcontainer container 357b1fc3c6ab97a98683eccf40544cf7a703a7bdc503dd5f771713e5017ad318. Dec 12 17:41:45.734359 kubelet[3460]: E1212 17:41:45.734335 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbq92" podUID="af9257cb-fecf-4ff2-8249-41f13bb32168" Dec 12 17:41:45.750431 kubelet[3460]: E1212 17:41:45.749919 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:45.750431 kubelet[3460]: W1212 17:41:45.750305 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:45.750431 kubelet[3460]: E1212 17:41:45.750323 3460 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:45.751993 kubelet[3460]: E1212 17:41:45.751672 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:45.751993 kubelet[3460]: W1212 17:41:45.751781 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:45.752461 kubelet[3460]: E1212 17:41:45.751820 3460 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Dec 12 17:41:45.764496 kubelet[3460]: I1212 17:41:45.764400 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/af9257cb-fecf-4ff2-8249-41f13bb32168-kubelet-dir\") pod \"csi-node-driver-tbq92\" (UID: \"af9257cb-fecf-4ff2-8249-41f13bb32168\") " pod="calico-system/csi-node-driver-tbq92"
Dec 12 17:41:45.764877 kubelet[3460]: I1212 17:41:45.764833 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/af9257cb-fecf-4ff2-8249-41f13bb32168-socket-dir\") pod \"csi-node-driver-tbq92\" (UID: \"af9257cb-fecf-4ff2-8249-41f13bb32168\") " pod="calico-system/csi-node-driver-tbq92"
Dec 12 17:41:45.765551 kubelet[3460]: I1212 17:41:45.765521 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwcqq\" (UniqueName: \"kubernetes.io/projected/af9257cb-fecf-4ff2-8249-41f13bb32168-kube-api-access-vwcqq\") pod \"csi-node-driver-tbq92\" (UID: \"af9257cb-fecf-4ff2-8249-41f13bb32168\") " pod="calico-system/csi-node-driver-tbq92"
Dec 12 17:41:45.766295 kubelet[3460]: I1212 17:41:45.766116 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/af9257cb-fecf-4ff2-8249-41f13bb32168-registration-dir\") pod \"csi-node-driver-tbq92\" (UID: \"af9257cb-fecf-4ff2-8249-41f13bb32168\") " pod="calico-system/csi-node-driver-tbq92"
Dec 12 17:41:45.766755 kubelet[3460]: I1212 17:41:45.766677 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/af9257cb-fecf-4ff2-8249-41f13bb32168-varrun\") pod \"csi-node-driver-tbq92\" (UID: \"af9257cb-fecf-4ff2-8249-41f13bb32168\") " pod="calico-system/csi-node-driver-tbq92"
Dec 12 17:41:45.774700 containerd[1929]: time="2025-12-12T17:41:45.774667860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59766dc6fc-l8zd5,Uid:e4b8f5e3-3f86-462e-9bac-bcb5ecc7be06,Namespace:calico-system,Attempt:0,} returns sandbox id \"357b1fc3c6ab97a98683eccf40544cf7a703a7bdc503dd5f771713e5017ad318\""
Dec 12 17:41:45.776253 containerd[1929]: time="2025-12-12T17:41:45.776202704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Dec 12 17:41:45.839969 containerd[1929]: time="2025-12-12T17:41:45.839946149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d78cr,Uid:09c18757-e717-4683-aa08-ca2b10d389e6,Namespace:calico-system,Attempt:0,}"
Dec 12 17:41:45.880254 containerd[1929]: time="2025-12-12T17:41:45.879779355Z" level=info msg="connecting to shim 405da88ef617562499cdc0ef4f4105b1f06561de425459c79dc6ff23b53b5fc6" address="unix:///run/containerd/s/029162bbff9d73283ca60bbb32948c5f4a0a09a1ddf9c2ade17b3b126dbb0c38" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:41:45.904315 systemd[1]: Started cri-containerd-405da88ef617562499cdc0ef4f4105b1f06561de425459c79dc6ff23b53b5fc6.scope - libcontainer container 405da88ef617562499cdc0ef4f4105b1f06561de425459c79dc6ff23b53b5fc6.
Dec 12 17:41:45.947442 containerd[1929]: time="2025-12-12T17:41:45.947395931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d78cr,Uid:09c18757-e717-4683-aa08-ca2b10d389e6,Namespace:calico-system,Attempt:0,} returns sandbox id \"405da88ef617562499cdc0ef4f4105b1f06561de425459c79dc6ff23b53b5fc6\""
Dec 12 17:41:47.527731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount825739339.mount: Deactivated successfully.
Dec 12 17:41:47.852969 kubelet[3460]: E1212 17:41:47.852860 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbq92" podUID="af9257cb-fecf-4ff2-8249-41f13bb32168"
Dec 12 17:41:48.050391 containerd[1929]: time="2025-12-12T17:41:48.050341973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:41:48.054345 containerd[1929]: time="2025-12-12T17:41:48.054314762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Dec 12 17:41:48.057401 containerd[1929]: time="2025-12-12T17:41:48.057372769Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:41:48.062204 containerd[1929]: time="2025-12-12T17:41:48.062144209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:41:48.062356 containerd[1929]: time="2025-12-12T17:41:48.062330189Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.285943321s"
Dec 12 17:41:48.062356 containerd[1929]: time="2025-12-12T17:41:48.062356054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Dec 12 17:41:48.063931 containerd[1929]: time="2025-12-12T17:41:48.063904290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Dec 12 17:41:48.078508 containerd[1929]: time="2025-12-12T17:41:48.078432751Z" level=info msg="CreateContainer within sandbox \"357b1fc3c6ab97a98683eccf40544cf7a703a7bdc503dd5f771713e5017ad318\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 12 17:41:48.100949 containerd[1929]: time="2025-12-12T17:41:48.100341112Z" level=info msg="Container 4466b2d45208c65b2b7797480dc5d14af2a2d845ba0f81a194507a078573d9e1: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:41:48.120530 containerd[1929]: time="2025-12-12T17:41:48.120419783Z" level=info msg="CreateContainer within sandbox \"357b1fc3c6ab97a98683eccf40544cf7a703a7bdc503dd5f771713e5017ad318\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4466b2d45208c65b2b7797480dc5d14af2a2d845ba0f81a194507a078573d9e1\""
Dec 12 17:41:48.121731 containerd[1929]: time="2025-12-12T17:41:48.121678284Z" level=info msg="StartContainer for \"4466b2d45208c65b2b7797480dc5d14af2a2d845ba0f81a194507a078573d9e1\""
Dec 12 17:41:48.123853 containerd[1929]: time="2025-12-12T17:41:48.123828574Z" level=info msg="connecting to shim 4466b2d45208c65b2b7797480dc5d14af2a2d845ba0f81a194507a078573d9e1" address="unix:///run/containerd/s/248c8b4ce58490eaeb24c06c5c85290ff4355b29784ab6cc923e91d39dd26857" protocol=ttrpc version=3
Dec 12 17:41:48.145333 systemd[1]: Started cri-containerd-4466b2d45208c65b2b7797480dc5d14af2a2d845ba0f81a194507a078573d9e1.scope - libcontainer container 4466b2d45208c65b2b7797480dc5d14af2a2d845ba0f81a194507a078573d9e1.
Dec 12 17:41:48.181749 containerd[1929]: time="2025-12-12T17:41:48.181390675Z" level=info msg="StartContainer for \"4466b2d45208c65b2b7797480dc5d14af2a2d845ba0f81a194507a078573d9e1\" returns successfully"
Dec 12 17:41:48.501794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571901000.mount: Deactivated successfully.
Dec 12 17:41:48.957339 kubelet[3460]: I1212 17:41:48.957144 3460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-59766dc6fc-l8zd5" podStartSLOduration=1.669572456 podStartE2EDuration="3.957131911s" podCreationTimestamp="2025-12-12 17:41:45 +0000 UTC" firstStartedPulling="2025-12-12 17:41:45.775693196 +0000 UTC m=+20.011110835" lastFinishedPulling="2025-12-12 17:41:48.063252659 +0000 UTC m=+22.298670290" observedRunningTime="2025-12-12 17:41:48.956387054 +0000 UTC m=+23.191804757" watchObservedRunningTime="2025-12-12 17:41:48.957131911 +0000 UTC m=+23.192549558"
Error: unexpected end of JSON input" Dec 12 17:41:48.993420 kubelet[3460]: E1212 17:41:48.993402 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:48.993420 kubelet[3460]: W1212 17:41:48.993415 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:48.993482 kubelet[3460]: E1212 17:41:48.993423 3460 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:48.993644 kubelet[3460]: E1212 17:41:48.993552 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:48.993644 kubelet[3460]: W1212 17:41:48.993563 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:48.993644 kubelet[3460]: E1212 17:41:48.993569 3460 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:48.993778 kubelet[3460]: E1212 17:41:48.993766 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:48.993832 kubelet[3460]: W1212 17:41:48.993824 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:48.993887 kubelet[3460]: E1212 17:41:48.993875 3460 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:48.994158 kubelet[3460]: E1212 17:41:48.994078 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:48.994158 kubelet[3460]: W1212 17:41:48.994087 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:48.994158 kubelet[3460]: E1212 17:41:48.994096 3460 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:48.994333 kubelet[3460]: E1212 17:41:48.994322 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:48.994385 kubelet[3460]: W1212 17:41:48.994376 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:48.994441 kubelet[3460]: E1212 17:41:48.994422 3460 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:48.994660 kubelet[3460]: E1212 17:41:48.994650 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:48.994898 kubelet[3460]: W1212 17:41:48.994720 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:48.994898 kubelet[3460]: E1212 17:41:48.994732 3460 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:48.994968 kubelet[3460]: E1212 17:41:48.994910 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:48.994968 kubelet[3460]: W1212 17:41:48.994920 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:48.994968 kubelet[3460]: E1212 17:41:48.994928 3460 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:48.995069 kubelet[3460]: E1212 17:41:48.995053 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:48.995069 kubelet[3460]: W1212 17:41:48.995063 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:48.995069 kubelet[3460]: E1212 17:41:48.995069 3460 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:48.995205 kubelet[3460]: E1212 17:41:48.995171 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:48.995205 kubelet[3460]: W1212 17:41:48.995203 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:48.995247 kubelet[3460]: E1212 17:41:48.995210 3460 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:41:48.995474 kubelet[3460]: E1212 17:41:48.995458 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:41:48.995474 kubelet[3460]: W1212 17:41:48.995469 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:41:48.995521 kubelet[3460]: E1212 17:41:48.995476 3460 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:41:49.252895 containerd[1929]: time="2025-12-12T17:41:49.252855976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:49.255898 containerd[1929]: time="2025-12-12T17:41:49.255853292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Dec 12 17:41:49.259600 containerd[1929]: time="2025-12-12T17:41:49.259555945Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:49.263857 containerd[1929]: time="2025-12-12T17:41:49.263815218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:49.264348 containerd[1929]: time="2025-12-12T17:41:49.264070568Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.200139101s" Dec 12 17:41:49.264348 containerd[1929]: time="2025-12-12T17:41:49.264098824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 12 17:41:49.271905 containerd[1929]: time="2025-12-12T17:41:49.271884130Z" level=info msg="CreateContainer within sandbox \"405da88ef617562499cdc0ef4f4105b1f06561de425459c79dc6ff23b53b5fc6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 17:41:49.292903 containerd[1929]: time="2025-12-12T17:41:49.292871674Z" level=info msg="Container 9e5792bb778f2d35f38f7afc0da695ee74ac489e12d8a7f5b4d77451707be17c: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:41:49.311973 containerd[1929]: time="2025-12-12T17:41:49.311941989Z" level=info msg="CreateContainer within sandbox \"405da88ef617562499cdc0ef4f4105b1f06561de425459c79dc6ff23b53b5fc6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9e5792bb778f2d35f38f7afc0da695ee74ac489e12d8a7f5b4d77451707be17c\"" Dec 12 17:41:49.312300 containerd[1929]: time="2025-12-12T17:41:49.312282445Z" level=info msg="StartContainer for \"9e5792bb778f2d35f38f7afc0da695ee74ac489e12d8a7f5b4d77451707be17c\"" Dec 12 17:41:49.313467 containerd[1929]: time="2025-12-12T17:41:49.313434367Z" level=info msg="connecting to shim 9e5792bb778f2d35f38f7afc0da695ee74ac489e12d8a7f5b4d77451707be17c" address="unix:///run/containerd/s/029162bbff9d73283ca60bbb32948c5f4a0a09a1ddf9c2ade17b3b126dbb0c38" protocol=ttrpc version=3 Dec 12 17:41:49.333324 systemd[1]: Started cri-containerd-9e5792bb778f2d35f38f7afc0da695ee74ac489e12d8a7f5b4d77451707be17c.scope - libcontainer container 9e5792bb778f2d35f38f7afc0da695ee74ac489e12d8a7f5b4d77451707be17c. 
Dec 12 17:41:49.390413 containerd[1929]: time="2025-12-12T17:41:49.390376724Z" level=info msg="StartContainer for \"9e5792bb778f2d35f38f7afc0da695ee74ac489e12d8a7f5b4d77451707be17c\" returns successfully" Dec 12 17:41:49.398486 systemd[1]: cri-containerd-9e5792bb778f2d35f38f7afc0da695ee74ac489e12d8a7f5b4d77451707be17c.scope: Deactivated successfully. Dec 12 17:41:49.400531 containerd[1929]: time="2025-12-12T17:41:49.400431154Z" level=info msg="received container exit event container_id:\"9e5792bb778f2d35f38f7afc0da695ee74ac489e12d8a7f5b4d77451707be17c\" id:\"9e5792bb778f2d35f38f7afc0da695ee74ac489e12d8a7f5b4d77451707be17c\" pid:4170 exited_at:{seconds:1765561309 nanos:400139211}" Dec 12 17:41:49.417024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e5792bb778f2d35f38f7afc0da695ee74ac489e12d8a7f5b4d77451707be17c-rootfs.mount: Deactivated successfully. Dec 12 17:41:49.853300 kubelet[3460]: E1212 17:41:49.853151 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbq92" podUID="af9257cb-fecf-4ff2-8249-41f13bb32168" Dec 12 17:41:49.946703 kubelet[3460]: I1212 17:41:49.946674 3460 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:41:50.950924 containerd[1929]: time="2025-12-12T17:41:50.950895171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 17:41:51.852986 kubelet[3460]: E1212 17:41:51.852940 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbq92" podUID="af9257cb-fecf-4ff2-8249-41f13bb32168" Dec 12 17:41:53.265098 containerd[1929]: time="2025-12-12T17:41:53.265051531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:53.267973 containerd[1929]: time="2025-12-12T17:41:53.267846843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Dec 12 17:41:53.272405 containerd[1929]: time="2025-12-12T17:41:53.272232447Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:53.276260 containerd[1929]: time="2025-12-12T17:41:53.276231098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:41:53.276834 containerd[1929]: time="2025-12-12T17:41:53.276810871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.325534779s" Dec 12 17:41:53.276834 containerd[1929]: time="2025-12-12T17:41:53.276833328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" 
Dec 12 17:41:53.284146 containerd[1929]: time="2025-12-12T17:41:53.284112174Z" level=info msg="CreateContainer within sandbox \"405da88ef617562499cdc0ef4f4105b1f06561de425459c79dc6ff23b53b5fc6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 17:41:53.302451 containerd[1929]: time="2025-12-12T17:41:53.302427072Z" level=info msg="Container 598e6d02ddaf0c6c2255e9da9d8503fe6a32cd85a673c7375a72a8a97e11906a: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:41:53.322018 containerd[1929]: time="2025-12-12T17:41:53.321926613Z" level=info msg="CreateContainer within sandbox \"405da88ef617562499cdc0ef4f4105b1f06561de425459c79dc6ff23b53b5fc6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"598e6d02ddaf0c6c2255e9da9d8503fe6a32cd85a673c7375a72a8a97e11906a\"" Dec 12 17:41:53.322957 containerd[1929]: time="2025-12-12T17:41:53.322927924Z" level=info msg="StartContainer for \"598e6d02ddaf0c6c2255e9da9d8503fe6a32cd85a673c7375a72a8a97e11906a\"" Dec 12 17:41:53.325075 containerd[1929]: time="2025-12-12T17:41:53.325048845Z" level=info msg="connecting to shim 598e6d02ddaf0c6c2255e9da9d8503fe6a32cd85a673c7375a72a8a97e11906a" address="unix:///run/containerd/s/029162bbff9d73283ca60bbb32948c5f4a0a09a1ddf9c2ade17b3b126dbb0c38" protocol=ttrpc version=3 Dec 12 17:41:53.344292 systemd[1]: Started cri-containerd-598e6d02ddaf0c6c2255e9da9d8503fe6a32cd85a673c7375a72a8a97e11906a.scope - libcontainer container 598e6d02ddaf0c6c2255e9da9d8503fe6a32cd85a673c7375a72a8a97e11906a. Dec 12 17:41:53.393123 containerd[1929]: time="2025-12-12T17:41:53.393079990Z" level=info msg="StartContainer for \"598e6d02ddaf0c6c2255e9da9d8503fe6a32cd85a673c7375a72a8a97e11906a\" returns successfully" Dec 12 17:41:53.854050 kubelet[3460]: E1212 17:41:53.853751 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbq92" podUID="af9257cb-fecf-4ff2-8249-41f13bb32168" Dec 12 17:41:54.497828 containerd[1929]: time="2025-12-12T17:41:54.497777654Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:41:54.499825 systemd[1]: cri-containerd-598e6d02ddaf0c6c2255e9da9d8503fe6a32cd85a673c7375a72a8a97e11906a.scope: Deactivated successfully. Dec 12 17:41:54.500285 systemd[1]: cri-containerd-598e6d02ddaf0c6c2255e9da9d8503fe6a32cd85a673c7375a72a8a97e11906a.scope: Consumed 311ms CPU time, 189.7M memory peak, 165.9M written to disk. Dec 12 17:41:54.504189 kubelet[3460]: I1212 17:41:54.504043 3460 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 17:41:54.505023 containerd[1929]: time="2025-12-12T17:41:54.504812631Z" level=info msg="received container exit event container_id:\"598e6d02ddaf0c6c2255e9da9d8503fe6a32cd85a673c7375a72a8a97e11906a\" id:\"598e6d02ddaf0c6c2255e9da9d8503fe6a32cd85a673c7375a72a8a97e11906a\" pid:4229 exited_at:{seconds:1765561314 nanos:504437958}" Dec 12 17:41:54.524862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-598e6d02ddaf0c6c2255e9da9d8503fe6a32cd85a673c7375a72a8a97e11906a-rootfs.mount: Deactivated successfully. 
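The "failed to reload cni configuration after receiving fs change event" message above shows the mechanism: containerd watches /etc/cni/net.d and retries loading the network config on every write, and NetworkReady stays false until a complete config file appears (at this point the install-cni container had only written calico-kubeconfig, not the conflist). The Go sketch below is one illustrative way to express that watch-and-reload loop, assuming the github.com/fsnotify/fsnotify package; it is not containerd's actual implementation.

```go
// cniwatch.go - illustrative sketch (not containerd's code) of reloading CNI
// configuration whenever /etc/cni/net.d changes, mirroring the
// "fs change event(WRITE ...)" log line above.
package main

import (
	"fmt"
	"log"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

const cniConfDir = "/etc/cni/net.d"

// loadConf stands in for the real CNI config loader: it only checks that at
// least one .conf/.conflist file exists, which is the condition failing above.
func loadConf() error {
	for _, pat := range []string{"*.conf", "*.conflist"} {
		matches, err := filepath.Glob(filepath.Join(cniConfDir, pat))
		if err != nil {
			return err
		}
		if len(matches) > 0 {
			return nil
		}
	}
	return fmt.Errorf("no network config found in %s: cni plugin not initialized", cniConfDir)
}

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()
	if err := watcher.Add(cniConfDir); err != nil {
		log.Fatal(err)
	}
	for event := range watcher.Events {
		// Any write or create in the directory triggers a reload attempt;
		// failures are logged and retried on the next event, as in the log.
		if event.Op&(fsnotify.Write|fsnotify.Create) != 0 {
			if err := loadConf(); err != nil {
				log.Printf("failed to reload cni configuration after %s: %v", event, err)
			} else {
				log.Printf("cni configuration reloaded after %s", event)
			}
		}
	}
}
```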
Dec 12 17:41:55.253840 systemd[1]: Created slice kubepods-burstable-podb5eb44f5_3e6b_434c_97bb_c1f43d5484ca.slice - libcontainer container kubepods-burstable-podb5eb44f5_3e6b_434c_97bb_c1f43d5484ca.slice. Dec 12 17:41:55.334432 kubelet[3460]: I1212 17:41:55.334378 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jqhn\" (UniqueName: \"kubernetes.io/projected/b5eb44f5-3e6b-434c-97bb-c1f43d5484ca-kube-api-access-6jqhn\") pod \"coredns-674b8bbfcf-596m2\" (UID: \"b5eb44f5-3e6b-434c-97bb-c1f43d5484ca\") " pod="kube-system/coredns-674b8bbfcf-596m2" Dec 12 17:41:55.334734 kubelet[3460]: I1212 17:41:55.334449 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5eb44f5-3e6b-434c-97bb-c1f43d5484ca-config-volume\") pod \"coredns-674b8bbfcf-596m2\" (UID: \"b5eb44f5-3e6b-434c-97bb-c1f43d5484ca\") " pod="kube-system/coredns-674b8bbfcf-596m2" Dec 12 17:41:55.361304 systemd[1]: Created slice kubepods-besteffort-pod174ab840_5c2d_412d_ae9d_b5c5ec64e11d.slice - libcontainer container kubepods-besteffort-pod174ab840_5c2d_412d_ae9d_b5c5ec64e11d.slice. Dec 12 17:41:55.369431 systemd[1]: Created slice kubepods-burstable-pod98333ef4_de58_4330_a351_f7e736ae9923.slice - libcontainer container kubepods-burstable-pod98333ef4_de58_4330_a351_f7e736ae9923.slice. Dec 12 17:41:55.378286 systemd[1]: Created slice kubepods-besteffort-podaf9257cb_fecf_4ff2_8249_41f13bb32168.slice - libcontainer container kubepods-besteffort-podaf9257cb_fecf_4ff2_8249_41f13bb32168.slice. Dec 12 17:41:55.381532 containerd[1929]: time="2025-12-12T17:41:55.381381070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbq92,Uid:af9257cb-fecf-4ff2-8249-41f13bb32168,Namespace:calico-system,Attempt:0,}" Dec 12 17:41:55.385702 systemd[1]: Created slice kubepods-besteffort-poddb29c3d0_10d3_433a_ab37_84eaf363db85.slice - libcontainer container kubepods-besteffort-poddb29c3d0_10d3_433a_ab37_84eaf363db85.slice. Dec 12 17:41:55.400081 systemd[1]: Created slice kubepods-besteffort-podca5ba48c_4a22_4538_aeda_0de712e65e58.slice - libcontainer container kubepods-besteffort-podca5ba48c_4a22_4538_aeda_0de712e65e58.slice. Dec 12 17:41:55.408297 systemd[1]: Created slice kubepods-besteffort-pod87bb8c15_a7ba_4def_b41d_8b2220421e40.slice - libcontainer container kubepods-besteffort-pod87bb8c15_a7ba_4def_b41d_8b2220421e40.slice. Dec 12 17:41:55.416707 systemd[1]: Created slice kubepods-besteffort-poda4012a8d_5664_448e_9dfd_51493bbdec98.slice - libcontainer container kubepods-besteffort-poda4012a8d_5664_448e_9dfd_51493bbdec98.slice. 
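The kubepods slice names in the entries above encode the pod's QoS class and UID with systemd-safe escaping: the UID's dashes become underscores, so pod b5eb44f5-3e6b-434c-97bb-c1f43d5484ca lands in kubepods-burstable-podb5eb44f5_3e6b_434c_97bb_c1f43d5484ca.slice. A small sketch of that naming rule follows; it is a simplification written to decode the lines above, not kubelet's exact cgroup-name-to-systemd helper.

```go
// podslice.go - reconstructs the slice names visible in the log above.
// Simplified illustration of kubelet's systemd cgroup naming.
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds e.g. "kubepods-burstable-pod<uid>.slice", where the
// pod UID's dashes are replaced by underscores to stay systemd-safe.
func podSliceName(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_")
	if qos == "" { // Guaranteed pods hang directly off kubepods.slice.
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	fmt.Println(podSliceName("burstable", "b5eb44f5-3e6b-434c-97bb-c1f43d5484ca"))
	fmt.Println(podSliceName("besteffort", "174ab840-5c2d-412d-ae9d-b5c5ec64e11d"))
}
```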
Dec 12 17:41:55.436053 kubelet[3460]: I1212 17:41:55.435133 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlcwl\" (UniqueName: \"kubernetes.io/projected/db29c3d0-10d3-433a-ab37-84eaf363db85-kube-api-access-tlcwl\") pod \"whisker-849974f7d8-js9td\" (UID: \"db29c3d0-10d3-433a-ab37-84eaf363db85\") " pod="calico-system/whisker-849974f7d8-js9td" Dec 12 17:41:55.436053 kubelet[3460]: I1212 17:41:55.435829 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltq49\" (UniqueName: \"kubernetes.io/projected/174ab840-5c2d-412d-ae9d-b5c5ec64e11d-kube-api-access-ltq49\") pod \"calico-kube-controllers-f5b48977b-ckrpk\" (UID: \"174ab840-5c2d-412d-ae9d-b5c5ec64e11d\") " pod="calico-system/calico-kube-controllers-f5b48977b-ckrpk" Dec 12 17:41:55.436053 kubelet[3460]: I1212 17:41:55.435853 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db29c3d0-10d3-433a-ab37-84eaf363db85-whisker-ca-bundle\") pod \"whisker-849974f7d8-js9td\" (UID: \"db29c3d0-10d3-433a-ab37-84eaf363db85\") " pod="calico-system/whisker-849974f7d8-js9td" Dec 12 17:41:55.436053 kubelet[3460]: I1212 17:41:55.435864 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/87bb8c15-a7ba-4def-b41d-8b2220421e40-goldmane-key-pair\") pod \"goldmane-666569f655-cpr2l\" (UID: \"87bb8c15-a7ba-4def-b41d-8b2220421e40\") " pod="calico-system/goldmane-666569f655-cpr2l" Dec 12 17:41:55.436053 kubelet[3460]: I1212 17:41:55.435887 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a4012a8d-5664-448e-9dfd-51493bbdec98-calico-apiserver-certs\") pod \"calico-apiserver-5c586fd54-8vxtr\" (UID: \"a4012a8d-5664-448e-9dfd-51493bbdec98\") " pod="calico-apiserver/calico-apiserver-5c586fd54-8vxtr" Dec 12 17:41:55.436573 kubelet[3460]: I1212 17:41:55.435904 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/db29c3d0-10d3-433a-ab37-84eaf363db85-whisker-backend-key-pair\") pod \"whisker-849974f7d8-js9td\" (UID: \"db29c3d0-10d3-433a-ab37-84eaf363db85\") " pod="calico-system/whisker-849974f7d8-js9td" Dec 12 17:41:55.436573 kubelet[3460]: I1212 17:41:55.435917 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvllc\" (UniqueName: \"kubernetes.io/projected/a4012a8d-5664-448e-9dfd-51493bbdec98-kube-api-access-nvllc\") pod \"calico-apiserver-5c586fd54-8vxtr\" (UID: \"a4012a8d-5664-448e-9dfd-51493bbdec98\") " pod="calico-apiserver/calico-apiserver-5c586fd54-8vxtr" Dec 12 17:41:55.436573 kubelet[3460]: I1212 17:41:55.435927 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87bb8c15-a7ba-4def-b41d-8b2220421e40-config\") pod \"goldmane-666569f655-cpr2l\" (UID: \"87bb8c15-a7ba-4def-b41d-8b2220421e40\") " pod="calico-system/goldmane-666569f655-cpr2l" Dec 12 17:41:55.436573 kubelet[3460]: I1212 17:41:55.435944 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/174ab840-5c2d-412d-ae9d-b5c5ec64e11d-tigera-ca-bundle\") pod \"calico-kube-controllers-f5b48977b-ckrpk\" (UID: \"174ab840-5c2d-412d-ae9d-b5c5ec64e11d\") " pod="calico-system/calico-kube-controllers-f5b48977b-ckrpk" Dec 12 17:41:55.436573 kubelet[3460]: I1212 17:41:55.435953 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hvnl\" (UniqueName: \"kubernetes.io/projected/98333ef4-de58-4330-a351-f7e736ae9923-kube-api-access-8hvnl\") pod \"coredns-674b8bbfcf-nvlch\" (UID: \"98333ef4-de58-4330-a351-f7e736ae9923\") " pod="kube-system/coredns-674b8bbfcf-nvlch" Dec 12 17:41:55.436658 kubelet[3460]: I1212 17:41:55.435974 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ca5ba48c-4a22-4538-aeda-0de712e65e58-calico-apiserver-certs\") pod \"calico-apiserver-5c586fd54-zk6dm\" (UID: \"ca5ba48c-4a22-4538-aeda-0de712e65e58\") " pod="calico-apiserver/calico-apiserver-5c586fd54-zk6dm" Dec 12 17:41:55.436658 kubelet[3460]: I1212 17:41:55.435993 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgzcs\" (UniqueName: \"kubernetes.io/projected/ca5ba48c-4a22-4538-aeda-0de712e65e58-kube-api-access-wgzcs\") pod \"calico-apiserver-5c586fd54-zk6dm\" (UID: \"ca5ba48c-4a22-4538-aeda-0de712e65e58\") " pod="calico-apiserver/calico-apiserver-5c586fd54-zk6dm" Dec 12 17:41:55.436658 kubelet[3460]: I1212 17:41:55.436003 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87bb8c15-a7ba-4def-b41d-8b2220421e40-goldmane-ca-bundle\") pod \"goldmane-666569f655-cpr2l\" (UID: \"87bb8c15-a7ba-4def-b41d-8b2220421e40\") " pod="calico-system/goldmane-666569f655-cpr2l" Dec 12 17:41:55.436658 kubelet[3460]: I1212 17:41:55.436015 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9gr8\" (UniqueName: \"kubernetes.io/projected/87bb8c15-a7ba-4def-b41d-8b2220421e40-kube-api-access-c9gr8\") pod \"goldmane-666569f655-cpr2l\" (UID: \"87bb8c15-a7ba-4def-b41d-8b2220421e40\") " pod="calico-system/goldmane-666569f655-cpr2l" Dec 12 17:41:55.436658 kubelet[3460]: I1212 17:41:55.436024 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98333ef4-de58-4330-a351-f7e736ae9923-config-volume\") pod \"coredns-674b8bbfcf-nvlch\" (UID: \"98333ef4-de58-4330-a351-f7e736ae9923\") " pod="kube-system/coredns-674b8bbfcf-nvlch" Dec 12 17:41:55.458233 containerd[1929]: time="2025-12-12T17:41:55.458196952Z" level=error msg="Failed to destroy network for sandbox \"8636b8bc1c8fe279882670d957ad3f3aff2e9bd6acaee004e3ac7ca32d74f77e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.459381 systemd[1]: run-netns-cni\x2dd5deb10b\x2db055\x2d9408\x2d27e0\x2d77260d0aaf3c.mount: Deactivated successfully. 
Dec 12 17:41:55.464471 containerd[1929]: time="2025-12-12T17:41:55.464421830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbq92,Uid:af9257cb-fecf-4ff2-8249-41f13bb32168,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8636b8bc1c8fe279882670d957ad3f3aff2e9bd6acaee004e3ac7ca32d74f77e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.464643 kubelet[3460]: E1212 17:41:55.464611 3460 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8636b8bc1c8fe279882670d957ad3f3aff2e9bd6acaee004e3ac7ca32d74f77e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.464681 kubelet[3460]: E1212 17:41:55.464662 3460 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8636b8bc1c8fe279882670d957ad3f3aff2e9bd6acaee004e3ac7ca32d74f77e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tbq92" Dec 12 17:41:55.464706 kubelet[3460]: E1212 17:41:55.464677 3460 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8636b8bc1c8fe279882670d957ad3f3aff2e9bd6acaee004e3ac7ca32d74f77e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tbq92" Dec 12 17:41:55.464732 kubelet[3460]: E1212 17:41:55.464714 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tbq92_calico-system(af9257cb-fecf-4ff2-8249-41f13bb32168)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tbq92_calico-system(af9257cb-fecf-4ff2-8249-41f13bb32168)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8636b8bc1c8fe279882670d957ad3f3aff2e9bd6acaee004e3ac7ca32d74f77e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tbq92" podUID="af9257cb-fecf-4ff2-8249-41f13bb32168" Dec 12 17:41:55.563324 containerd[1929]: time="2025-12-12T17:41:55.560559217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-596m2,Uid:b5eb44f5-3e6b-434c-97bb-c1f43d5484ca,Namespace:kube-system,Attempt:0,}" Dec 12 17:41:55.603391 containerd[1929]: time="2025-12-12T17:41:55.603355482Z" level=error msg="Failed to destroy network for sandbox \"768c7238b32967f752606069f93440a6d8e6433f89c8a4cf8e9d5e8cf68478aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.606541 containerd[1929]: time="2025-12-12T17:41:55.606504242Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-596m2,Uid:b5eb44f5-3e6b-434c-97bb-c1f43d5484ca,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"768c7238b32967f752606069f93440a6d8e6433f89c8a4cf8e9d5e8cf68478aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.606786 kubelet[3460]: E1212 17:41:55.606738 3460 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"768c7238b32967f752606069f93440a6d8e6433f89c8a4cf8e9d5e8cf68478aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.606842 kubelet[3460]: E1212 17:41:55.606801 3460 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"768c7238b32967f752606069f93440a6d8e6433f89c8a4cf8e9d5e8cf68478aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-596m2" Dec 12 17:41:55.606842 kubelet[3460]: E1212 17:41:55.606817 3460 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"768c7238b32967f752606069f93440a6d8e6433f89c8a4cf8e9d5e8cf68478aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-596m2" Dec 12 17:41:55.606879 kubelet[3460]: E1212 17:41:55.606863 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-596m2_kube-system(b5eb44f5-3e6b-434c-97bb-c1f43d5484ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-596m2_kube-system(b5eb44f5-3e6b-434c-97bb-c1f43d5484ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"768c7238b32967f752606069f93440a6d8e6433f89c8a4cf8e9d5e8cf68478aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-596m2" podUID="b5eb44f5-3e6b-434c-97bb-c1f43d5484ca" Dec 12 17:41:55.665065 containerd[1929]: time="2025-12-12T17:41:55.665026186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f5b48977b-ckrpk,Uid:174ab840-5c2d-412d-ae9d-b5c5ec64e11d,Namespace:calico-system,Attempt:0,}" Dec 12 17:41:55.680402 containerd[1929]: time="2025-12-12T17:41:55.680378761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nvlch,Uid:98333ef4-de58-4330-a351-f7e736ae9923,Namespace:kube-system,Attempt:0,}" Dec 12 17:41:55.697580 containerd[1929]: time="2025-12-12T17:41:55.697383029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-849974f7d8-js9td,Uid:db29c3d0-10d3-433a-ab37-84eaf363db85,Namespace:calico-system,Attempt:0,}" Dec 12 17:41:55.712505 containerd[1929]: time="2025-12-12T17:41:55.712475742Z" level=error msg="Failed to destroy network for sandbox 
\"4114348ecabe9a649654d49bd74361ff658bbbf026644def71d655e3295086cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.716807 containerd[1929]: time="2025-12-12T17:41:55.716771472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f5b48977b-ckrpk,Uid:174ab840-5c2d-412d-ae9d-b5c5ec64e11d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4114348ecabe9a649654d49bd74361ff658bbbf026644def71d655e3295086cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.717328 containerd[1929]: time="2025-12-12T17:41:55.716990869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cpr2l,Uid:87bb8c15-a7ba-4def-b41d-8b2220421e40,Namespace:calico-system,Attempt:0,}" Dec 12 17:41:55.717364 kubelet[3460]: E1212 17:41:55.716921 3460 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4114348ecabe9a649654d49bd74361ff658bbbf026644def71d655e3295086cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.717364 kubelet[3460]: E1212 17:41:55.716962 3460 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4114348ecabe9a649654d49bd74361ff658bbbf026644def71d655e3295086cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f5b48977b-ckrpk" Dec 12 17:41:55.717364 kubelet[3460]: E1212 17:41:55.716976 3460 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4114348ecabe9a649654d49bd74361ff658bbbf026644def71d655e3295086cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f5b48977b-ckrpk" Dec 12 17:41:55.717431 kubelet[3460]: E1212 17:41:55.717008 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f5b48977b-ckrpk_calico-system(174ab840-5c2d-412d-ae9d-b5c5ec64e11d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f5b48977b-ckrpk_calico-system(174ab840-5c2d-412d-ae9d-b5c5ec64e11d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4114348ecabe9a649654d49bd74361ff658bbbf026644def71d655e3295086cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f5b48977b-ckrpk" podUID="174ab840-5c2d-412d-ae9d-b5c5ec64e11d" Dec 12 17:41:55.720473 containerd[1929]: time="2025-12-12T17:41:55.720241639Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5c586fd54-8vxtr,Uid:a4012a8d-5664-448e-9dfd-51493bbdec98,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:41:55.729071 containerd[1929]: time="2025-12-12T17:41:55.729039424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c586fd54-zk6dm,Uid:ca5ba48c-4a22-4538-aeda-0de712e65e58,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:41:55.755337 containerd[1929]: time="2025-12-12T17:41:55.755258046Z" level=error msg="Failed to destroy network for sandbox \"3ece30eea961448f537119e20861c590146790a607447c6e452845f50a53aa4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.758335 containerd[1929]: time="2025-12-12T17:41:55.758312132Z" level=error msg="Failed to destroy network for sandbox \"c04b1c202f351f6c92b7f040c20ab62405fa50cd13b6cbabc764100cf25ecbb9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.767533 containerd[1929]: time="2025-12-12T17:41:55.767462829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nvlch,Uid:98333ef4-de58-4330-a351-f7e736ae9923,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ece30eea961448f537119e20861c590146790a607447c6e452845f50a53aa4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.767846 kubelet[3460]: E1212 17:41:55.767710 3460 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ece30eea961448f537119e20861c590146790a607447c6e452845f50a53aa4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.767846 kubelet[3460]: E1212 17:41:55.767794 3460 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ece30eea961448f537119e20861c590146790a607447c6e452845f50a53aa4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nvlch" Dec 12 17:41:55.768314 kubelet[3460]: E1212 17:41:55.767813 3460 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ece30eea961448f537119e20861c590146790a607447c6e452845f50a53aa4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nvlch" Dec 12 17:41:55.768314 kubelet[3460]: E1212 17:41:55.768042 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nvlch_kube-system(98333ef4-de58-4330-a351-f7e736ae9923)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-nvlch_kube-system(98333ef4-de58-4330-a351-f7e736ae9923)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ece30eea961448f537119e20861c590146790a607447c6e452845f50a53aa4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nvlch" podUID="98333ef4-de58-4330-a351-f7e736ae9923" Dec 12 17:41:55.771110 containerd[1929]: time="2025-12-12T17:41:55.771082304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-849974f7d8-js9td,Uid:db29c3d0-10d3-433a-ab37-84eaf363db85,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c04b1c202f351f6c92b7f040c20ab62405fa50cd13b6cbabc764100cf25ecbb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.771376 kubelet[3460]: E1212 17:41:55.771327 3460 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c04b1c202f351f6c92b7f040c20ab62405fa50cd13b6cbabc764100cf25ecbb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.771567 kubelet[3460]: E1212 17:41:55.771475 3460 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c04b1c202f351f6c92b7f040c20ab62405fa50cd13b6cbabc764100cf25ecbb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-849974f7d8-js9td" Dec 12 17:41:55.771567 kubelet[3460]: E1212 17:41:55.771492 3460 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c04b1c202f351f6c92b7f040c20ab62405fa50cd13b6cbabc764100cf25ecbb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-849974f7d8-js9td" Dec 12 17:41:55.771786 kubelet[3460]: E1212 17:41:55.771628 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-849974f7d8-js9td_calico-system(db29c3d0-10d3-433a-ab37-84eaf363db85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-849974f7d8-js9td_calico-system(db29c3d0-10d3-433a-ab37-84eaf363db85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c04b1c202f351f6c92b7f040c20ab62405fa50cd13b6cbabc764100cf25ecbb9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-849974f7d8-js9td" podUID="db29c3d0-10d3-433a-ab37-84eaf363db85" Dec 12 17:41:55.771954 containerd[1929]: time="2025-12-12T17:41:55.771934291Z" level=error msg="Failed to destroy network for sandbox \"7dd28c1201d5d0a83a2dd591bb79a790fdc05c6dc807b050b7e7c558ced6f653\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.776313 containerd[1929]: time="2025-12-12T17:41:55.776285159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cpr2l,Uid:87bb8c15-a7ba-4def-b41d-8b2220421e40,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dd28c1201d5d0a83a2dd591bb79a790fdc05c6dc807b050b7e7c558ced6f653\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.776968 kubelet[3460]: E1212 17:41:55.776684 3460 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dd28c1201d5d0a83a2dd591bb79a790fdc05c6dc807b050b7e7c558ced6f653\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.776968 kubelet[3460]: E1212 17:41:55.776915 3460 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dd28c1201d5d0a83a2dd591bb79a790fdc05c6dc807b050b7e7c558ced6f653\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-cpr2l" Dec 12 17:41:55.776968 kubelet[3460]: E1212 17:41:55.776943 3460 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dd28c1201d5d0a83a2dd591bb79a790fdc05c6dc807b050b7e7c558ced6f653\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-cpr2l" Dec 12 17:41:55.777146 kubelet[3460]: E1212 17:41:55.777116 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-cpr2l_calico-system(87bb8c15-a7ba-4def-b41d-8b2220421e40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-cpr2l_calico-system(87bb8c15-a7ba-4def-b41d-8b2220421e40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7dd28c1201d5d0a83a2dd591bb79a790fdc05c6dc807b050b7e7c558ced6f653\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-cpr2l" podUID="87bb8c15-a7ba-4def-b41d-8b2220421e40" Dec 12 17:41:55.809342 containerd[1929]: time="2025-12-12T17:41:55.809303208Z" level=error msg="Failed to destroy network for sandbox \"629da7665359742a14eea87daab532002d329886d9014c61010a551e8d3c69d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.811285 containerd[1929]: time="2025-12-12T17:41:55.811243957Z" level=error msg="Failed to destroy network for sandbox 
\"1108d4b2020e6ba5a0425161735488da4c62666564d8bc8867c0aaf793cc31da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.813559 containerd[1929]: time="2025-12-12T17:41:55.813436119Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c586fd54-8vxtr,Uid:a4012a8d-5664-448e-9dfd-51493bbdec98,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"629da7665359742a14eea87daab532002d329886d9014c61010a551e8d3c69d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.813625 kubelet[3460]: E1212 17:41:55.813587 3460 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"629da7665359742a14eea87daab532002d329886d9014c61010a551e8d3c69d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.813625 kubelet[3460]: E1212 17:41:55.813620 3460 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"629da7665359742a14eea87daab532002d329886d9014c61010a551e8d3c69d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c586fd54-8vxtr" Dec 12 17:41:55.813670 kubelet[3460]: E1212 17:41:55.813632 3460 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"629da7665359742a14eea87daab532002d329886d9014c61010a551e8d3c69d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c586fd54-8vxtr" Dec 12 17:41:55.814270 kubelet[3460]: E1212 17:41:55.813701 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c586fd54-8vxtr_calico-apiserver(a4012a8d-5664-448e-9dfd-51493bbdec98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c586fd54-8vxtr_calico-apiserver(a4012a8d-5664-448e-9dfd-51493bbdec98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"629da7665359742a14eea87daab532002d329886d9014c61010a551e8d3c69d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c586fd54-8vxtr" podUID="a4012a8d-5664-448e-9dfd-51493bbdec98" Dec 12 17:41:55.816639 containerd[1929]: time="2025-12-12T17:41:55.816608047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c586fd54-zk6dm,Uid:ca5ba48c-4a22-4538-aeda-0de712e65e58,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1108d4b2020e6ba5a0425161735488da4c62666564d8bc8867c0aaf793cc31da\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.816923 kubelet[3460]: E1212 17:41:55.816744 3460 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1108d4b2020e6ba5a0425161735488da4c62666564d8bc8867c0aaf793cc31da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:41:55.816923 kubelet[3460]: E1212 17:41:55.816893 3460 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1108d4b2020e6ba5a0425161735488da4c62666564d8bc8867c0aaf793cc31da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c586fd54-zk6dm" Dec 12 17:41:55.817418 kubelet[3460]: E1212 17:41:55.816905 3460 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1108d4b2020e6ba5a0425161735488da4c62666564d8bc8867c0aaf793cc31da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c586fd54-zk6dm" Dec 12 17:41:55.817418 kubelet[3460]: E1212 17:41:55.817320 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c586fd54-zk6dm_calico-apiserver(ca5ba48c-4a22-4538-aeda-0de712e65e58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c586fd54-zk6dm_calico-apiserver(ca5ba48c-4a22-4538-aeda-0de712e65e58)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1108d4b2020e6ba5a0425161735488da4c62666564d8bc8867c0aaf793cc31da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c586fd54-zk6dm" podUID="ca5ba48c-4a22-4538-aeda-0de712e65e58" Dec 12 17:41:55.963368 containerd[1929]: time="2025-12-12T17:41:55.963341734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 17:41:56.541988 systemd[1]: run-netns-cni\x2d3b3d7a96\x2d2a2a\x2d5b12\x2dcc33\x2db46d0b3ff5b3.mount: Deactivated successfully. Dec 12 17:42:00.311100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1298523191.mount: Deactivated successfully. 
Dec 12 17:42:00.675028 containerd[1929]: time="2025-12-12T17:42:00.674856834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:00.682924 containerd[1929]: time="2025-12-12T17:42:00.682888436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Dec 12 17:42:00.795954 containerd[1929]: time="2025-12-12T17:42:00.795875788Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:00.853170 containerd[1929]: time="2025-12-12T17:42:00.853056145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:42:00.854201 containerd[1929]: time="2025-12-12T17:42:00.853820274Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.890265184s" Dec 12 17:42:00.854201 containerd[1929]: time="2025-12-12T17:42:00.853847859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 12 17:42:00.873210 containerd[1929]: time="2025-12-12T17:42:00.872663687Z" level=info msg="CreateContainer within sandbox \"405da88ef617562499cdc0ef4f4105b1f06561de425459c79dc6ff23b53b5fc6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 17:42:00.897990 containerd[1929]: time="2025-12-12T17:42:00.897967497Z" level=info msg="Container 76bdfc43155269cd5830dcc5c321bf9e12d98daa52e8aaeee8df48d87f732734: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:42:00.919900 containerd[1929]: time="2025-12-12T17:42:00.919869636Z" level=info msg="CreateContainer within sandbox \"405da88ef617562499cdc0ef4f4105b1f06561de425459c79dc6ff23b53b5fc6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"76bdfc43155269cd5830dcc5c321bf9e12d98daa52e8aaeee8df48d87f732734\"" Dec 12 17:42:00.920334 containerd[1929]: time="2025-12-12T17:42:00.920312838Z" level=info msg="StartContainer for \"76bdfc43155269cd5830dcc5c321bf9e12d98daa52e8aaeee8df48d87f732734\"" Dec 12 17:42:00.921486 containerd[1929]: time="2025-12-12T17:42:00.921460977Z" level=info msg="connecting to shim 76bdfc43155269cd5830dcc5c321bf9e12d98daa52e8aaeee8df48d87f732734" address="unix:///run/containerd/s/029162bbff9d73283ca60bbb32948c5f4a0a09a1ddf9c2ade17b3b126dbb0c38" protocol=ttrpc version=3 Dec 12 17:42:00.942283 systemd[1]: Started cri-containerd-76bdfc43155269cd5830dcc5c321bf9e12d98daa52e8aaeee8df48d87f732734.scope - libcontainer container 76bdfc43155269cd5830dcc5c321bf9e12d98daa52e8aaeee8df48d87f732734. Dec 12 17:42:01.016089 containerd[1929]: time="2025-12-12T17:42:01.016062624Z" level=info msg="StartContainer for \"76bdfc43155269cd5830dcc5c321bf9e12d98daa52e8aaeee8df48d87f732734\" returns successfully" Dec 12 17:42:01.373478 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 17:42:01.373587 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>.
All Rights Reserved. Dec 12 17:42:01.574724 kubelet[3460]: I1212 17:42:01.574693 3460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db29c3d0-10d3-433a-ab37-84eaf363db85-whisker-ca-bundle\") pod \"db29c3d0-10d3-433a-ab37-84eaf363db85\" (UID: \"db29c3d0-10d3-433a-ab37-84eaf363db85\") " Dec 12 17:42:01.575250 kubelet[3460]: I1212 17:42:01.575112 3460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/db29c3d0-10d3-433a-ab37-84eaf363db85-whisker-backend-key-pair\") pod \"db29c3d0-10d3-433a-ab37-84eaf363db85\" (UID: \"db29c3d0-10d3-433a-ab37-84eaf363db85\") " Dec 12 17:42:01.579971 kubelet[3460]: I1212 17:42:01.579577 3460 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db29c3d0-10d3-433a-ab37-84eaf363db85-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "db29c3d0-10d3-433a-ab37-84eaf363db85" (UID: "db29c3d0-10d3-433a-ab37-84eaf363db85"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:42:01.579971 kubelet[3460]: I1212 17:42:01.579624 3460 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlcwl\" (UniqueName: \"kubernetes.io/projected/db29c3d0-10d3-433a-ab37-84eaf363db85-kube-api-access-tlcwl\") pod \"db29c3d0-10d3-433a-ab37-84eaf363db85\" (UID: \"db29c3d0-10d3-433a-ab37-84eaf363db85\") " Dec 12 17:42:01.583650 systemd[1]: var-lib-kubelet-pods-db29c3d0\x2d10d3\x2d433a\x2dab37\x2d84eaf363db85-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtlcwl.mount: Deactivated successfully. Dec 12 17:42:01.583729 systemd[1]: var-lib-kubelet-pods-db29c3d0\x2d10d3\x2d433a\x2dab37\x2d84eaf363db85-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 17:42:01.584470 kubelet[3460]: I1212 17:42:01.584197 3460 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db29c3d0-10d3-433a-ab37-84eaf363db85-kube-api-access-tlcwl" (OuterVolumeSpecName: "kube-api-access-tlcwl") pod "db29c3d0-10d3-433a-ab37-84eaf363db85" (UID: "db29c3d0-10d3-433a-ab37-84eaf363db85"). InnerVolumeSpecName "kube-api-access-tlcwl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:42:01.584470 kubelet[3460]: I1212 17:42:01.584316 3460 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db29c3d0-10d3-433a-ab37-84eaf363db85-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "db29c3d0-10d3-433a-ab37-84eaf363db85" (UID: "db29c3d0-10d3-433a-ab37-84eaf363db85"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:42:01.681064 kubelet[3460]: I1212 17:42:01.680790 3460 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db29c3d0-10d3-433a-ab37-84eaf363db85-whisker-ca-bundle\") on node \"ci-4459.2.2-a-c1c6b7e9cf\" DevicePath \"\"" Dec 12 17:42:01.681064 kubelet[3460]: I1212 17:42:01.680820 3460 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/db29c3d0-10d3-433a-ab37-84eaf363db85-whisker-backend-key-pair\") on node \"ci-4459.2.2-a-c1c6b7e9cf\" DevicePath \"\"" Dec 12 17:42:01.681064 kubelet[3460]: I1212 17:42:01.680830 3460 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tlcwl\" (UniqueName: \"kubernetes.io/projected/db29c3d0-10d3-433a-ab37-84eaf363db85-kube-api-access-tlcwl\") on node \"ci-4459.2.2-a-c1c6b7e9cf\" DevicePath \"\"" Dec 12 17:42:01.859109 systemd[1]: Removed slice kubepods-besteffort-poddb29c3d0_10d3_433a_ab37_84eaf363db85.slice - libcontainer container kubepods-besteffort-poddb29c3d0_10d3_433a_ab37_84eaf363db85.slice. Dec 12 17:42:02.005869 kubelet[3460]: I1212 17:42:02.005758 3460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d78cr" podStartSLOduration=2.099442051 podStartE2EDuration="17.005743439s" podCreationTimestamp="2025-12-12 17:41:45 +0000 UTC" firstStartedPulling="2025-12-12 17:41:45.949085611 +0000 UTC m=+20.184503242" lastFinishedPulling="2025-12-12 17:42:00.855386999 +0000 UTC m=+35.090804630" observedRunningTime="2025-12-12 17:42:02.003833443 +0000 UTC m=+36.239251082" watchObservedRunningTime="2025-12-12 17:42:02.005743439 +0000 UTC m=+36.241161078" Dec 12 17:42:02.088131 systemd[1]: Created slice kubepods-besteffort-pod99d456ff_41e9_43a8_8405_1dee55d2f1c2.slice - libcontainer container kubepods-besteffort-pod99d456ff_41e9_43a8_8405_1dee55d2f1c2.slice. 
Dec 12 17:42:02.182839 kubelet[3460]: I1212 17:42:02.182808 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw67r\" (UniqueName: \"kubernetes.io/projected/99d456ff-41e9-43a8-8405-1dee55d2f1c2-kube-api-access-vw67r\") pod \"whisker-67ffbcfc86-jjn58\" (UID: \"99d456ff-41e9-43a8-8405-1dee55d2f1c2\") " pod="calico-system/whisker-67ffbcfc86-jjn58" Dec 12 17:42:02.183005 kubelet[3460]: I1212 17:42:02.182991 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99d456ff-41e9-43a8-8405-1dee55d2f1c2-whisker-ca-bundle\") pod \"whisker-67ffbcfc86-jjn58\" (UID: \"99d456ff-41e9-43a8-8405-1dee55d2f1c2\") " pod="calico-system/whisker-67ffbcfc86-jjn58" Dec 12 17:42:02.183159 kubelet[3460]: I1212 17:42:02.183109 3460 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/99d456ff-41e9-43a8-8405-1dee55d2f1c2-whisker-backend-key-pair\") pod \"whisker-67ffbcfc86-jjn58\" (UID: \"99d456ff-41e9-43a8-8405-1dee55d2f1c2\") " pod="calico-system/whisker-67ffbcfc86-jjn58" Dec 12 17:42:02.391920 containerd[1929]: time="2025-12-12T17:42:02.391676792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67ffbcfc86-jjn58,Uid:99d456ff-41e9-43a8-8405-1dee55d2f1c2,Namespace:calico-system,Attempt:0,}" Dec 12 17:42:02.498170 systemd-networkd[1491]: calidd412de773f: Link UP Dec 12 17:42:02.499337 systemd-networkd[1491]: calidd412de773f: Gained carrier Dec 12 17:42:02.517130 containerd[1929]: 2025-12-12 17:42:02.416 [INFO][4580] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:42:02.517130 containerd[1929]: 2025-12-12 17:42:02.440 [INFO][4580] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--c1c6b7e9cf-k8s-whisker--67ffbcfc86--jjn58-eth0 whisker-67ffbcfc86- calico-system 99d456ff-41e9-43a8-8405-1dee55d2f1c2 876 0 2025-12-12 17:42:02 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:67ffbcfc86 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.2-a-c1c6b7e9cf whisker-67ffbcfc86-jjn58 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidd412de773f [] [] <nil>}} ContainerID="9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7" Namespace="calico-system" Pod="whisker-67ffbcfc86-jjn58" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-whisker--67ffbcfc86--jjn58-" Dec 12 17:42:02.517130 containerd[1929]: 2025-12-12 17:42:02.440 [INFO][4580] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7" Namespace="calico-system" Pod="whisker-67ffbcfc86-jjn58" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-whisker--67ffbcfc86--jjn58-eth0" Dec 12 17:42:02.517130 containerd[1929]: 2025-12-12 17:42:02.458 [INFO][4591] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7" HandleID="k8s-pod-network.9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-whisker--67ffbcfc86--jjn58-eth0" Dec 12 17:42:02.517311 containerd[1929]: 2025-12-12 17:42:02.458 [INFO][4591] ipam/ipam_plugin.go 275:
Auto assigning IP ContainerID="9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7" HandleID="k8s-pod-network.9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-whisker--67ffbcfc86--jjn58-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cafe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-c1c6b7e9cf", "pod":"whisker-67ffbcfc86-jjn58", "timestamp":"2025-12-12 17:42:02.458570645 +0000 UTC"}, Hostname:"ci-4459.2.2-a-c1c6b7e9cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:42:02.517311 containerd[1929]: 2025-12-12 17:42:02.458 [INFO][4591] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:42:02.517311 containerd[1929]: 2025-12-12 17:42:02.458 [INFO][4591] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:42:02.517311 containerd[1929]: 2025-12-12 17:42:02.458 [INFO][4591] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-c1c6b7e9cf' Dec 12 17:42:02.517311 containerd[1929]: 2025-12-12 17:42:02.463 [INFO][4591] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:02.517311 containerd[1929]: 2025-12-12 17:42:02.467 [INFO][4591] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:02.517311 containerd[1929]: 2025-12-12 17:42:02.471 [INFO][4591] ipam/ipam.go 511: Trying affinity for 192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:02.517311 containerd[1929]: 2025-12-12 17:42:02.473 [INFO][4591] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:02.517311 containerd[1929]: 2025-12-12 17:42:02.474 [INFO][4591] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:02.517440 containerd[1929]: 2025-12-12 17:42:02.474 [INFO][4591] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:02.517440 containerd[1929]: 2025-12-12 17:42:02.476 [INFO][4591] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7 Dec 12 17:42:02.517440 containerd[1929]: 2025-12-12 17:42:02.480 [INFO][4591] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:02.517440 containerd[1929]: 2025-12-12 17:42:02.489 [INFO][4591] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.193/26] block=192.168.69.192/26 handle="k8s-pod-network.9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:02.517440 containerd[1929]: 2025-12-12 17:42:02.489 [INFO][4591] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.193/26] handle="k8s-pod-network.9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:02.517440 containerd[1929]: 2025-12-12 17:42:02.489 
[INFO][4591] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:42:02.517440 containerd[1929]: 2025-12-12 17:42:02.489 [INFO][4591] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.193/26] IPv6=[] ContainerID="9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7" HandleID="k8s-pod-network.9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-whisker--67ffbcfc86--jjn58-eth0" Dec 12 17:42:02.517531 containerd[1929]: 2025-12-12 17:42:02.492 [INFO][4580] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7" Namespace="calico-system" Pod="whisker-67ffbcfc86-jjn58" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-whisker--67ffbcfc86--jjn58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--c1c6b7e9cf-k8s-whisker--67ffbcfc86--jjn58-eth0", GenerateName:"whisker-67ffbcfc86-", Namespace:"calico-system", SelfLink:"", UID:"99d456ff-41e9-43a8-8405-1dee55d2f1c2", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 42, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67ffbcfc86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-c1c6b7e9cf", ContainerID:"", Pod:"whisker-67ffbcfc86-jjn58", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.69.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidd412de773f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:42:02.517531 containerd[1929]: 2025-12-12 17:42:02.492 [INFO][4580] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.193/32] ContainerID="9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7" Namespace="calico-system" Pod="whisker-67ffbcfc86-jjn58" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-whisker--67ffbcfc86--jjn58-eth0" Dec 12 17:42:02.517578 containerd[1929]: 2025-12-12 17:42:02.492 [INFO][4580] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd412de773f ContainerID="9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7" Namespace="calico-system" Pod="whisker-67ffbcfc86-jjn58" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-whisker--67ffbcfc86--jjn58-eth0" Dec 12 17:42:02.517578 containerd[1929]: 2025-12-12 17:42:02.499 [INFO][4580] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7" Namespace="calico-system" Pod="whisker-67ffbcfc86-jjn58" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-whisker--67ffbcfc86--jjn58-eth0" Dec 12 17:42:02.517609 containerd[1929]: 2025-12-12 17:42:02.499 [INFO][4580] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7" Namespace="calico-system" Pod="whisker-67ffbcfc86-jjn58" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-whisker--67ffbcfc86--jjn58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--c1c6b7e9cf-k8s-whisker--67ffbcfc86--jjn58-eth0", GenerateName:"whisker-67ffbcfc86-", Namespace:"calico-system", SelfLink:"", UID:"99d456ff-41e9-43a8-8405-1dee55d2f1c2", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 42, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67ffbcfc86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-c1c6b7e9cf", ContainerID:"9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7", Pod:"whisker-67ffbcfc86-jjn58", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.69.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidd412de773f", MAC:"4a:f5:21:1b:8d:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:42:02.517641 containerd[1929]: 2025-12-12 17:42:02.514 [INFO][4580] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7" Namespace="calico-system" Pod="whisker-67ffbcfc86-jjn58" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-whisker--67ffbcfc86--jjn58-eth0" Dec 12 17:42:02.563913 containerd[1929]: time="2025-12-12T17:42:02.563849164Z" level=info msg="connecting to shim 9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7" address="unix:///run/containerd/s/b43771140374809b8416b7b46a24ae0350f2919baae641e54e9e8a975e8bd014" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:42:02.585310 systemd[1]: Started cri-containerd-9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7.scope - libcontainer container 9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7.
Dec 12 17:42:02.616409 containerd[1929]: time="2025-12-12T17:42:02.616277954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67ffbcfc86-jjn58,Uid:99d456ff-41e9-43a8-8405-1dee55d2f1c2,Namespace:calico-system,Attempt:0,} returns sandbox id \"9db02f9179176c315c511e2825d5aa9e1c536cb55e9a33d87e656a38af472ed7\"" Dec 12 17:42:02.617851 containerd[1929]: time="2025-12-12T17:42:02.617701931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:42:02.907630 containerd[1929]: time="2025-12-12T17:42:02.907534050Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:03.088337 containerd[1929]: time="2025-12-12T17:42:03.088285772Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:42:03.088631 containerd[1929]: time="2025-12-12T17:42:03.088517114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 17:42:03.092709 kubelet[3460]: E1212 17:42:03.092669 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:42:03.093033 kubelet[3460]: E1212 17:42:03.092720 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:42:03.103218 kubelet[3460]: E1212 17:42:03.103162 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2525d8452e6e40b4b30dca31db45508c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vw67r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67ffbcfc86-jjn58_calico-system(99d456ff-41e9-43a8-8405-1dee55d2f1c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:03.105507 containerd[1929]: time="2025-12-12T17:42:03.105482739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:42:03.458027 containerd[1929]: time="2025-12-12T17:42:03.457852219Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:03.461026 containerd[1929]: time="2025-12-12T17:42:03.460937298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:42:03.461026 containerd[1929]: time="2025-12-12T17:42:03.460970403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 17:42:03.461269 kubelet[3460]: E1212 17:42:03.461226 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:42:03.461330 kubelet[3460]: E1212 17:42:03.461281 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:42:03.461438 kubelet[3460]: E1212 17:42:03.461397 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vw67r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67ffbcfc86-jjn58_calico-system(99d456ff-41e9-43a8-8405-1dee55d2f1c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:03.462640 kubelet[3460]: E1212 17:42:03.462599 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67ffbcfc86-jjn58" podUID="99d456ff-41e9-43a8-8405-1dee55d2f1c2" Dec 12 17:42:03.561359 systemd-networkd[1491]: calidd412de773f: Gained IPv6LL Dec 12 17:42:03.855257 kubelet[3460]: I1212 17:42:03.855145 3460 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db29c3d0-10d3-433a-ab37-84eaf363db85" 
path="/var/lib/kubelet/pods/db29c3d0-10d3-433a-ab37-84eaf363db85/volumes" Dec 12 17:42:03.990295 kubelet[3460]: E1212 17:42:03.990254 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67ffbcfc86-jjn58" podUID="99d456ff-41e9-43a8-8405-1dee55d2f1c2" Dec 12 17:42:06.235242 kubelet[3460]: I1212 17:42:06.234839 3460 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:42:06.852891 containerd[1929]: time="2025-12-12T17:42:06.852813456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f5b48977b-ckrpk,Uid:174ab840-5c2d-412d-ae9d-b5c5ec64e11d,Namespace:calico-system,Attempt:0,}" Dec 12 17:42:06.853577 containerd[1929]: time="2025-12-12T17:42:06.853464606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c586fd54-zk6dm,Uid:ca5ba48c-4a22-4538-aeda-0de712e65e58,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:42:07.014286 systemd-networkd[1491]: cali7901c23800f: Link UP Dec 12 17:42:07.014443 systemd-networkd[1491]: cali7901c23800f: Gained carrier Dec 12 17:42:07.029789 containerd[1929]: 2025-12-12 17:42:06.907 [INFO][4835] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:42:07.029789 containerd[1929]: 2025-12-12 17:42:06.923 [INFO][4835] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--kube--controllers--f5b48977b--ckrpk-eth0 calico-kube-controllers-f5b48977b- calico-system 174ab840-5c2d-412d-ae9d-b5c5ec64e11d 807 0 2025-12-12 17:41:45 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f5b48977b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.2-a-c1c6b7e9cf calico-kube-controllers-f5b48977b-ckrpk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7901c23800f [] [] }} ContainerID="f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449" Namespace="calico-system" Pod="calico-kube-controllers-f5b48977b-ckrpk" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--kube--controllers--f5b48977b--ckrpk-" Dec 12 17:42:07.029789 containerd[1929]: 2025-12-12 17:42:06.923 [INFO][4835] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449" Namespace="calico-system" Pod="calico-kube-controllers-f5b48977b-ckrpk" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--kube--controllers--f5b48977b--ckrpk-eth0" Dec 12 
17:42:07.029789 containerd[1929]: 2025-12-12 17:42:06.965 [INFO][4866] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449" HandleID="k8s-pod-network.f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--kube--controllers--f5b48977b--ckrpk-eth0" Dec 12 17:42:07.030111 containerd[1929]: 2025-12-12 17:42:06.965 [INFO][4866] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449" HandleID="k8s-pod-network.f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--kube--controllers--f5b48977b--ckrpk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b6d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-c1c6b7e9cf", "pod":"calico-kube-controllers-f5b48977b-ckrpk", "timestamp":"2025-12-12 17:42:06.965197614 +0000 UTC"}, Hostname:"ci-4459.2.2-a-c1c6b7e9cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:42:07.030111 containerd[1929]: 2025-12-12 17:42:06.965 [INFO][4866] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:42:07.030111 containerd[1929]: 2025-12-12 17:42:06.965 [INFO][4866] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:42:07.030111 containerd[1929]: 2025-12-12 17:42:06.965 [INFO][4866] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-c1c6b7e9cf' Dec 12 17:42:07.030111 containerd[1929]: 2025-12-12 17:42:06.973 [INFO][4866] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.030111 containerd[1929]: 2025-12-12 17:42:06.977 [INFO][4866] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.030111 containerd[1929]: 2025-12-12 17:42:06.983 [INFO][4866] ipam/ipam.go 511: Trying affinity for 192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.030111 containerd[1929]: 2025-12-12 17:42:06.986 [INFO][4866] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.030111 containerd[1929]: 2025-12-12 17:42:06.987 [INFO][4866] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.030517 containerd[1929]: 2025-12-12 17:42:06.987 [INFO][4866] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.030517 containerd[1929]: 2025-12-12 17:42:06.989 [INFO][4866] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449 Dec 12 17:42:07.030517 containerd[1929]: 2025-12-12 17:42:06.994 [INFO][4866] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.030517 containerd[1929]: 2025-12-12 17:42:07.003 [INFO][4866] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.194/26] block=192.168.69.192/26 handle="k8s-pod-network.f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.030517 containerd[1929]: 2025-12-12 17:42:07.003 [INFO][4866] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.194/26] handle="k8s-pod-network.f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.030517 containerd[1929]: 2025-12-12 17:42:07.003 [INFO][4866] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:42:07.030517 containerd[1929]: 2025-12-12 17:42:07.003 [INFO][4866] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.194/26] IPv6=[] ContainerID="f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449" HandleID="k8s-pod-network.f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--kube--controllers--f5b48977b--ckrpk-eth0" Dec 12 17:42:07.030621 containerd[1929]: 2025-12-12 17:42:07.006 [INFO][4835] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449" Namespace="calico-system" Pod="calico-kube-controllers-f5b48977b-ckrpk" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--kube--controllers--f5b48977b--ckrpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--kube--controllers--f5b48977b--ckrpk-eth0", GenerateName:"calico-kube-controllers-f5b48977b-", Namespace:"calico-system", SelfLink:"", UID:"174ab840-5c2d-412d-ae9d-b5c5ec64e11d", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f5b48977b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-c1c6b7e9cf", ContainerID:"", Pod:"calico-kube-controllers-f5b48977b-ckrpk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7901c23800f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:42:07.030664 containerd[1929]: 2025-12-12 17:42:07.006 [INFO][4835] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.194/32] ContainerID="f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449" Namespace="calico-system" Pod="calico-kube-controllers-f5b48977b-ckrpk" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--kube--controllers--f5b48977b--ckrpk-eth0" Dec 12 17:42:07.030664 containerd[1929]: 2025-12-12 17:42:07.006 [INFO][4835] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7901c23800f
ContainerID="f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449" Namespace="calico-system" Pod="calico-kube-controllers-f5b48977b-ckrpk" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--kube--controllers--f5b48977b--ckrpk-eth0" Dec 12 17:42:07.030664 containerd[1929]: 2025-12-12 17:42:07.015 [INFO][4835] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449" Namespace="calico-system" Pod="calico-kube-controllers-f5b48977b-ckrpk" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--kube--controllers--f5b48977b--ckrpk-eth0" Dec 12 17:42:07.030707 containerd[1929]: 2025-12-12 17:42:07.016 [INFO][4835] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449" Namespace="calico-system" Pod="calico-kube-controllers-f5b48977b-ckrpk" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--kube--controllers--f5b48977b--ckrpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--kube--controllers--f5b48977b--ckrpk-eth0", GenerateName:"calico-kube-controllers-f5b48977b-", Namespace:"calico-system", SelfLink:"", UID:"174ab840-5c2d-412d-ae9d-b5c5ec64e11d", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f5b48977b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-c1c6b7e9cf", ContainerID:"f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449", Pod:"calico-kube-controllers-f5b48977b-ckrpk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.69.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7901c23800f", MAC:"16:21:74:d4:c2:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:42:07.030739 containerd[1929]: 2025-12-12 17:42:07.027 [INFO][4835] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449" Namespace="calico-system" Pod="calico-kube-controllers-f5b48977b-ckrpk" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--kube--controllers--f5b48977b--ckrpk-eth0" Dec 12 17:42:07.097971 containerd[1929]: time="2025-12-12T17:42:07.097933919Z" level=info msg="connecting to shim f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449" address="unix:///run/containerd/s/6f737a819de3a07c1712bc3dc1c3337b973cc9942826bba0204e529ff45fffae" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:42:07.129377 systemd[1]: Started cri-containerd-f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449.scope - libcontainer container
f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449. Dec 12 17:42:07.147586 systemd-networkd[1491]: cali2df8a3ff54b: Link UP Dec 12 17:42:07.149742 systemd-networkd[1491]: cali2df8a3ff54b: Gained carrier Dec 12 17:42:07.172731 containerd[1929]: 2025-12-12 17:42:06.915 [INFO][4848] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:42:07.172731 containerd[1929]: 2025-12-12 17:42:06.932 [INFO][4848] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--zk6dm-eth0 calico-apiserver-5c586fd54- calico-apiserver ca5ba48c-4a22-4538-aeda-0de712e65e58 811 0 2025-12-12 17:41:41 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c586fd54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-a-c1c6b7e9cf calico-apiserver-5c586fd54-zk6dm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2df8a3ff54b [] [] <nil>}} ContainerID="2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0" Namespace="calico-apiserver" Pod="calico-apiserver-5c586fd54-zk6dm" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--zk6dm-" Dec 12 17:42:07.172731 containerd[1929]: 2025-12-12 17:42:06.932 [INFO][4848] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0" Namespace="calico-apiserver" Pod="calico-apiserver-5c586fd54-zk6dm" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--zk6dm-eth0" Dec 12 17:42:07.172731 containerd[1929]: 2025-12-12 17:42:06.973 [INFO][4871] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0" HandleID="k8s-pod-network.2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--zk6dm-eth0" Dec 12 17:42:07.173424 containerd[1929]: 2025-12-12 17:42:06.973 [INFO][4871] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0" HandleID="k8s-pod-network.2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--zk6dm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003213b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-a-c1c6b7e9cf", "pod":"calico-apiserver-5c586fd54-zk6dm", "timestamp":"2025-12-12 17:42:06.97352219 +0000 UTC"}, Hostname:"ci-4459.2.2-a-c1c6b7e9cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:42:07.173424 containerd[1929]: 2025-12-12 17:42:06.973 [INFO][4871] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:42:07.173424 containerd[1929]: 2025-12-12 17:42:07.003 [INFO][4871] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 12 17:42:07.173424 containerd[1929]: 2025-12-12 17:42:07.003 [INFO][4871] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-c1c6b7e9cf' Dec 12 17:42:07.173424 containerd[1929]: 2025-12-12 17:42:07.075 [INFO][4871] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.173424 containerd[1929]: 2025-12-12 17:42:07.088 [INFO][4871] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.173424 containerd[1929]: 2025-12-12 17:42:07.100 [INFO][4871] ipam/ipam.go 511: Trying affinity for 192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.173424 containerd[1929]: 2025-12-12 17:42:07.105 [INFO][4871] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.173424 containerd[1929]: 2025-12-12 17:42:07.108 [INFO][4871] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.173577 containerd[1929]: 2025-12-12 17:42:07.109 [INFO][4871] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.173577 containerd[1929]: 2025-12-12 17:42:07.112 [INFO][4871] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0 Dec 12 17:42:07.173577 containerd[1929]: 2025-12-12 17:42:07.127 [INFO][4871] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.173577 containerd[1929]: 2025-12-12 17:42:07.134 [INFO][4871] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.195/26] block=192.168.69.192/26 handle="k8s-pod-network.2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.173577 containerd[1929]: 2025-12-12 17:42:07.134 [INFO][4871] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.195/26] handle="k8s-pod-network.2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.173577 containerd[1929]: 2025-12-12 17:42:07.135 [INFO][4871] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:42:07.173577 containerd[1929]: 2025-12-12 17:42:07.135 [INFO][4871] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.195/26] IPv6=[] ContainerID="2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0" HandleID="k8s-pod-network.2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--zk6dm-eth0" Dec 12 17:42:07.173702 containerd[1929]: 2025-12-12 17:42:07.139 [INFO][4848] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0" Namespace="calico-apiserver" Pod="calico-apiserver-5c586fd54-zk6dm" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--zk6dm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--zk6dm-eth0", GenerateName:"calico-apiserver-5c586fd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca5ba48c-4a22-4538-aeda-0de712e65e58", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c586fd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-c1c6b7e9cf", ContainerID:"", Pod:"calico-apiserver-5c586fd54-zk6dm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2df8a3ff54b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:42:07.173739 containerd[1929]: 2025-12-12 17:42:07.139 [INFO][4848] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.195/32] ContainerID="2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0" Namespace="calico-apiserver" Pod="calico-apiserver-5c586fd54-zk6dm" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--zk6dm-eth0" Dec 12 17:42:07.173739 containerd[1929]: 2025-12-12 17:42:07.139 [INFO][4848] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2df8a3ff54b ContainerID="2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0" Namespace="calico-apiserver" Pod="calico-apiserver-5c586fd54-zk6dm" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--zk6dm-eth0" Dec 12 17:42:07.173739 containerd[1929]: 2025-12-12 17:42:07.150 [INFO][4848] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0" Namespace="calico-apiserver" Pod="calico-apiserver-5c586fd54-zk6dm" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--zk6dm-eth0" Dec 12 17:42:07.173779 containerd[1929]: 2025-12-12 17:42:07.151 [INFO][4848]
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0" Namespace="calico-apiserver" Pod="calico-apiserver-5c586fd54-zk6dm" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--zk6dm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--zk6dm-eth0", GenerateName:"calico-apiserver-5c586fd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca5ba48c-4a22-4538-aeda-0de712e65e58", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c586fd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-c1c6b7e9cf", ContainerID:"2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0", Pod:"calico-apiserver-5c586fd54-zk6dm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2df8a3ff54b", MAC:"6e:41:9c:7e:03:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:42:07.173813 containerd[1929]: 2025-12-12 17:42:07.164 [INFO][4848] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0" Namespace="calico-apiserver" Pod="calico-apiserver-5c586fd54-zk6dm" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--zk6dm-eth0" Dec 12 17:42:07.181600 containerd[1929]: time="2025-12-12T17:42:07.181571073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f5b48977b-ckrpk,Uid:174ab840-5c2d-412d-ae9d-b5c5ec64e11d,Namespace:calico-system,Attempt:0,} returns sandbox id \"f9634e001ffaa4643eb07caa70de9a3b4a6b94ef1fb838b6e56933091872a449\"" Dec 12 17:42:07.190939 containerd[1929]: time="2025-12-12T17:42:07.190737828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:42:07.243264 containerd[1929]: time="2025-12-12T17:42:07.243232701Z" level=info msg="connecting to shim 2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0" address="unix:///run/containerd/s/06c9fa11cbceebc6c6e07f56e32e9131483d32e49764f8feb227ad5cf86a734c" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:42:07.269318 systemd[1]: Started cri-containerd-2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0.scope - libcontainer container 2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0.
Dec 12 17:42:07.306953 containerd[1929]: time="2025-12-12T17:42:07.306921854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c586fd54-zk6dm,Uid:ca5ba48c-4a22-4538-aeda-0de712e65e58,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2b003f410c9b5dd07e9fbe495c492e26fe55f715eeb69662655751cd9c550eb0\"" Dec 12 17:42:07.475535 containerd[1929]: time="2025-12-12T17:42:07.475400470Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:07.616689 containerd[1929]: time="2025-12-12T17:42:07.616644210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:42:07.616881 containerd[1929]: time="2025-12-12T17:42:07.616691083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 17:42:07.617016 kubelet[3460]: E1212 17:42:07.616978 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:42:07.617840 kubelet[3460]: E1212 17:42:07.617024 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:42:07.617840 kubelet[3460]: E1212 17:42:07.617308 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ltq49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f5b48977b-ckrpk_calico-system(174ab840-5c2d-412d-ae9d-b5c5ec64e11d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:07.617934 containerd[1929]: time="2025-12-12T17:42:07.617283057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:42:07.619180 kubelet[3460]: E1212 17:42:07.619148 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5b48977b-ckrpk" podUID="174ab840-5c2d-412d-ae9d-b5c5ec64e11d" Dec 12 17:42:07.672694 systemd-networkd[1491]: vxlan.calico: Link UP Dec 12 17:42:07.672700 systemd-networkd[1491]: vxlan.calico: Gained carrier Dec 12 17:42:07.854771 containerd[1929]: time="2025-12-12T17:42:07.854376470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c586fd54-8vxtr,Uid:a4012a8d-5664-448e-9dfd-51493bbdec98,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:42:07.946547 systemd-networkd[1491]: cali2072de51165: Link UP Dec 12 17:42:07.947559 systemd-networkd[1491]: cali2072de51165: Gained carrier Dec 12 17:42:07.960244 containerd[1929]: 2025-12-12 17:42:07.888 [INFO][5101] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--8vxtr-eth0 calico-apiserver-5c586fd54- calico-apiserver a4012a8d-5664-448e-9dfd-51493bbdec98 812 0 2025-12-12 17:41:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c586fd54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-a-c1c6b7e9cf calico-apiserver-5c586fd54-8vxtr eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali2072de51165 [] [] }} ContainerID="e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823" Namespace="calico-apiserver" Pod="calico-apiserver-5c586fd54-8vxtr" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--8vxtr-" Dec 12 17:42:07.960244 containerd[1929]: 2025-12-12 17:42:07.888 [INFO][5101] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823" Namespace="calico-apiserver" Pod="calico-apiserver-5c586fd54-8vxtr" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--8vxtr-eth0" Dec 12 17:42:07.960244 containerd[1929]: 2025-12-12 17:42:07.905 [INFO][5117] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823" HandleID="k8s-pod-network.e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--8vxtr-eth0" Dec 12 17:42:07.960673 containerd[1929]: 2025-12-12 17:42:07.905 [INFO][5117] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823" HandleID="k8s-pod-network.e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--8vxtr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-a-c1c6b7e9cf", "pod":"calico-apiserver-5c586fd54-8vxtr", "timestamp":"2025-12-12 17:42:07.905763679 +0000 UTC"}, Hostname:"ci-4459.2.2-a-c1c6b7e9cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:42:07.960673 containerd[1929]: 2025-12-12 17:42:07.905 [INFO][5117] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:42:07.960673 containerd[1929]: 2025-12-12 17:42:07.906 [INFO][5117] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:42:07.960673 containerd[1929]: 2025-12-12 17:42:07.906 [INFO][5117] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-c1c6b7e9cf' Dec 12 17:42:07.960673 containerd[1929]: 2025-12-12 17:42:07.911 [INFO][5117] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.960673 containerd[1929]: 2025-12-12 17:42:07.916 [INFO][5117] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.960673 containerd[1929]: 2025-12-12 17:42:07.922 [INFO][5117] ipam/ipam.go 511: Trying affinity for 192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.960673 containerd[1929]: 2025-12-12 17:42:07.925 [INFO][5117] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.960673 containerd[1929]: 2025-12-12 17:42:07.926 [INFO][5117] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.960832 containerd[1929]: 2025-12-12 17:42:07.927 [INFO][5117] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.960832 containerd[1929]: 2025-12-12 17:42:07.928 [INFO][5117] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823 Dec 12 17:42:07.960832 containerd[1929]: 2025-12-12 17:42:07.932 [INFO][5117] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.960832 containerd[1929]: 2025-12-12 17:42:07.941 [INFO][5117] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.196/26] block=192.168.69.192/26 handle="k8s-pod-network.e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.960832 containerd[1929]: 2025-12-12 17:42:07.941 [INFO][5117] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.196/26] handle="k8s-pod-network.e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:07.960832 containerd[1929]: 2025-12-12 17:42:07.941 [INFO][5117] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:42:07.960832 containerd[1929]: 2025-12-12 17:42:07.941 [INFO][5117] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.196/26] IPv6=[] ContainerID="e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823" HandleID="k8s-pod-network.e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--8vxtr-eth0" Dec 12 17:42:07.961127 containerd[1929]: 2025-12-12 17:42:07.944 [INFO][5101] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823" Namespace="calico-apiserver" Pod="calico-apiserver-5c586fd54-8vxtr" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--8vxtr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--8vxtr-eth0", GenerateName:"calico-apiserver-5c586fd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4012a8d-5664-448e-9dfd-51493bbdec98", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c586fd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-c1c6b7e9cf", ContainerID:"", Pod:"calico-apiserver-5c586fd54-8vxtr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2072de51165", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:42:07.961372 containerd[1929]: 2025-12-12 17:42:07.944 [INFO][5101] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.196/32] ContainerID="e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823" Namespace="calico-apiserver" Pod="calico-apiserver-5c586fd54-8vxtr" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--8vxtr-eth0" Dec 12 17:42:07.961372 containerd[1929]: 2025-12-12 17:42:07.944 [INFO][5101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2072de51165 ContainerID="e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823" Namespace="calico-apiserver" Pod="calico-apiserver-5c586fd54-8vxtr" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--8vxtr-eth0" Dec 12 17:42:07.961372 containerd[1929]: 2025-12-12 17:42:07.948 [INFO][5101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823" Namespace="calico-apiserver" Pod="calico-apiserver-5c586fd54-8vxtr" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--8vxtr-eth0" Dec 12 17:42:07.961446 containerd[1929]: 2025-12-12 17:42:07.948 [INFO][5101] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823" Namespace="calico-apiserver" Pod="calico-apiserver-5c586fd54-8vxtr" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--8vxtr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--8vxtr-eth0", GenerateName:"calico-apiserver-5c586fd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4012a8d-5664-448e-9dfd-51493bbdec98", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c586fd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-c1c6b7e9cf", ContainerID:"e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823", Pod:"calico-apiserver-5c586fd54-8vxtr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.69.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2072de51165", MAC:"fa:fd:5c:bc:c8:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:42:07.961491 containerd[1929]: 2025-12-12 17:42:07.957 [INFO][5101] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823" Namespace="calico-apiserver" Pod="calico-apiserver-5c586fd54-8vxtr" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-calico--apiserver--5c586fd54--8vxtr-eth0" Dec 12 17:42:07.998161 kubelet[3460]: E1212 17:42:07.998128 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5b48977b-ckrpk" podUID="174ab840-5c2d-412d-ae9d-b5c5ec64e11d" Dec 12 17:42:08.114313 containerd[1929]: time="2025-12-12T17:42:08.113727336Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:08.169361 systemd-networkd[1491]: cali7901c23800f: Gained IPv6LL Dec 12 17:42:08.315499 containerd[1929]: time="2025-12-12T17:42:08.315385997Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:42:08.315499 containerd[1929]: time="2025-12-12T17:42:08.315477591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:42:08.315776 kubelet[3460]: E1212 17:42:08.315746 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:42:08.315887 kubelet[3460]: E1212 17:42:08.315873 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:42:08.316114 kubelet[3460]: E1212 17:42:08.316054 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wgzcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c586fd54-zk6dm_calico-apiserver(ca5ba48c-4a22-4538-aeda-0de712e65e58): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:08.318029 kubelet[3460]: E1212 17:42:08.317998 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-zk6dm" podUID="ca5ba48c-4a22-4538-aeda-0de712e65e58" Dec 12 17:42:08.450040 containerd[1929]: time="2025-12-12T17:42:08.449732617Z" level=info msg="connecting to shim e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823" address="unix:///run/containerd/s/fe45d8f299077d40d2d15d72ce13b0e5f51ac6c7539ff895588b06150452d7f0" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:42:08.474294 systemd[1]: Started cri-containerd-e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823.scope - libcontainer container e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823. Dec 12 17:42:08.489271 systemd-networkd[1491]: cali2df8a3ff54b: Gained IPv6LL Dec 12 17:42:08.503859 containerd[1929]: time="2025-12-12T17:42:08.503821286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c586fd54-8vxtr,Uid:a4012a8d-5664-448e-9dfd-51493bbdec98,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e06429d5f3b4fca0beb224d63a5dafa2b68f1710ed90a64695dbfb76efb2a823\"" Dec 12 17:42:08.505608 containerd[1929]: time="2025-12-12T17:42:08.505540980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:42:08.748633 containerd[1929]: time="2025-12-12T17:42:08.748537588Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:08.752471 containerd[1929]: time="2025-12-12T17:42:08.752381625Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:42:08.752471 containerd[1929]: time="2025-12-12T17:42:08.752440890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:42:08.752606 kubelet[3460]: E1212 17:42:08.752569 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:42:08.753165 kubelet[3460]: E1212 17:42:08.752613 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:42:08.753165 kubelet[3460]: E1212 17:42:08.752723 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvllc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c586fd54-8vxtr_calico-apiserver(a4012a8d-5664-448e-9dfd-51493bbdec98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:08.754119 kubelet[3460]: E1212 17:42:08.754086 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-8vxtr" podUID="a4012a8d-5664-448e-9dfd-51493bbdec98" Dec 12 17:42:08.853387 containerd[1929]: time="2025-12-12T17:42:08.853272585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nvlch,Uid:98333ef4-de58-4330-a351-f7e736ae9923,Namespace:kube-system,Attempt:0,}" Dec 12 17:42:08.853871 containerd[1929]: time="2025-12-12T17:42:08.853756948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-596m2,Uid:b5eb44f5-3e6b-434c-97bb-c1f43d5484ca,Namespace:kube-system,Attempt:0,}" Dec 12 17:42:08.985681 systemd-networkd[1491]: caliedbde4548fe: Link UP Dec 12 17:42:08.985997 systemd-networkd[1491]: caliedbde4548fe: Gained carrier Dec 12 17:42:09.005318 kubelet[3460]: E1212 17:42:09.005135 3460 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-zk6dm" podUID="ca5ba48c-4a22-4538-aeda-0de712e65e58" Dec 12 17:42:09.007300 kubelet[3460]: E1212 17:42:09.006245 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-8vxtr" podUID="a4012a8d-5664-448e-9dfd-51493bbdec98" Dec 12 17:42:09.007489 containerd[1929]: 2025-12-12 17:42:08.906 [INFO][5184] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--nvlch-eth0 coredns-674b8bbfcf- kube-system 98333ef4-de58-4330-a351-f7e736ae9923 809 0 2025-12-12 17:41:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-a-c1c6b7e9cf coredns-674b8bbfcf-nvlch eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliedbde4548fe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e" Namespace="kube-system" Pod="coredns-674b8bbfcf-nvlch" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--nvlch-" Dec 12 17:42:09.007489 containerd[1929]: 2025-12-12 17:42:08.909 [INFO][5184] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e" Namespace="kube-system" Pod="coredns-674b8bbfcf-nvlch" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--nvlch-eth0" Dec 12 17:42:09.007489 containerd[1929]: 2025-12-12 17:42:08.944 [INFO][5207] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e" HandleID="k8s-pod-network.9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--nvlch-eth0" Dec 12 17:42:09.008097 containerd[1929]: 2025-12-12 17:42:08.944 [INFO][5207] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e" HandleID="k8s-pod-network.9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--nvlch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-a-c1c6b7e9cf", "pod":"coredns-674b8bbfcf-nvlch", "timestamp":"2025-12-12 17:42:08.94416494 +0000 UTC"}, Hostname:"ci-4459.2.2-a-c1c6b7e9cf", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:42:09.008097 containerd[1929]: 2025-12-12 17:42:08.944 [INFO][5207] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:42:09.008097 containerd[1929]: 2025-12-12 17:42:08.944 [INFO][5207] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:42:09.008097 containerd[1929]: 2025-12-12 17:42:08.944 [INFO][5207] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-c1c6b7e9cf' Dec 12 17:42:09.008097 containerd[1929]: 2025-12-12 17:42:08.950 [INFO][5207] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.008097 containerd[1929]: 2025-12-12 17:42:08.955 [INFO][5207] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.008097 containerd[1929]: 2025-12-12 17:42:08.959 [INFO][5207] ipam/ipam.go 511: Trying affinity for 192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.008097 containerd[1929]: 2025-12-12 17:42:08.960 [INFO][5207] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.008097 containerd[1929]: 2025-12-12 17:42:08.962 [INFO][5207] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.008303 containerd[1929]: 2025-12-12 17:42:08.962 [INFO][5207] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.008303 containerd[1929]: 2025-12-12 17:42:08.963 [INFO][5207] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e Dec 12 17:42:09.008303 containerd[1929]: 2025-12-12 17:42:08.970 [INFO][5207] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.008303 containerd[1929]: 2025-12-12 17:42:08.977 [INFO][5207] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.197/26] block=192.168.69.192/26 handle="k8s-pod-network.9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.008303 containerd[1929]: 2025-12-12 17:42:08.977 [INFO][5207] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.197/26] handle="k8s-pod-network.9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.008303 containerd[1929]: 2025-12-12 17:42:08.978 [INFO][5207] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:42:09.008303 containerd[1929]: 2025-12-12 17:42:08.978 [INFO][5207] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.197/26] IPv6=[] ContainerID="9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e" HandleID="k8s-pod-network.9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--nvlch-eth0" Dec 12 17:42:09.008400 containerd[1929]: 2025-12-12 17:42:08.980 [INFO][5184] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e" Namespace="kube-system" Pod="coredns-674b8bbfcf-nvlch" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--nvlch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--nvlch-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"98333ef4-de58-4330-a351-f7e736ae9923", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-c1c6b7e9cf", ContainerID:"", Pod:"coredns-674b8bbfcf-nvlch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliedbde4548fe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:42:09.008400 containerd[1929]: 2025-12-12 17:42:08.980 [INFO][5184] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.197/32] ContainerID="9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e" Namespace="kube-system" Pod="coredns-674b8bbfcf-nvlch" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--nvlch-eth0" Dec 12 17:42:09.008400 containerd[1929]: 2025-12-12 17:42:08.980 [INFO][5184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliedbde4548fe ContainerID="9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e" Namespace="kube-system" Pod="coredns-674b8bbfcf-nvlch" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--nvlch-eth0" Dec 12 17:42:09.008400 containerd[1929]: 2025-12-12 17:42:08.987 [INFO][5184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-nvlch" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--nvlch-eth0" Dec 12 17:42:09.008400 containerd[1929]: 2025-12-12 17:42:08.987 [INFO][5184] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e" Namespace="kube-system" Pod="coredns-674b8bbfcf-nvlch" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--nvlch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--nvlch-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"98333ef4-de58-4330-a351-f7e736ae9923", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-c1c6b7e9cf", ContainerID:"9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e", Pod:"coredns-674b8bbfcf-nvlch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliedbde4548fe", MAC:"4e:a3:15:75:b3:f5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:42:09.008400 containerd[1929]: 2025-12-12 17:42:09.002 [INFO][5184] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e" Namespace="kube-system" Pod="coredns-674b8bbfcf-nvlch" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--nvlch-eth0" Dec 12 17:42:09.009075 kubelet[3460]: E1212 17:42:09.008599 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5b48977b-ckrpk" podUID="174ab840-5c2d-412d-ae9d-b5c5ec64e11d" Dec 12 17:42:09.121347 systemd-networkd[1491]: califb1b377731a: Link UP Dec 12 17:42:09.122000 systemd-networkd[1491]: califb1b377731a: Gained 
carrier Dec 12 17:42:09.140055 containerd[1929]: 2025-12-12 17:42:08.912 [INFO][5194] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--596m2-eth0 coredns-674b8bbfcf- kube-system b5eb44f5-3e6b-434c-97bb-c1f43d5484ca 806 0 2025-12-12 17:41:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-a-c1c6b7e9cf coredns-674b8bbfcf-596m2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califb1b377731a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997" Namespace="kube-system" Pod="coredns-674b8bbfcf-596m2" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--596m2-" Dec 12 17:42:09.140055 containerd[1929]: 2025-12-12 17:42:08.912 [INFO][5194] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997" Namespace="kube-system" Pod="coredns-674b8bbfcf-596m2" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--596m2-eth0" Dec 12 17:42:09.140055 containerd[1929]: 2025-12-12 17:42:08.946 [INFO][5209] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997" HandleID="k8s-pod-network.b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--596m2-eth0" Dec 12 17:42:09.140055 containerd[1929]: 2025-12-12 17:42:08.947 [INFO][5209] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997" HandleID="k8s-pod-network.b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--596m2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-a-c1c6b7e9cf", "pod":"coredns-674b8bbfcf-596m2", "timestamp":"2025-12-12 17:42:08.946956218 +0000 UTC"}, Hostname:"ci-4459.2.2-a-c1c6b7e9cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:42:09.140055 containerd[1929]: 2025-12-12 17:42:08.947 [INFO][5209] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:42:09.140055 containerd[1929]: 2025-12-12 17:42:08.978 [INFO][5209] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:42:09.140055 containerd[1929]: 2025-12-12 17:42:08.978 [INFO][5209] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-c1c6b7e9cf' Dec 12 17:42:09.140055 containerd[1929]: 2025-12-12 17:42:09.051 [INFO][5209] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.140055 containerd[1929]: 2025-12-12 17:42:09.068 [INFO][5209] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.140055 containerd[1929]: 2025-12-12 17:42:09.078 [INFO][5209] ipam/ipam.go 511: Trying affinity for 192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.140055 containerd[1929]: 2025-12-12 17:42:09.086 [INFO][5209] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.140055 containerd[1929]: 2025-12-12 17:42:09.093 [INFO][5209] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.140055 containerd[1929]: 2025-12-12 17:42:09.093 [INFO][5209] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.140055 containerd[1929]: 2025-12-12 17:42:09.094 [INFO][5209] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997 Dec 12 17:42:09.140055 containerd[1929]: 2025-12-12 17:42:09.101 [INFO][5209] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.140055 containerd[1929]: 2025-12-12 17:42:09.114 [INFO][5209] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.198/26] block=192.168.69.192/26 handle="k8s-pod-network.b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.140055 containerd[1929]: 2025-12-12 17:42:09.114 [INFO][5209] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.198/26] handle="k8s-pod-network.b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.140055 containerd[1929]: 2025-12-12 17:42:09.114 [INFO][5209] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:42:09.140055 containerd[1929]: 2025-12-12 17:42:09.114 [INFO][5209] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.198/26] IPv6=[] ContainerID="b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997" HandleID="k8s-pod-network.b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--596m2-eth0" Dec 12 17:42:09.140446 containerd[1929]: 2025-12-12 17:42:09.117 [INFO][5194] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997" Namespace="kube-system" Pod="coredns-674b8bbfcf-596m2" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--596m2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--596m2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b5eb44f5-3e6b-434c-97bb-c1f43d5484ca", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-c1c6b7e9cf", ContainerID:"", Pod:"coredns-674b8bbfcf-596m2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb1b377731a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:42:09.140446 containerd[1929]: 2025-12-12 17:42:09.117 [INFO][5194] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.198/32] ContainerID="b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997" Namespace="kube-system" Pod="coredns-674b8bbfcf-596m2" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--596m2-eth0" Dec 12 17:42:09.140446 containerd[1929]: 2025-12-12 17:42:09.117 [INFO][5194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb1b377731a ContainerID="b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997" Namespace="kube-system" Pod="coredns-674b8bbfcf-596m2" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--596m2-eth0" Dec 12 17:42:09.140446 containerd[1929]: 2025-12-12 17:42:09.123 [INFO][5194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-596m2" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--596m2-eth0" Dec 12 17:42:09.140446 containerd[1929]: 2025-12-12 17:42:09.123 [INFO][5194] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997" Namespace="kube-system" Pod="coredns-674b8bbfcf-596m2" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--596m2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--596m2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b5eb44f5-3e6b-434c-97bb-c1f43d5484ca", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-c1c6b7e9cf", ContainerID:"b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997", Pod:"coredns-674b8bbfcf-596m2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.69.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb1b377731a", MAC:"16:3b:1f:d1:ff:e6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:42:09.140446 containerd[1929]: 2025-12-12 17:42:09.137 [INFO][5194] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997" Namespace="kube-system" Pod="coredns-674b8bbfcf-596m2" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-coredns--674b8bbfcf--596m2-eth0" Dec 12 17:42:09.314654 containerd[1929]: time="2025-12-12T17:42:09.314562886Z" level=info msg="connecting to shim b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997" address="unix:///run/containerd/s/c9727852668e8d204c880bfbd6420f68ddb55433785548a18ce0c3309d9e1212" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:42:09.315920 containerd[1929]: time="2025-12-12T17:42:09.315888956Z" level=info msg="connecting to shim 9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e" address="unix:///run/containerd/s/44f1bfcfd61576487cd1d7f5f6f0ed8bbbd5bc37d2c32a97e681c0461facecb9" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:42:09.343389 systemd[1]: Started cri-containerd-9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e.scope - libcontainer container 
9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e. Dec 12 17:42:09.350634 systemd[1]: Started cri-containerd-b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997.scope - libcontainer container b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997. Dec 12 17:42:09.385349 systemd-networkd[1491]: cali2072de51165: Gained IPv6LL Dec 12 17:42:09.398808 containerd[1929]: time="2025-12-12T17:42:09.398724988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-596m2,Uid:b5eb44f5-3e6b-434c-97bb-c1f43d5484ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997\"" Dec 12 17:42:09.402740 containerd[1929]: time="2025-12-12T17:42:09.402701564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nvlch,Uid:98333ef4-de58-4330-a351-f7e736ae9923,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e\"" Dec 12 17:42:09.410871 containerd[1929]: time="2025-12-12T17:42:09.410829216Z" level=info msg="CreateContainer within sandbox \"b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:42:09.416318 containerd[1929]: time="2025-12-12T17:42:09.416281777Z" level=info msg="CreateContainer within sandbox \"9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:42:09.445673 containerd[1929]: time="2025-12-12T17:42:09.445637466Z" level=info msg="Container a36a05a41d496d218e2856efd54a3139b922cfefbc9db08a4f61b6b51f6be7c2: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:42:09.454209 containerd[1929]: time="2025-12-12T17:42:09.454114998Z" level=info msg="Container e79d4c5a605d5fd0d0d11ebb0146e04caddaff7affdaed27ace6c919791f87b7: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:42:09.469138 containerd[1929]: time="2025-12-12T17:42:09.469104593Z" level=info msg="CreateContainer within sandbox \"b550f303185f49b18a4e3fad0b04dfbf434ca195d7f181c9b3d50f49e9145997\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a36a05a41d496d218e2856efd54a3139b922cfefbc9db08a4f61b6b51f6be7c2\"" Dec 12 17:42:09.469664 containerd[1929]: time="2025-12-12T17:42:09.469631125Z" level=info msg="StartContainer for \"a36a05a41d496d218e2856efd54a3139b922cfefbc9db08a4f61b6b51f6be7c2\"" Dec 12 17:42:09.470497 containerd[1929]: time="2025-12-12T17:42:09.470465503Z" level=info msg="connecting to shim a36a05a41d496d218e2856efd54a3139b922cfefbc9db08a4f61b6b51f6be7c2" address="unix:///run/containerd/s/c9727852668e8d204c880bfbd6420f68ddb55433785548a18ce0c3309d9e1212" protocol=ttrpc version=3 Dec 12 17:42:09.472324 containerd[1929]: time="2025-12-12T17:42:09.472256783Z" level=info msg="CreateContainer within sandbox \"9bcb87a13e46ae325e655f66deec54226868f4801aa77ed738233c7b6639608e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e79d4c5a605d5fd0d0d11ebb0146e04caddaff7affdaed27ace6c919791f87b7\"" Dec 12 17:42:09.473338 containerd[1929]: time="2025-12-12T17:42:09.473307414Z" level=info msg="StartContainer for \"e79d4c5a605d5fd0d0d11ebb0146e04caddaff7affdaed27ace6c919791f87b7\"" Dec 12 17:42:09.474329 containerd[1929]: time="2025-12-12T17:42:09.474306388Z" level=info msg="connecting to shim e79d4c5a605d5fd0d0d11ebb0146e04caddaff7affdaed27ace6c919791f87b7" 
address="unix:///run/containerd/s/44f1bfcfd61576487cd1d7f5f6f0ed8bbbd5bc37d2c32a97e681c0461facecb9" protocol=ttrpc version=3 Dec 12 17:42:09.495411 systemd[1]: Started cri-containerd-e79d4c5a605d5fd0d0d11ebb0146e04caddaff7affdaed27ace6c919791f87b7.scope - libcontainer container e79d4c5a605d5fd0d0d11ebb0146e04caddaff7affdaed27ace6c919791f87b7. Dec 12 17:42:09.499067 systemd[1]: Started cri-containerd-a36a05a41d496d218e2856efd54a3139b922cfefbc9db08a4f61b6b51f6be7c2.scope - libcontainer container a36a05a41d496d218e2856efd54a3139b922cfefbc9db08a4f61b6b51f6be7c2. Dec 12 17:42:09.544718 containerd[1929]: time="2025-12-12T17:42:09.544696146Z" level=info msg="StartContainer for \"e79d4c5a605d5fd0d0d11ebb0146e04caddaff7affdaed27ace6c919791f87b7\" returns successfully" Dec 12 17:42:09.544967 containerd[1929]: time="2025-12-12T17:42:09.544901998Z" level=info msg="StartContainer for \"a36a05a41d496d218e2856efd54a3139b922cfefbc9db08a4f61b6b51f6be7c2\" returns successfully" Dec 12 17:42:09.641347 systemd-networkd[1491]: vxlan.calico: Gained IPv6LL Dec 12 17:42:09.852708 containerd[1929]: time="2025-12-12T17:42:09.852619438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cpr2l,Uid:87bb8c15-a7ba-4def-b41d-8b2220421e40,Namespace:calico-system,Attempt:0,}" Dec 12 17:42:09.950900 systemd-networkd[1491]: cali8654eb086b4: Link UP Dec 12 17:42:09.953423 systemd-networkd[1491]: cali8654eb086b4: Gained carrier Dec 12 17:42:09.970517 containerd[1929]: 2025-12-12 17:42:09.889 [INFO][5405] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--c1c6b7e9cf-k8s-goldmane--666569f655--cpr2l-eth0 goldmane-666569f655- calico-system 87bb8c15-a7ba-4def-b41d-8b2220421e40 813 0 2025-12-12 17:41:43 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.2-a-c1c6b7e9cf goldmane-666569f655-cpr2l eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8654eb086b4 [] [] }} ContainerID="3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a" Namespace="calico-system" Pod="goldmane-666569f655-cpr2l" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-goldmane--666569f655--cpr2l-" Dec 12 17:42:09.970517 containerd[1929]: 2025-12-12 17:42:09.889 [INFO][5405] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a" Namespace="calico-system" Pod="goldmane-666569f655-cpr2l" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-goldmane--666569f655--cpr2l-eth0" Dec 12 17:42:09.970517 containerd[1929]: 2025-12-12 17:42:09.906 [INFO][5416] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a" HandleID="k8s-pod-network.3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-goldmane--666569f655--cpr2l-eth0" Dec 12 17:42:09.970517 containerd[1929]: 2025-12-12 17:42:09.906 [INFO][5416] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a" HandleID="k8s-pod-network.3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-goldmane--666569f655--cpr2l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x400024b2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-c1c6b7e9cf", "pod":"goldmane-666569f655-cpr2l", "timestamp":"2025-12-12 17:42:09.906556664 +0000 UTC"}, Hostname:"ci-4459.2.2-a-c1c6b7e9cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:42:09.970517 containerd[1929]: 2025-12-12 17:42:09.906 [INFO][5416] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:42:09.970517 containerd[1929]: 2025-12-12 17:42:09.906 [INFO][5416] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:42:09.970517 containerd[1929]: 2025-12-12 17:42:09.906 [INFO][5416] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-c1c6b7e9cf' Dec 12 17:42:09.970517 containerd[1929]: 2025-12-12 17:42:09.914 [INFO][5416] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.970517 containerd[1929]: 2025-12-12 17:42:09.919 [INFO][5416] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.970517 containerd[1929]: 2025-12-12 17:42:09.924 [INFO][5416] ipam/ipam.go 511: Trying affinity for 192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.970517 containerd[1929]: 2025-12-12 17:42:09.926 [INFO][5416] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.970517 containerd[1929]: 2025-12-12 17:42:09.929 [INFO][5416] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.970517 containerd[1929]: 2025-12-12 17:42:09.929 [INFO][5416] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.970517 containerd[1929]: 2025-12-12 17:42:09.931 [INFO][5416] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a Dec 12 17:42:09.970517 containerd[1929]: 2025-12-12 17:42:09.935 [INFO][5416] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.970517 containerd[1929]: 2025-12-12 17:42:09.945 [INFO][5416] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.199/26] block=192.168.69.192/26 handle="k8s-pod-network.3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.970517 containerd[1929]: 2025-12-12 17:42:09.945 [INFO][5416] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.199/26] handle="k8s-pod-network.3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:09.970517 containerd[1929]: 2025-12-12 17:42:09.945 [INFO][5416] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:42:09.970517 containerd[1929]: 2025-12-12 17:42:09.945 [INFO][5416] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.199/26] IPv6=[] ContainerID="3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a" HandleID="k8s-pod-network.3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-goldmane--666569f655--cpr2l-eth0" Dec 12 17:42:09.970939 containerd[1929]: 2025-12-12 17:42:09.947 [INFO][5405] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a" Namespace="calico-system" Pod="goldmane-666569f655-cpr2l" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-goldmane--666569f655--cpr2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--c1c6b7e9cf-k8s-goldmane--666569f655--cpr2l-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"87bb8c15-a7ba-4def-b41d-8b2220421e40", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-c1c6b7e9cf", ContainerID:"", Pod:"goldmane-666569f655-cpr2l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8654eb086b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:42:09.970939 containerd[1929]: 2025-12-12 17:42:09.947 [INFO][5405] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.199/32] ContainerID="3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a" Namespace="calico-system" Pod="goldmane-666569f655-cpr2l" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-goldmane--666569f655--cpr2l-eth0" Dec 12 17:42:09.970939 containerd[1929]: 2025-12-12 17:42:09.947 [INFO][5405] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8654eb086b4 ContainerID="3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a" Namespace="calico-system" Pod="goldmane-666569f655-cpr2l" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-goldmane--666569f655--cpr2l-eth0" Dec 12 17:42:09.970939 containerd[1929]: 2025-12-12 17:42:09.955 [INFO][5405] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a" Namespace="calico-system" Pod="goldmane-666569f655-cpr2l" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-goldmane--666569f655--cpr2l-eth0" Dec 12 17:42:09.970939 containerd[1929]: 2025-12-12 17:42:09.955 [INFO][5405] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a" 
Namespace="calico-system" Pod="goldmane-666569f655-cpr2l" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-goldmane--666569f655--cpr2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--c1c6b7e9cf-k8s-goldmane--666569f655--cpr2l-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"87bb8c15-a7ba-4def-b41d-8b2220421e40", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-c1c6b7e9cf", ContainerID:"3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a", Pod:"goldmane-666569f655-cpr2l", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.69.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8654eb086b4", MAC:"2e:41:29:4d:b2:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:42:09.970939 containerd[1929]: 2025-12-12 17:42:09.968 [INFO][5405] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a" Namespace="calico-system" Pod="goldmane-666569f655-cpr2l" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-goldmane--666569f655--cpr2l-eth0" Dec 12 17:42:10.009580 kubelet[3460]: E1212 17:42:10.009483 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-8vxtr" podUID="a4012a8d-5664-448e-9dfd-51493bbdec98" Dec 12 17:42:10.030222 systemd-networkd[1491]: caliedbde4548fe: Gained IPv6LL Dec 12 17:42:10.035805 kubelet[3460]: I1212 17:42:10.033931 3460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-596m2" podStartSLOduration=39.033919025 podStartE2EDuration="39.033919025s" podCreationTimestamp="2025-12-12 17:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:42:10.033362941 +0000 UTC m=+44.268780572" watchObservedRunningTime="2025-12-12 17:42:10.033919025 +0000 UTC m=+44.269336656" Dec 12 17:42:10.039617 containerd[1929]: time="2025-12-12T17:42:10.039583511Z" level=info msg="connecting to shim 3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a" 
address="unix:///run/containerd/s/01f66f3de9a0fcc0591768570dbd4c692fcd09999d97cb804f75b4f99961a522" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:42:10.073312 systemd[1]: Started cri-containerd-3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a.scope - libcontainer container 3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a. Dec 12 17:42:10.094999 kubelet[3460]: I1212 17:42:10.094954 3460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nvlch" podStartSLOduration=39.094940519 podStartE2EDuration="39.094940519s" podCreationTimestamp="2025-12-12 17:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:42:10.072529999 +0000 UTC m=+44.307947638" watchObservedRunningTime="2025-12-12 17:42:10.094940519 +0000 UTC m=+44.330358158" Dec 12 17:42:10.121368 containerd[1929]: time="2025-12-12T17:42:10.121332655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-cpr2l,Uid:87bb8c15-a7ba-4def-b41d-8b2220421e40,Namespace:calico-system,Attempt:0,} returns sandbox id \"3181a14f99667e43ccd6b0ed1ca8673d7feffff983a78dd500e7cbaac2d05c0a\"" Dec 12 17:42:10.124570 containerd[1929]: time="2025-12-12T17:42:10.124545150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:42:10.376059 containerd[1929]: time="2025-12-12T17:42:10.375913360Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:10.379273 containerd[1929]: time="2025-12-12T17:42:10.379244170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:42:10.379334 containerd[1929]: time="2025-12-12T17:42:10.379313483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 17:42:10.379559 kubelet[3460]: E1212 17:42:10.379464 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:42:10.379559 kubelet[3460]: E1212 17:42:10.379520 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:42:10.379809 kubelet[3460]: E1212 17:42:10.379740 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9gr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cpr2l_calico-system(87bb8c15-a7ba-4def-b41d-8b2220421e40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:10.381075 kubelet[3460]: E1212 17:42:10.380972 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cpr2l" podUID="87bb8c15-a7ba-4def-b41d-8b2220421e40" Dec 12 17:42:10.537346 systemd-networkd[1491]: 
califb1b377731a: Gained IPv6LL Dec 12 17:42:10.853618 containerd[1929]: time="2025-12-12T17:42:10.853576096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbq92,Uid:af9257cb-fecf-4ff2-8249-41f13bb32168,Namespace:calico-system,Attempt:0,}" Dec 12 17:42:10.943396 systemd-networkd[1491]: calib20a8455b64: Link UP Dec 12 17:42:10.943932 systemd-networkd[1491]: calib20a8455b64: Gained carrier Dec 12 17:42:10.964298 containerd[1929]: 2025-12-12 17:42:10.886 [INFO][5482] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--c1c6b7e9cf-k8s-csi--node--driver--tbq92-eth0 csi-node-driver- calico-system af9257cb-fecf-4ff2-8249-41f13bb32168 707 0 2025-12-12 17:41:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.2-a-c1c6b7e9cf csi-node-driver-tbq92 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib20a8455b64 [] [] }} ContainerID="dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee" Namespace="calico-system" Pod="csi-node-driver-tbq92" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-csi--node--driver--tbq92-" Dec 12 17:42:10.964298 containerd[1929]: 2025-12-12 17:42:10.886 [INFO][5482] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee" Namespace="calico-system" Pod="csi-node-driver-tbq92" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-csi--node--driver--tbq92-eth0" Dec 12 17:42:10.964298 containerd[1929]: 2025-12-12 17:42:10.902 [INFO][5494] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee" HandleID="k8s-pod-network.dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-csi--node--driver--tbq92-eth0" Dec 12 17:42:10.964298 containerd[1929]: 2025-12-12 17:42:10.903 [INFO][5494] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee" HandleID="k8s-pod-network.dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-csi--node--driver--tbq92-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-c1c6b7e9cf", "pod":"csi-node-driver-tbq92", "timestamp":"2025-12-12 17:42:10.902931235 +0000 UTC"}, Hostname:"ci-4459.2.2-a-c1c6b7e9cf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:42:10.964298 containerd[1929]: 2025-12-12 17:42:10.903 [INFO][5494] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:42:10.964298 containerd[1929]: 2025-12-12 17:42:10.903 [INFO][5494] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:42:10.964298 containerd[1929]: 2025-12-12 17:42:10.903 [INFO][5494] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-c1c6b7e9cf' Dec 12 17:42:10.964298 containerd[1929]: 2025-12-12 17:42:10.908 [INFO][5494] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:10.964298 containerd[1929]: 2025-12-12 17:42:10.913 [INFO][5494] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:10.964298 containerd[1929]: 2025-12-12 17:42:10.917 [INFO][5494] ipam/ipam.go 511: Trying affinity for 192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:10.964298 containerd[1929]: 2025-12-12 17:42:10.918 [INFO][5494] ipam/ipam.go 158: Attempting to load block cidr=192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:10.964298 containerd[1929]: 2025-12-12 17:42:10.920 [INFO][5494] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.69.192/26 host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:10.964298 containerd[1929]: 2025-12-12 17:42:10.920 [INFO][5494] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.69.192/26 handle="k8s-pod-network.dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:10.964298 containerd[1929]: 2025-12-12 17:42:10.922 [INFO][5494] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee Dec 12 17:42:10.964298 containerd[1929]: 2025-12-12 17:42:10.928 [INFO][5494] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.69.192/26 handle="k8s-pod-network.dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:10.964298 containerd[1929]: 2025-12-12 17:42:10.939 [INFO][5494] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.69.200/26] block=192.168.69.192/26 handle="k8s-pod-network.dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:10.964298 containerd[1929]: 2025-12-12 17:42:10.939 [INFO][5494] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.69.200/26] handle="k8s-pod-network.dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee" host="ci-4459.2.2-a-c1c6b7e9cf" Dec 12 17:42:10.964298 containerd[1929]: 2025-12-12 17:42:10.939 [INFO][5494] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:42:10.964298 containerd[1929]: 2025-12-12 17:42:10.939 [INFO][5494] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.69.200/26] IPv6=[] ContainerID="dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee" HandleID="k8s-pod-network.dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee" Workload="ci--4459.2.2--a--c1c6b7e9cf-k8s-csi--node--driver--tbq92-eth0" Dec 12 17:42:10.965640 containerd[1929]: 2025-12-12 17:42:10.941 [INFO][5482] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee" Namespace="calico-system" Pod="csi-node-driver-tbq92" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-csi--node--driver--tbq92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--c1c6b7e9cf-k8s-csi--node--driver--tbq92-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"af9257cb-fecf-4ff2-8249-41f13bb32168", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-c1c6b7e9cf", ContainerID:"", Pod:"csi-node-driver-tbq92", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib20a8455b64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:42:10.965640 containerd[1929]: 2025-12-12 17:42:10.941 [INFO][5482] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.69.200/32] ContainerID="dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee" Namespace="calico-system" Pod="csi-node-driver-tbq92" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-csi--node--driver--tbq92-eth0" Dec 12 17:42:10.965640 containerd[1929]: 2025-12-12 17:42:10.941 [INFO][5482] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib20a8455b64 ContainerID="dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee" Namespace="calico-system" Pod="csi-node-driver-tbq92" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-csi--node--driver--tbq92-eth0" Dec 12 17:42:10.965640 containerd[1929]: 2025-12-12 17:42:10.945 [INFO][5482] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee" Namespace="calico-system" Pod="csi-node-driver-tbq92" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-csi--node--driver--tbq92-eth0" Dec 12 17:42:10.965640 containerd[1929]: 2025-12-12 17:42:10.946 [INFO][5482] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee" Namespace="calico-system" Pod="csi-node-driver-tbq92" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-csi--node--driver--tbq92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--c1c6b7e9cf-k8s-csi--node--driver--tbq92-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"af9257cb-fecf-4ff2-8249-41f13bb32168", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 41, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-c1c6b7e9cf", ContainerID:"dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee", Pod:"csi-node-driver-tbq92", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.69.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib20a8455b64", MAC:"7a:0b:45:f4:54:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:42:10.965640 containerd[1929]: 2025-12-12 17:42:10.962 [INFO][5482] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee" Namespace="calico-system" Pod="csi-node-driver-tbq92" WorkloadEndpoint="ci--4459.2.2--a--c1c6b7e9cf-k8s-csi--node--driver--tbq92-eth0" Dec 12 17:42:11.007497 containerd[1929]: time="2025-12-12T17:42:11.007268064Z" level=info msg="connecting to shim dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee" address="unix:///run/containerd/s/2652e397e1639f522eaa110329145bb559e7ba9ff66fafd2553305b1fa97570f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:42:11.024534 kubelet[3460]: E1212 17:42:11.023761 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cpr2l" podUID="87bb8c15-a7ba-4def-b41d-8b2220421e40" Dec 12 17:42:11.039294 systemd[1]: Started cri-containerd-dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee.scope - libcontainer container dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee. 
Dec 12 17:42:11.067088 containerd[1929]: time="2025-12-12T17:42:11.067054106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbq92,Uid:af9257cb-fecf-4ff2-8249-41f13bb32168,Namespace:calico-system,Attempt:0,} returns sandbox id \"dee9bea188640ec2765d06b9f501ff8fba645a856ea44203135343a79abe73ee\"" Dec 12 17:42:11.068452 containerd[1929]: time="2025-12-12T17:42:11.068423385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:42:11.337156 containerd[1929]: time="2025-12-12T17:42:11.337068048Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:11.340761 containerd[1929]: time="2025-12-12T17:42:11.340727113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:42:11.340761 containerd[1929]: time="2025-12-12T17:42:11.340779778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 17:42:11.340941 kubelet[3460]: E1212 17:42:11.340902 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:42:11.340987 kubelet[3460]: E1212 17:42:11.340946 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:42:11.341221 kubelet[3460]: E1212 17:42:11.341062 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwcqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tbq92_calico-system(af9257cb-fecf-4ff2-8249-41f13bb32168): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:11.343196 containerd[1929]: time="2025-12-12T17:42:11.343115022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:42:11.610668 containerd[1929]: time="2025-12-12T17:42:11.610469448Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:11.614512 containerd[1929]: time="2025-12-12T17:42:11.614431944Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:42:11.614512 containerd[1929]: time="2025-12-12T17:42:11.614492473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 17:42:11.614655 kubelet[3460]: E1212 17:42:11.614607 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:42:11.614810 kubelet[3460]: E1212 17:42:11.614657 3460 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:42:11.614810 kubelet[3460]: E1212 17:42:11.614754 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwcqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tbq92_calico-system(af9257cb-fecf-4ff2-8249-41f13bb32168): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:11.615943 kubelet[3460]: E1212 17:42:11.615907 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-tbq92" podUID="af9257cb-fecf-4ff2-8249-41f13bb32168" Dec 12 17:42:11.881304 systemd-networkd[1491]: cali8654eb086b4: Gained IPv6LL Dec 12 17:42:12.026766 kubelet[3460]: E1212 17:42:12.026727 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cpr2l" podUID="87bb8c15-a7ba-4def-b41d-8b2220421e40" Dec 12 17:42:12.027995 kubelet[3460]: E1212 17:42:12.027664 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tbq92" podUID="af9257cb-fecf-4ff2-8249-41f13bb32168" Dec 12 17:42:12.265328 systemd-networkd[1491]: calib20a8455b64: Gained IPv6LL Dec 12 17:42:13.029085 kubelet[3460]: E1212 17:42:13.028930 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tbq92" podUID="af9257cb-fecf-4ff2-8249-41f13bb32168" Dec 12 17:42:15.854610 containerd[1929]: time="2025-12-12T17:42:15.854405635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:42:16.136219 containerd[1929]: time="2025-12-12T17:42:16.135955876Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:16.139872 containerd[1929]: time="2025-12-12T17:42:16.139765476Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:42:16.139872 containerd[1929]: time="2025-12-12T17:42:16.139830486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 17:42:16.140555 kubelet[3460]: E1212 17:42:16.140012 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:42:16.140555 kubelet[3460]: E1212 17:42:16.140060 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:42:16.140555 kubelet[3460]: E1212 17:42:16.140207 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2525d8452e6e40b4b30dca31db45508c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vw67r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67ffbcfc86-jjn58_calico-system(99d456ff-41e9-43a8-8405-1dee55d2f1c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:16.143225 containerd[1929]: time="2025-12-12T17:42:16.143195340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:42:16.404385 containerd[1929]: time="2025-12-12T17:42:16.404233953Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:16.407615 containerd[1929]: time="2025-12-12T17:42:16.407543806Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:42:16.407615 containerd[1929]: time="2025-12-12T17:42:16.407582222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 17:42:16.407780 kubelet[3460]: E1212 17:42:16.407741 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:42:16.407825 kubelet[3460]: E1212 17:42:16.407790 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:42:16.407961 kubelet[3460]: E1212 17:42:16.407899 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vw67r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67ffbcfc86-jjn58_calico-system(99d456ff-41e9-43a8-8405-1dee55d2f1c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:16.409359 kubelet[3460]: E1212 17:42:16.409324 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67ffbcfc86-jjn58" podUID="99d456ff-41e9-43a8-8405-1dee55d2f1c2" Dec 12 17:42:20.856890 containerd[1929]: time="2025-12-12T17:42:20.856853041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:42:21.146596 containerd[1929]: time="2025-12-12T17:42:21.146463148Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:21.153783 containerd[1929]: time="2025-12-12T17:42:21.153732742Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:42:21.153865 containerd[1929]: time="2025-12-12T17:42:21.153808848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:42:21.154009 kubelet[3460]: E1212 17:42:21.153966 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:42:21.154611 kubelet[3460]: E1212 17:42:21.154019 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:42:21.154611 kubelet[3460]: E1212 17:42:21.154306 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wgzcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c586fd54-zk6dm_calico-apiserver(ca5ba48c-4a22-4538-aeda-0de712e65e58): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:21.154719 containerd[1929]: time="2025-12-12T17:42:21.154268362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:42:21.155799 kubelet[3460]: E1212 17:42:21.155770 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-zk6dm" podUID="ca5ba48c-4a22-4538-aeda-0de712e65e58" Dec 12 17:42:21.417061 containerd[1929]: time="2025-12-12T17:42:21.416707029Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:21.419999 containerd[1929]: time="2025-12-12T17:42:21.419957550Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:42:21.420166 containerd[1929]: 
time="2025-12-12T17:42:21.420041863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:42:21.420220 kubelet[3460]: E1212 17:42:21.420162 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:42:21.420220 kubelet[3460]: E1212 17:42:21.420210 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:42:21.420416 kubelet[3460]: E1212 17:42:21.420316 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvllc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c586fd54-8vxtr_calico-apiserver(a4012a8d-5664-448e-9dfd-51493bbdec98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:21.422244 kubelet[3460]: E1212 17:42:21.422218 3460 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-8vxtr" podUID="a4012a8d-5664-448e-9dfd-51493bbdec98" Dec 12 17:42:22.853211 containerd[1929]: time="2025-12-12T17:42:22.853157046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:42:23.134645 containerd[1929]: time="2025-12-12T17:42:23.134513911Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:23.138063 containerd[1929]: time="2025-12-12T17:42:23.138023014Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:42:23.138063 containerd[1929]: time="2025-12-12T17:42:23.138082047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 17:42:23.138356 kubelet[3460]: E1212 17:42:23.138319 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:42:23.138881 kubelet[3460]: E1212 17:42:23.138409 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:42:23.138881 kubelet[3460]: E1212 17:42:23.138542 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9gr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cpr2l_calico-system(87bb8c15-a7ba-4def-b41d-8b2220421e40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:23.139877 kubelet[3460]: E1212 17:42:23.139847 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cpr2l" podUID="87bb8c15-a7ba-4def-b41d-8b2220421e40" Dec 12 17:42:23.855196 containerd[1929]: 
time="2025-12-12T17:42:23.854962682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:42:24.133708 containerd[1929]: time="2025-12-12T17:42:24.133449524Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:24.136838 containerd[1929]: time="2025-12-12T17:42:24.136802575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:42:24.136912 containerd[1929]: time="2025-12-12T17:42:24.136880904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 17:42:24.137094 kubelet[3460]: E1212 17:42:24.137032 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:42:24.137147 kubelet[3460]: E1212 17:42:24.137105 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:42:24.137559 kubelet[3460]: E1212 17:42:24.137246 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ltq49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f5b48977b-ckrpk_calico-system(174ab840-5c2d-412d-ae9d-b5c5ec64e11d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:24.138458 kubelet[3460]: E1212 17:42:24.138427 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5b48977b-ckrpk" podUID="174ab840-5c2d-412d-ae9d-b5c5ec64e11d" Dec 12 17:42:25.853500 containerd[1929]: time="2025-12-12T17:42:25.853414543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:42:26.141461 containerd[1929]: time="2025-12-12T17:42:26.141330956Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:26.145223 containerd[1929]: time="2025-12-12T17:42:26.145159114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:42:26.145381 containerd[1929]: time="2025-12-12T17:42:26.145190203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 17:42:26.145589 kubelet[3460]: E1212 17:42:26.145551 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:42:26.145884 kubelet[3460]: E1212 17:42:26.145601 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:42:26.145884 kubelet[3460]: E1212 17:42:26.145711 3460 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwcqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tbq92_calico-system(af9257cb-fecf-4ff2-8249-41f13bb32168): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:26.148686 containerd[1929]: time="2025-12-12T17:42:26.148445371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:42:26.439291 containerd[1929]: time="2025-12-12T17:42:26.439105557Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:26.442942 containerd[1929]: time="2025-12-12T17:42:26.442849233Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:42:26.442942 containerd[1929]: time="2025-12-12T17:42:26.442904250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 17:42:26.443097 kubelet[3460]: E1212 17:42:26.443053 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:42:26.443157 kubelet[3460]: E1212 17:42:26.443103 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:42:26.443593 kubelet[3460]: E1212 17:42:26.443247 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwcqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tbq92_calico-system(af9257cb-fecf-4ff2-8249-41f13bb32168): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:26.445320 kubelet[3460]: E1212 17:42:26.445291 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tbq92" podUID="af9257cb-fecf-4ff2-8249-41f13bb32168" Dec 12 17:42:28.854498 kubelet[3460]: E1212 17:42:28.853948 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67ffbcfc86-jjn58" podUID="99d456ff-41e9-43a8-8405-1dee55d2f1c2" Dec 12 17:42:34.853849 kubelet[3460]: E1212 17:42:34.853777 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-8vxtr" podUID="a4012a8d-5664-448e-9dfd-51493bbdec98" Dec 12 17:42:34.855030 kubelet[3460]: E1212 17:42:34.855001 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cpr2l" podUID="87bb8c15-a7ba-4def-b41d-8b2220421e40" Dec 12 17:42:35.858194 kubelet[3460]: E1212 17:42:35.856610 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-zk6dm" podUID="ca5ba48c-4a22-4538-aeda-0de712e65e58" Dec 12 17:42:38.853510 kubelet[3460]: E1212 17:42:38.853463 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5b48977b-ckrpk" podUID="174ab840-5c2d-412d-ae9d-b5c5ec64e11d" Dec 12 17:42:38.854485 kubelet[3460]: E1212 17:42:38.854445 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tbq92" podUID="af9257cb-fecf-4ff2-8249-41f13bb32168" Dec 12 17:42:40.854828 containerd[1929]: time="2025-12-12T17:42:40.854743953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:42:41.125053 containerd[1929]: time="2025-12-12T17:42:41.124777704Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:41.128384 containerd[1929]: time="2025-12-12T17:42:41.128342028Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:42:41.128471 containerd[1929]: time="2025-12-12T17:42:41.128410262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 17:42:41.128618 kubelet[3460]: E1212 17:42:41.128573 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:42:41.129060 kubelet[3460]: E1212 17:42:41.128902 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:42:41.129060 kubelet[3460]: E1212 17:42:41.129019 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2525d8452e6e40b4b30dca31db45508c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vw67r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67ffbcfc86-jjn58_calico-system(99d456ff-41e9-43a8-8405-1dee55d2f1c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:41.131741 containerd[1929]: time="2025-12-12T17:42:41.131713565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:42:41.420729 containerd[1929]: time="2025-12-12T17:42:41.420423173Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:41.423781 containerd[1929]: time="2025-12-12T17:42:41.423741300Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:42:41.423857 containerd[1929]: time="2025-12-12T17:42:41.423840190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 17:42:41.424061 kubelet[3460]: E1212 17:42:41.424004 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:42:41.424187 kubelet[3460]: E1212 17:42:41.424154 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:42:41.424737 kubelet[3460]: E1212 17:42:41.424383 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vw67r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67ffbcfc86-jjn58_calico-system(99d456ff-41e9-43a8-8405-1dee55d2f1c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:41.425859 kubelet[3460]: E1212 17:42:41.425826 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67ffbcfc86-jjn58" podUID="99d456ff-41e9-43a8-8405-1dee55d2f1c2" Dec 12 17:42:46.854094 containerd[1929]: time="2025-12-12T17:42:46.854058526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:42:47.103540 containerd[1929]: time="2025-12-12T17:42:47.103340205Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 
17:42:47.109046 containerd[1929]: time="2025-12-12T17:42:47.108891175Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:42:47.109046 containerd[1929]: time="2025-12-12T17:42:47.108961824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 17:42:47.109289 kubelet[3460]: E1212 17:42:47.109258 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:42:47.109992 kubelet[3460]: E1212 17:42:47.109335 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:42:47.109992 kubelet[3460]: E1212 17:42:47.109543 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9gr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cpr2l_calico-system(87bb8c15-a7ba-4def-b41d-8b2220421e40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:47.110292 containerd[1929]: time="2025-12-12T17:42:47.109984419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:42:47.110848 kubelet[3460]: E1212 17:42:47.110822 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cpr2l" podUID="87bb8c15-a7ba-4def-b41d-8b2220421e40" Dec 12 17:42:47.360063 containerd[1929]: time="2025-12-12T17:42:47.359447054Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:47.378616 containerd[1929]: time="2025-12-12T17:42:47.378504616Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:42:47.378616 containerd[1929]: time="2025-12-12T17:42:47.378592018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:42:47.378886 kubelet[3460]: E1212 17:42:47.378846 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:42:47.379153 kubelet[3460]: E1212 17:42:47.378993 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:42:47.379838 kubelet[3460]: E1212 17:42:47.379773 3460 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvllc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c586fd54-8vxtr_calico-apiserver(a4012a8d-5664-448e-9dfd-51493bbdec98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:47.381001 kubelet[3460]: E1212 17:42:47.380965 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-8vxtr" podUID="a4012a8d-5664-448e-9dfd-51493bbdec98" Dec 12 17:42:48.854903 containerd[1929]: time="2025-12-12T17:42:48.854855709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:42:49.110539 containerd[1929]: time="2025-12-12T17:42:49.110413654Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:49.114140 containerd[1929]: time="2025-12-12T17:42:49.114103529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:42:49.114217 containerd[1929]: time="2025-12-12T17:42:49.114179395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:42:49.114424 kubelet[3460]: E1212 17:42:49.114359 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:42:49.114424 kubelet[3460]: E1212 17:42:49.114397 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:42:49.114920 kubelet[3460]: E1212 17:42:49.114871 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wgzcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c586fd54-zk6dm_calico-apiserver(ca5ba48c-4a22-4538-aeda-0de712e65e58): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:49.116039 kubelet[3460]: E1212 17:42:49.116001 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-zk6dm" podUID="ca5ba48c-4a22-4538-aeda-0de712e65e58" Dec 12 17:42:51.854104 containerd[1929]: time="2025-12-12T17:42:51.854013654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:42:52.096932 containerd[1929]: time="2025-12-12T17:42:52.096878752Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:52.101403 containerd[1929]: time="2025-12-12T17:42:52.101364061Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:42:52.101539 containerd[1929]: time="2025-12-12T17:42:52.101426966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 17:42:52.101590 kubelet[3460]: E1212 17:42:52.101528 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:42:52.101590 kubelet[3460]: E1212 17:42:52.101565 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:42:52.101973 kubelet[3460]: E1212 17:42:52.101671 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwcqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tbq92_calico-system(af9257cb-fecf-4ff2-8249-41f13bb32168): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:52.103740 containerd[1929]: time="2025-12-12T17:42:52.103710298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:42:52.410454 containerd[1929]: time="2025-12-12T17:42:52.410276486Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:52.414070 containerd[1929]: time="2025-12-12T17:42:52.413965401Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:42:52.414070 containerd[1929]: time="2025-12-12T17:42:52.413977209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 17:42:52.414298 kubelet[3460]: E1212 17:42:52.414164 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:42:52.414352 kubelet[3460]: E1212 17:42:52.414311 3460 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:42:52.414460 kubelet[3460]: E1212 17:42:52.414419 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwcqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tbq92_calico-system(af9257cb-fecf-4ff2-8249-41f13bb32168): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:52.422321 kubelet[3460]: E1212 17:42:52.416473 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-tbq92" podUID="af9257cb-fecf-4ff2-8249-41f13bb32168" Dec 12 17:42:52.855430 containerd[1929]: time="2025-12-12T17:42:52.855319624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:42:53.129920 containerd[1929]: time="2025-12-12T17:42:53.129409684Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:42:53.135397 containerd[1929]: time="2025-12-12T17:42:53.135295521Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:42:53.135397 containerd[1929]: time="2025-12-12T17:42:53.135374963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 17:42:53.135665 kubelet[3460]: E1212 17:42:53.135614 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:42:53.135665 kubelet[3460]: E1212 17:42:53.135671 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:42:53.136002 kubelet[3460]: E1212 17:42:53.135782 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ltq49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f5b48977b-ckrpk_calico-system(174ab840-5c2d-412d-ae9d-b5c5ec64e11d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:42:53.137230 kubelet[3460]: E1212 17:42:53.137202 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5b48977b-ckrpk" podUID="174ab840-5c2d-412d-ae9d-b5c5ec64e11d" Dec 12 17:42:53.856760 kubelet[3460]: E1212 17:42:53.856711 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67ffbcfc86-jjn58" podUID="99d456ff-41e9-43a8-8405-1dee55d2f1c2" Dec 12 17:42:59.854340 kubelet[3460]: E1212 17:42:59.853522 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-8vxtr" podUID="a4012a8d-5664-448e-9dfd-51493bbdec98" Dec 12 17:43:01.854495 kubelet[3460]: E1212 17:43:01.854322 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cpr2l" podUID="87bb8c15-a7ba-4def-b41d-8b2220421e40" Dec 12 17:43:03.854136 kubelet[3460]: E1212 17:43:03.854039 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-zk6dm" podUID="ca5ba48c-4a22-4538-aeda-0de712e65e58" Dec 12 17:43:03.854136 kubelet[3460]: E1212 17:43:03.854053 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5b48977b-ckrpk" podUID="174ab840-5c2d-412d-ae9d-b5c5ec64e11d" Dec 12 17:43:07.854929 kubelet[3460]: E1212 17:43:07.854402 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tbq92" podUID="af9257cb-fecf-4ff2-8249-41f13bb32168" Dec 12 17:43:08.853643 kubelet[3460]: E1212 17:43:08.853604 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67ffbcfc86-jjn58" podUID="99d456ff-41e9-43a8-8405-1dee55d2f1c2" Dec 12 17:43:12.330702 systemd[1]: Started sshd@7-10.200.20.14:22-10.200.16.10:45156.service - OpenSSH per-connection server daemon (10.200.16.10:45156). Dec 12 17:43:12.829224 sshd[5656]: Accepted publickey for core from 10.200.16.10 port 45156 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4 Dec 12 17:43:12.831007 sshd-session[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:43:12.838249 systemd-logind[1864]: New session 10 of user core. Dec 12 17:43:12.842323 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 17:43:12.852702 kubelet[3460]: E1212 17:43:12.852665 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cpr2l" podUID="87bb8c15-a7ba-4def-b41d-8b2220421e40" Dec 12 17:43:13.338717 sshd[5659]: Connection closed by 10.200.16.10 port 45156 Dec 12 17:43:13.338632 sshd-session[5656]: pam_unix(sshd:session): session closed for user core Dec 12 17:43:13.343918 systemd[1]: sshd@7-10.200.20.14:22-10.200.16.10:45156.service: Deactivated successfully. Dec 12 17:43:13.343951 systemd-logind[1864]: Session 10 logged out. Waiting for processes to exit. Dec 12 17:43:13.346810 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 17:43:13.350293 systemd-logind[1864]: Removed session 10. 
Dec 12 17:43:14.853532 kubelet[3460]: E1212 17:43:14.853476 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-8vxtr" podUID="a4012a8d-5664-448e-9dfd-51493bbdec98" Dec 12 17:43:15.855983 kubelet[3460]: E1212 17:43:15.855927 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5b48977b-ckrpk" podUID="174ab840-5c2d-412d-ae9d-b5c5ec64e11d" Dec 12 17:43:16.855287 kubelet[3460]: E1212 17:43:16.855244 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-zk6dm" podUID="ca5ba48c-4a22-4538-aeda-0de712e65e58" Dec 12 17:43:18.430494 systemd[1]: Started sshd@8-10.200.20.14:22-10.200.16.10:45162.service - OpenSSH per-connection server daemon (10.200.16.10:45162). Dec 12 17:43:18.927768 sshd[5673]: Accepted publickey for core from 10.200.16.10 port 45162 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4 Dec 12 17:43:18.929286 sshd-session[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:43:18.934510 systemd-logind[1864]: New session 11 of user core. Dec 12 17:43:18.937478 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 17:43:19.373155 sshd[5676]: Connection closed by 10.200.16.10 port 45162 Dec 12 17:43:19.394724 sshd-session[5673]: pam_unix(sshd:session): session closed for user core Dec 12 17:43:19.398640 systemd[1]: sshd@8-10.200.20.14:22-10.200.16.10:45162.service: Deactivated successfully. Dec 12 17:43:19.402782 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 17:43:19.403984 systemd-logind[1864]: Session 11 logged out. Waiting for processes to exit. Dec 12 17:43:19.410946 systemd-logind[1864]: Removed session 11. 
Dec 12 17:43:19.855110 kubelet[3460]: E1212 17:43:19.854979 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67ffbcfc86-jjn58" podUID="99d456ff-41e9-43a8-8405-1dee55d2f1c2" Dec 12 17:43:19.855110 kubelet[3460]: E1212 17:43:19.855080 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tbq92" podUID="af9257cb-fecf-4ff2-8249-41f13bb32168" Dec 12 17:43:24.461582 systemd[1]: Started sshd@9-10.200.20.14:22-10.200.16.10:51516.service - OpenSSH per-connection server daemon (10.200.16.10:51516). Dec 12 17:43:24.957167 sshd[5691]: Accepted publickey for core from 10.200.16.10 port 51516 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4 Dec 12 17:43:24.958103 sshd-session[5691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:43:24.962106 systemd-logind[1864]: New session 12 of user core. Dec 12 17:43:24.971462 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 17:43:25.384545 sshd[5694]: Connection closed by 10.200.16.10 port 51516 Dec 12 17:43:25.385159 sshd-session[5691]: pam_unix(sshd:session): session closed for user core Dec 12 17:43:25.388707 systemd-logind[1864]: Session 12 logged out. Waiting for processes to exit. Dec 12 17:43:25.389373 systemd[1]: sshd@9-10.200.20.14:22-10.200.16.10:51516.service: Deactivated successfully. Dec 12 17:43:25.391075 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 17:43:25.393181 systemd-logind[1864]: Removed session 12. Dec 12 17:43:25.475378 systemd[1]: Started sshd@10-10.200.20.14:22-10.200.16.10:51520.service - OpenSSH per-connection server daemon (10.200.16.10:51520). 
Dec 12 17:43:25.854030 kubelet[3460]: E1212 17:43:25.853835 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cpr2l" podUID="87bb8c15-a7ba-4def-b41d-8b2220421e40" Dec 12 17:43:25.969999 sshd[5707]: Accepted publickey for core from 10.200.16.10 port 51520 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4 Dec 12 17:43:25.971116 sshd-session[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:43:25.975139 systemd-logind[1864]: New session 13 of user core. Dec 12 17:43:25.980309 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 17:43:26.395696 sshd[5712]: Connection closed by 10.200.16.10 port 51520 Dec 12 17:43:26.395606 sshd-session[5707]: pam_unix(sshd:session): session closed for user core Dec 12 17:43:26.400451 systemd[1]: sshd@10-10.200.20.14:22-10.200.16.10:51520.service: Deactivated successfully. Dec 12 17:43:26.403847 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 17:43:26.404980 systemd-logind[1864]: Session 13 logged out. Waiting for processes to exit. Dec 12 17:43:26.406348 systemd-logind[1864]: Removed session 13. Dec 12 17:43:26.476768 systemd[1]: Started sshd@11-10.200.20.14:22-10.200.16.10:51532.service - OpenSSH per-connection server daemon (10.200.16.10:51532). Dec 12 17:43:26.935204 sshd[5722]: Accepted publickey for core from 10.200.16.10 port 51532 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4 Dec 12 17:43:26.936757 sshd-session[5722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:43:26.942438 systemd-logind[1864]: New session 14 of user core. Dec 12 17:43:26.957501 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 17:43:27.355355 sshd[5725]: Connection closed by 10.200.16.10 port 51532 Dec 12 17:43:27.354696 sshd-session[5722]: pam_unix(sshd:session): session closed for user core Dec 12 17:43:27.359073 systemd-logind[1864]: Session 14 logged out. Waiting for processes to exit. Dec 12 17:43:27.359849 systemd[1]: sshd@11-10.200.20.14:22-10.200.16.10:51532.service: Deactivated successfully. Dec 12 17:43:27.361967 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 17:43:27.363619 systemd-logind[1864]: Removed session 14. 
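Between attempts, kubelet's image-pull back-off is visible directly in the timestamps: the apiserver pull for pod calico-apiserver-5c586fd54-zk6dm fails at 17:42:49, back-off skips are logged at 17:43:03 and 17:43:16, and a fresh pull is only attempted at 17:43:29. This is consistent with kubelet's exponential image back-off, which doubles the delay after each failure up to a 300-second cap; the 10-second base below is the upstream default, stated as background rather than read from this log:

# Expected per-image back-off delays under kubelet's defaults:
# 10 s base, doubling per failed pull, capped at 300 s (5 minutes).
BASE, CAP = 10, 300
delays = [min(BASE * 2**n, CAP) for n in range(8)]
print(delays)  # [10, 20, 40, 80, 160, 300, 300, 300]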
Dec 12 17:43:27.855129 containerd[1929]: time="2025-12-12T17:43:27.854820649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:43:28.134898 containerd[1929]: time="2025-12-12T17:43:28.134517727Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:43:28.138328 containerd[1929]: time="2025-12-12T17:43:28.138222273Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:43:28.138328 containerd[1929]: time="2025-12-12T17:43:28.138302635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:43:28.138509 kubelet[3460]: E1212 17:43:28.138439 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:43:28.139004 kubelet[3460]: E1212 17:43:28.138509 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:43:28.139004 kubelet[3460]: E1212 17:43:28.138624 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvllc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c586fd54-8vxtr_calico-apiserver(a4012a8d-5664-448e-9dfd-51493bbdec98): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:43:28.139772 kubelet[3460]: E1212 17:43:28.139747 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-8vxtr" podUID="a4012a8d-5664-448e-9dfd-51493bbdec98" Dec 12 17:43:29.854077 containerd[1929]: time="2025-12-12T17:43:29.854032071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:43:30.132811 containerd[1929]: time="2025-12-12T17:43:30.132434328Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:43:30.136267 containerd[1929]: time="2025-12-12T17:43:30.136152035Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:43:30.136267 containerd[1929]: time="2025-12-12T17:43:30.136191932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:43:30.136547 kubelet[3460]: E1212 17:43:30.136496 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:43:30.136845 kubelet[3460]: E1212 17:43:30.136560 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:43:30.136845 kubelet[3460]: E1212 17:43:30.136675 
3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wgzcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c586fd54-zk6dm_calico-apiserver(ca5ba48c-4a22-4538-aeda-0de712e65e58): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:43:30.138124 kubelet[3460]: E1212 17:43:30.138095 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-zk6dm" podUID="ca5ba48c-4a22-4538-aeda-0de712e65e58" Dec 12 17:43:30.853191 kubelet[3460]: E1212 17:43:30.852980 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5b48977b-ckrpk" podUID="174ab840-5c2d-412d-ae9d-b5c5ec64e11d" Dec 12 17:43:30.853477 containerd[1929]: time="2025-12-12T17:43:30.853166107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:43:31.129983 containerd[1929]: time="2025-12-12T17:43:31.129855846Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:43:31.133335 containerd[1929]: time="2025-12-12T17:43:31.133297811Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:43:31.133419 containerd[1929]: time="2025-12-12T17:43:31.133371900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 17:43:31.133747 kubelet[3460]: E1212 17:43:31.133554 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:43:31.133747 kubelet[3460]: E1212 17:43:31.133612 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:43:31.133747 kubelet[3460]: E1212 17:43:31.133720 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2525d8452e6e40b4b30dca31db45508c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vw67r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67ffbcfc86-jjn58_calico-system(99d456ff-41e9-43a8-8405-1dee55d2f1c2): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:43:31.136576 containerd[1929]: time="2025-12-12T17:43:31.136554611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:43:31.376419 containerd[1929]: time="2025-12-12T17:43:31.376339936Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:43:31.380563 containerd[1929]: time="2025-12-12T17:43:31.380417074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:43:31.380563 containerd[1929]: time="2025-12-12T17:43:31.380475812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 17:43:31.380868 kubelet[3460]: E1212 17:43:31.380781 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:43:31.380868 kubelet[3460]: E1212 17:43:31.380843 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:43:31.381232 kubelet[3460]: E1212 17:43:31.381052 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vw67r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67ffbcfc86-jjn58_calico-system(99d456ff-41e9-43a8-8405-1dee55d2f1c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:43:31.382284 kubelet[3460]: E1212 17:43:31.382241 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67ffbcfc86-jjn58" podUID="99d456ff-41e9-43a8-8405-1dee55d2f1c2" Dec 12 17:43:32.435275 systemd[1]: Started sshd@12-10.200.20.14:22-10.200.16.10:55532.service - OpenSSH per-connection server daemon (10.200.16.10:55532). 
Dec 12 17:43:32.889589 sshd[5751]: Accepted publickey for core from 10.200.16.10 port 55532 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4 Dec 12 17:43:32.890838 sshd-session[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:43:32.895089 systemd-logind[1864]: New session 15 of user core. Dec 12 17:43:32.901463 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 17:43:33.291633 sshd[5755]: Connection closed by 10.200.16.10 port 55532 Dec 12 17:43:33.290708 sshd-session[5751]: pam_unix(sshd:session): session closed for user core Dec 12 17:43:33.297606 systemd-logind[1864]: Session 15 logged out. Waiting for processes to exit. Dec 12 17:43:33.298064 systemd[1]: sshd@12-10.200.20.14:22-10.200.16.10:55532.service: Deactivated successfully. Dec 12 17:43:33.301868 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 17:43:33.304514 systemd-logind[1864]: Removed session 15. Dec 12 17:43:33.853014 containerd[1929]: time="2025-12-12T17:43:33.852898855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:43:34.126741 containerd[1929]: time="2025-12-12T17:43:34.126441808Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:43:34.130217 containerd[1929]: time="2025-12-12T17:43:34.130186916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:43:34.130275 containerd[1929]: time="2025-12-12T17:43:34.130260766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 17:43:34.130464 kubelet[3460]: E1212 17:43:34.130384 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:43:34.130464 kubelet[3460]: E1212 17:43:34.130450 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:43:34.131087 kubelet[3460]: E1212 17:43:34.131042 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwcqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tbq92_calico-system(af9257cb-fecf-4ff2-8249-41f13bb32168): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:43:34.132993 containerd[1929]: time="2025-12-12T17:43:34.132966155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 12 17:43:34.434076 containerd[1929]: time="2025-12-12T17:43:34.433536522Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:43:34.437733 containerd[1929]: time="2025-12-12T17:43:34.437700872Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 12 17:43:34.437902 containerd[1929]: time="2025-12-12T17:43:34.437754273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 12 17:43:34.438253 kubelet[3460]: E1212 17:43:34.437988 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 17:43:34.438253 kubelet[3460]: E1212 17:43:34.438031 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 17:43:34.438253 kubelet[3460]: E1212 17:43:34.438136 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwcqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tbq92_calico-system(af9257cb-fecf-4ff2-8249-41f13bb32168): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:43:34.439436 kubelet[3460]: E1212 17:43:34.439403 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tbq92" podUID="af9257cb-fecf-4ff2-8249-41f13bb32168"
Dec 12 17:43:37.854338 containerd[1929]: time="2025-12-12T17:43:37.853973413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 12 17:43:38.105849 containerd[1929]: time="2025-12-12T17:43:38.105594825Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:43:38.109005 containerd[1929]: time="2025-12-12T17:43:38.108907172Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 12 17:43:38.109005 containerd[1929]: time="2025-12-12T17:43:38.108964093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 12 17:43:38.109150 kubelet[3460]: E1212 17:43:38.109108 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 17:43:38.109487 kubelet[3460]: E1212 17:43:38.109159 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 17:43:38.109487 kubelet[3460]: E1212 17:43:38.109281 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c9gr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-cpr2l_calico-system(87bb8c15-a7ba-4def-b41d-8b2220421e40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:43:38.111012 kubelet[3460]: E1212 17:43:38.110689 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cpr2l" podUID="87bb8c15-a7ba-4def-b41d-8b2220421e40"
Dec 12 17:43:38.375387 systemd[1]: Started sshd@13-10.200.20.14:22-10.200.16.10:55542.service - OpenSSH per-connection server daemon (10.200.16.10:55542).
Dec 12 17:43:38.837140 sshd[5807]: Accepted publickey for core from 10.200.16.10 port 55542 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4
Dec 12 17:43:38.837914 sshd-session[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:43:38.841803 systemd-logind[1864]: New session 16 of user core.
Dec 12 17:43:38.847305 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 12 17:43:39.227595 sshd[5810]: Connection closed by 10.200.16.10 port 55542
Dec 12 17:43:39.228583 sshd-session[5807]: pam_unix(sshd:session): session closed for user core
Dec 12 17:43:39.231825 systemd[1]: sshd@13-10.200.20.14:22-10.200.16.10:55542.service: Deactivated successfully.
Dec 12 17:43:39.233649 systemd[1]: session-16.scope: Deactivated successfully.
Dec 12 17:43:39.234902 systemd-logind[1864]: Session 16 logged out. Waiting for processes to exit.
Dec 12 17:43:39.236773 systemd-logind[1864]: Removed session 16.
Dec 12 17:43:41.856062 containerd[1929]: time="2025-12-12T17:43:41.855795140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 12 17:43:42.129412 containerd[1929]: time="2025-12-12T17:43:42.129284852Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:43:42.132884 containerd[1929]: time="2025-12-12T17:43:42.132848251Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 12 17:43:42.132986 containerd[1929]: time="2025-12-12T17:43:42.132853315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 12 17:43:42.133189 kubelet[3460]: E1212 17:43:42.133063 3460 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 17:43:42.133767 kubelet[3460]: E1212 17:43:42.133528 3460 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 12 17:43:42.133767 kubelet[3460]: E1212 17:43:42.133712 3460 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ltq49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f5b48977b-ckrpk_calico-system(174ab840-5c2d-412d-ae9d-b5c5ec64e11d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:43:42.135623 kubelet[3460]: E1212 17:43:42.135586 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5b48977b-ckrpk" podUID="174ab840-5c2d-412d-ae9d-b5c5ec64e11d"
Dec 12 17:43:42.856680 kubelet[3460]: E1212 17:43:42.856585 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67ffbcfc86-jjn58" podUID="99d456ff-41e9-43a8-8405-1dee55d2f1c2"
Dec 12 17:43:42.857026 kubelet[3460]: E1212 17:43:42.856959 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-8vxtr" podUID="a4012a8d-5664-448e-9dfd-51493bbdec98"
Dec 12 17:43:44.315115 systemd[1]: Started sshd@14-10.200.20.14:22-10.200.16.10:40904.service - OpenSSH per-connection server daemon (10.200.16.10:40904).
Dec 12 17:43:44.808836 sshd[5823]: Accepted publickey for core from 10.200.16.10 port 40904 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4
Dec 12 17:43:44.810685 sshd-session[5823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:43:44.816193 systemd-logind[1864]: New session 17 of user core.
Dec 12 17:43:44.822336 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 12 17:43:44.854261 kubelet[3460]: E1212 17:43:44.854169 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-zk6dm" podUID="ca5ba48c-4a22-4538-aeda-0de712e65e58"
Dec 12 17:43:45.208772 sshd[5833]: Connection closed by 10.200.16.10 port 40904
Dec 12 17:43:45.209357 sshd-session[5823]: pam_unix(sshd:session): session closed for user core
Dec 12 17:43:45.213788 systemd[1]: sshd@14-10.200.20.14:22-10.200.16.10:40904.service: Deactivated successfully.
Dec 12 17:43:45.216724 systemd[1]: session-17.scope: Deactivated successfully.
Dec 12 17:43:45.217621 systemd-logind[1864]: Session 17 logged out. Waiting for processes to exit.
Dec 12 17:43:45.219436 systemd-logind[1864]: Removed session 17.
Dec 12 17:43:45.318357 systemd[1]: Started sshd@15-10.200.20.14:22-10.200.16.10:40906.service - OpenSSH per-connection server daemon (10.200.16.10:40906).
Dec 12 17:43:45.812937 sshd[5845]: Accepted publickey for core from 10.200.16.10 port 40906 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4
Dec 12 17:43:45.814492 sshd-session[5845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:43:45.818281 systemd-logind[1864]: New session 18 of user core.
Dec 12 17:43:45.827312 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 12 17:43:46.321218 sshd[5848]: Connection closed by 10.200.16.10 port 40906
Dec 12 17:43:46.340228 sshd-session[5845]: pam_unix(sshd:session): session closed for user core
Dec 12 17:43:46.343171 systemd[1]: sshd@15-10.200.20.14:22-10.200.16.10:40906.service: Deactivated successfully.
Dec 12 17:43:46.347913 systemd[1]: session-18.scope: Deactivated successfully.
Dec 12 17:43:46.348951 systemd-logind[1864]: Session 18 logged out. Waiting for processes to exit.
Dec 12 17:43:46.350523 systemd-logind[1864]: Removed session 18.
Dec 12 17:43:46.410409 systemd[1]: Started sshd@16-10.200.20.14:22-10.200.16.10:40920.service - OpenSSH per-connection server daemon (10.200.16.10:40920).
Dec 12 17:43:46.915203 sshd[5858]: Accepted publickey for core from 10.200.16.10 port 40920 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4
Dec 12 17:43:46.916303 sshd-session[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:43:46.920012 systemd-logind[1864]: New session 19 of user core.
Dec 12 17:43:46.924329 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 12 17:43:47.915424 sshd[5861]: Connection closed by 10.200.16.10 port 40920
Dec 12 17:43:47.915781 sshd-session[5858]: pam_unix(sshd:session): session closed for user core
Dec 12 17:43:47.921044 systemd[1]: sshd@16-10.200.20.14:22-10.200.16.10:40920.service: Deactivated successfully.
Dec 12 17:43:47.923016 systemd[1]: session-19.scope: Deactivated successfully.
Dec 12 17:43:47.924053 systemd-logind[1864]: Session 19 logged out. Waiting for processes to exit.
Dec 12 17:43:47.927573 systemd-logind[1864]: Removed session 19.
Dec 12 17:43:48.007758 systemd[1]: Started sshd@17-10.200.20.14:22-10.200.16.10:40922.service - OpenSSH per-connection server daemon (10.200.16.10:40922).
Dec 12 17:43:48.517650 sshd[5885]: Accepted publickey for core from 10.200.16.10 port 40922 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4
Dec 12 17:43:48.518754 sshd-session[5885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:43:48.522652 systemd-logind[1864]: New session 20 of user core.
Dec 12 17:43:48.528317 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 12 17:43:48.855004 kubelet[3460]: E1212 17:43:48.854888 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tbq92" podUID="af9257cb-fecf-4ff2-8249-41f13bb32168"
Dec 12 17:43:49.004821 sshd[5888]: Connection closed by 10.200.16.10 port 40922
Dec 12 17:43:49.005160 sshd-session[5885]: pam_unix(sshd:session): session closed for user core
Dec 12 17:43:49.009302 systemd[1]: sshd@17-10.200.20.14:22-10.200.16.10:40922.service: Deactivated successfully.
Dec 12 17:43:49.010780 systemd[1]: session-20.scope: Deactivated successfully.
Dec 12 17:43:49.011430 systemd-logind[1864]: Session 20 logged out. Waiting for processes to exit.
Dec 12 17:43:49.012618 systemd-logind[1864]: Removed session 20.
Dec 12 17:43:49.085897 systemd[1]: Started sshd@18-10.200.20.14:22-10.200.16.10:40926.service - OpenSSH per-connection server daemon (10.200.16.10:40926).
Dec 12 17:43:49.537949 sshd[5898]: Accepted publickey for core from 10.200.16.10 port 40926 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4
Dec 12 17:43:49.539497 sshd-session[5898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:43:49.545687 systemd-logind[1864]: New session 21 of user core.
Dec 12 17:43:49.547601 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 12 17:43:49.904644 sshd[5901]: Connection closed by 10.200.16.10 port 40926
Dec 12 17:43:49.905308 sshd-session[5898]: pam_unix(sshd:session): session closed for user core
Dec 12 17:43:49.908518 systemd[1]: sshd@18-10.200.20.14:22-10.200.16.10:40926.service: Deactivated successfully.
Dec 12 17:43:49.909927 systemd[1]: session-21.scope: Deactivated successfully.
Dec 12 17:43:49.910587 systemd-logind[1864]: Session 21 logged out. Waiting for processes to exit.
Dec 12 17:43:49.912209 systemd-logind[1864]: Removed session 21.
Dec 12 17:43:52.852984 kubelet[3460]: E1212 17:43:52.852942 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cpr2l" podUID="87bb8c15-a7ba-4def-b41d-8b2220421e40"
Dec 12 17:43:53.854515 kubelet[3460]: E1212 17:43:53.854326 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5b48977b-ckrpk" podUID="174ab840-5c2d-412d-ae9d-b5c5ec64e11d"
Dec 12 17:43:54.993990 systemd[1]: Started sshd@19-10.200.20.14:22-10.200.16.10:56228.service - OpenSSH per-connection server daemon (10.200.16.10:56228).
Dec 12 17:43:55.485200 sshd[5915]: Accepted publickey for core from 10.200.16.10 port 56228 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4
Dec 12 17:43:55.486280 sshd-session[5915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:43:55.489824 systemd-logind[1864]: New session 22 of user core.
Dec 12 17:43:55.498293 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 12 17:43:55.887615 sshd[5918]: Connection closed by 10.200.16.10 port 56228
Dec 12 17:43:55.889311 sshd-session[5915]: pam_unix(sshd:session): session closed for user core
Dec 12 17:43:55.893161 systemd-logind[1864]: Session 22 logged out. Waiting for processes to exit.
Dec 12 17:43:55.893907 systemd[1]: sshd@19-10.200.20.14:22-10.200.16.10:56228.service: Deactivated successfully.
Dec 12 17:43:55.895739 systemd[1]: session-22.scope: Deactivated successfully.
Dec 12 17:43:55.898200 systemd-logind[1864]: Removed session 22.
Dec 12 17:43:56.853135 kubelet[3460]: E1212 17:43:56.853081 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-8vxtr" podUID="a4012a8d-5664-448e-9dfd-51493bbdec98"
Dec 12 17:43:56.853135 kubelet[3460]: E1212 17:43:56.853081 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-zk6dm" podUID="ca5ba48c-4a22-4538-aeda-0de712e65e58"
Dec 12 17:43:56.855493 kubelet[3460]: E1212 17:43:56.854355 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67ffbcfc86-jjn58" podUID="99d456ff-41e9-43a8-8405-1dee55d2f1c2"
Dec 12 17:44:00.978566 systemd[1]: Started sshd@20-10.200.20.14:22-10.200.16.10:57262.service - OpenSSH per-connection server daemon (10.200.16.10:57262).
Dec 12 17:44:01.470407 sshd[5930]: Accepted publickey for core from 10.200.16.10 port 57262 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4
Dec 12 17:44:01.471547 sshd-session[5930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:44:01.477166 systemd-logind[1864]: New session 23 of user core.
Dec 12 17:44:01.480302 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 12 17:44:01.868346 sshd[5933]: Connection closed by 10.200.16.10 port 57262
Dec 12 17:44:01.869415 sshd-session[5930]: pam_unix(sshd:session): session closed for user core
Dec 12 17:44:01.872157 systemd-logind[1864]: Session 23 logged out. Waiting for processes to exit.
Dec 12 17:44:01.873469 systemd[1]: sshd@20-10.200.20.14:22-10.200.16.10:57262.service: Deactivated successfully.
Dec 12 17:44:01.875947 systemd[1]: session-23.scope: Deactivated successfully.
Dec 12 17:44:01.879618 systemd-logind[1864]: Removed session 23.
Dec 12 17:44:03.856649 kubelet[3460]: E1212 17:44:03.856603 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tbq92" podUID="af9257cb-fecf-4ff2-8249-41f13bb32168"
Dec 12 17:44:04.852801 kubelet[3460]: E1212 17:44:04.852752 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cpr2l" podUID="87bb8c15-a7ba-4def-b41d-8b2220421e40"
Dec 12 17:44:06.961144 systemd[1]: Started sshd@21-10.200.20.14:22-10.200.16.10:57270.service - OpenSSH per-connection server daemon (10.200.16.10:57270).
Dec 12 17:44:07.448237 sshd[5970]: Accepted publickey for core from 10.200.16.10 port 57270 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4
Dec 12 17:44:07.449304 sshd-session[5970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:44:07.452905 systemd-logind[1864]: New session 24 of user core.
Dec 12 17:44:07.458314 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 12 17:44:07.844746 sshd[5973]: Connection closed by 10.200.16.10 port 57270
Dec 12 17:44:07.845314 sshd-session[5970]: pam_unix(sshd:session): session closed for user core
Dec 12 17:44:07.848863 systemd[1]: sshd@21-10.200.20.14:22-10.200.16.10:57270.service: Deactivated successfully.
Dec 12 17:44:07.850709 systemd[1]: session-24.scope: Deactivated successfully.
Dec 12 17:44:07.851426 systemd-logind[1864]: Session 24 logged out. Waiting for processes to exit.
Dec 12 17:44:07.852677 systemd-logind[1864]: Removed session 24.
Dec 12 17:44:07.856806 kubelet[3460]: E1212 17:44:07.856733 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67ffbcfc86-jjn58" podUID="99d456ff-41e9-43a8-8405-1dee55d2f1c2"
Dec 12 17:44:08.853888 kubelet[3460]: E1212 17:44:08.853359 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-8vxtr" podUID="a4012a8d-5664-448e-9dfd-51493bbdec98"
Dec 12 17:44:08.853888 kubelet[3460]: E1212 17:44:08.853384 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5b48977b-ckrpk" podUID="174ab840-5c2d-412d-ae9d-b5c5ec64e11d"
Dec 12 17:44:09.854262 kubelet[3460]: E1212 17:44:09.853959 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c586fd54-zk6dm" podUID="ca5ba48c-4a22-4538-aeda-0de712e65e58"
Dec 12 17:44:12.941463 systemd[1]: Started sshd@22-10.200.20.14:22-10.200.16.10:34810.service - OpenSSH per-connection server daemon (10.200.16.10:34810).
Dec 12 17:44:13.426305 sshd[5985]: Accepted publickey for core from 10.200.16.10 port 34810 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4
Dec 12 17:44:13.428834 sshd-session[5985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:44:13.433648 systemd-logind[1864]: New session 25 of user core.
Dec 12 17:44:13.440317 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 12 17:44:13.828384 sshd[5988]: Connection closed by 10.200.16.10 port 34810
Dec 12 17:44:13.828291 sshd-session[5985]: pam_unix(sshd:session): session closed for user core
Dec 12 17:44:13.832167 systemd[1]: sshd@22-10.200.20.14:22-10.200.16.10:34810.service: Deactivated successfully.
Dec 12 17:44:13.832424 systemd-logind[1864]: Session 25 logged out. Waiting for processes to exit.
Dec 12 17:44:13.836382 systemd[1]: session-25.scope: Deactivated successfully.
Dec 12 17:44:13.840859 systemd-logind[1864]: Removed session 25.
Dec 12 17:44:14.855186 kubelet[3460]: E1212 17:44:14.855128 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tbq92" podUID="af9257cb-fecf-4ff2-8249-41f13bb32168"
Dec 12 17:44:16.853930 kubelet[3460]: E1212 17:44:16.853696 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-cpr2l" podUID="87bb8c15-a7ba-4def-b41d-8b2220421e40"
Dec 12 17:44:18.918063 systemd[1]: Started sshd@23-10.200.20.14:22-10.200.16.10:34822.service - OpenSSH per-connection server daemon (10.200.16.10:34822).
Dec 12 17:44:19.406221 sshd[6000]: Accepted publickey for core from 10.200.16.10 port 34822 ssh2: RSA SHA256:rv0ogpS37Fn9XgD1tbLDwSSen2nZukTXJG3iueJVyC4
Dec 12 17:44:19.407281 sshd-session[6000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:44:19.410887 systemd-logind[1864]: New session 26 of user core.
Dec 12 17:44:19.417294 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 12 17:44:19.837205 sshd[6003]: Connection closed by 10.200.16.10 port 34822
Dec 12 17:44:19.837744 sshd-session[6000]: pam_unix(sshd:session): session closed for user core
Dec 12 17:44:19.841596 systemd-logind[1864]: Session 26 logged out. Waiting for processes to exit.
Dec 12 17:44:19.841770 systemd[1]: sshd@23-10.200.20.14:22-10.200.16.10:34822.service: Deactivated successfully.
Dec 12 17:44:19.844591 systemd[1]: session-26.scope: Deactivated successfully.
Dec 12 17:44:19.847049 systemd-logind[1864]: Removed session 26.
Dec 12 17:44:19.854400 kubelet[3460]: E1212 17:44:19.854364 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f5b48977b-ckrpk" podUID="174ab840-5c2d-412d-ae9d-b5c5ec64e11d"
Dec 12 17:44:20.854641 kubelet[3460]: E1212 17:44:20.854576 3460 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67ffbcfc86-jjn58" podUID="99d456ff-41e9-43a8-8405-1dee55d2f1c2"