Dec 16 12:28:30.104931 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Dec 16 12:28:30.104949 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 16 12:28:30.104955 kernel: KASLR enabled
Dec 16 12:28:30.104959 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Dec 16 12:28:30.104963 kernel: printk: legacy bootconsole [pl11] enabled
Dec 16 12:28:30.104968 kernel: efi: EFI v2.7 by EDK II
Dec 16 12:28:30.104973 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db7d598
Dec 16 12:28:30.104977 kernel: random: crng init done
Dec 16 12:28:30.104981 kernel: secureboot: Secure boot disabled
Dec 16 12:28:30.104985 kernel: ACPI: Early table checksum verification disabled
Dec 16 12:28:30.104989 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Dec 16 12:28:30.104993 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:30.104996 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:30.105001 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Dec 16 12:28:30.105006 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:30.105011 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:30.105015 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:30.105019 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:30.105023 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:30.105028 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:30.105032 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Dec 16 12:28:30.105036 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 16 12:28:30.105041 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Dec 16 12:28:30.105045 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 16 12:28:30.105049 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Dec 16 12:28:30.105053 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Dec 16 12:28:30.105058 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Dec 16 12:28:30.105062 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Dec 16 12:28:30.105066 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Dec 16 12:28:30.105070 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Dec 16 12:28:30.105075 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Dec 16 12:28:30.105079 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Dec 16 12:28:30.105084 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Dec 16 12:28:30.105088 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Dec 16 12:28:30.105092 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Dec 16 12:28:30.105096 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Dec 16 12:28:30.105100 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Dec 16 12:28:30.105104 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Dec 16 12:28:30.105108 kernel: Zone ranges:
Dec 16 12:28:30.105113 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Dec 16 12:28:30.105119 kernel: DMA32 empty
Dec 16 12:28:30.105124 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Dec 16 12:28:30.105128 kernel: Device empty
Dec 16 12:28:30.105132 kernel: Movable zone start for each node
Dec 16 12:28:30.105137 kernel: Early memory node ranges
Dec 16 12:28:30.105141 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Dec 16 12:28:30.105146 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Dec 16 12:28:30.105151 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Dec 16 12:28:30.105155 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Dec 16 12:28:30.105159 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Dec 16 12:28:30.105164 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Dec 16 12:28:30.105168 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Dec 16 12:28:30.105173 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Dec 16 12:28:30.105177 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Dec 16 12:28:30.105181 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Dec 16 12:28:30.105186 kernel: psci: probing for conduit method from ACPI.
Dec 16 12:28:30.105190 kernel: psci: PSCIv1.3 detected in firmware.
Dec 16 12:28:30.105195 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 16 12:28:30.105200 kernel: psci: MIGRATE_INFO_TYPE not supported.
Dec 16 12:28:30.105204 kernel: psci: SMC Calling Convention v1.4
Dec 16 12:28:30.105208 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Dec 16 12:28:30.105213 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Dec 16 12:28:30.105217 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 16 12:28:30.105221 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 16 12:28:30.105226 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 16 12:28:30.105230 kernel: Detected PIPT I-cache on CPU0
Dec 16 12:28:30.105235 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Dec 16 12:28:30.105239 kernel: CPU features: detected: GIC system register CPU interface
Dec 16 12:28:30.105244 kernel: CPU features: detected: Spectre-v4
Dec 16 12:28:30.105248 kernel: CPU features: detected: Spectre-BHB
Dec 16 12:28:30.105253 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 16 12:28:30.105258 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 16 12:28:30.105262 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Dec 16 12:28:30.105266 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 16 12:28:30.105271 kernel: alternatives: applying boot alternatives
Dec 16 12:28:30.105276 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:28:30.105281 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 16 12:28:30.105285 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 16 12:28:30.105290 kernel: Fallback order for Node 0: 0
Dec 16 12:28:30.105294 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Dec 16 12:28:30.105299 kernel: Policy zone: Normal
Dec 16 12:28:30.105304 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 16 12:28:30.105308 kernel: software IO TLB: area num 2.
Dec 16 12:28:30.105313 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB)
Dec 16 12:28:30.105317 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 16 12:28:30.105322 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 16 12:28:30.105327 kernel: rcu: RCU event tracing is enabled.
Dec 16 12:28:30.105331 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 16 12:28:30.105336 kernel: Trampoline variant of Tasks RCU enabled.
Dec 16 12:28:30.105340 kernel: Tracing variant of Tasks RCU enabled.
Dec 16 12:28:30.105345 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 16 12:28:30.105349 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 16 12:28:30.105355 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:28:30.105359 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 16 12:28:30.105364 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 16 12:28:30.105368 kernel: GICv3: 960 SPIs implemented
Dec 16 12:28:30.105373 kernel: GICv3: 0 Extended SPIs implemented
Dec 16 12:28:30.105377 kernel: Root IRQ handler: gic_handle_irq
Dec 16 12:28:30.105381 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Dec 16 12:28:30.105386 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Dec 16 12:28:30.105390 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Dec 16 12:28:30.105395 kernel: ITS: No ITS available, not enabling LPIs
Dec 16 12:28:30.105400 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 16 12:28:30.105405 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Dec 16 12:28:30.105410 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 16 12:28:30.105414 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Dec 16 12:28:30.105419 kernel: Console: colour dummy device 80x25
Dec 16 12:28:30.105423 kernel: printk: legacy console [tty1] enabled
Dec 16 12:28:30.105428 kernel: ACPI: Core revision 20240827
Dec 16 12:28:30.105433 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Dec 16 12:28:30.105437 kernel: pid_max: default: 32768 minimum: 301
Dec 16 12:28:30.105442 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 16 12:28:30.105446 kernel: landlock: Up and running.
Dec 16 12:28:30.105452 kernel: SELinux: Initializing.
Dec 16 12:28:30.105456 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:28:30.105461 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 16 12:28:30.105466 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Dec 16 12:28:30.105470 kernel: Hyper-V: Host Build 10.0.26102.1172-1-0
Dec 16 12:28:30.105478 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Dec 16 12:28:30.105483 kernel: rcu: Hierarchical SRCU implementation.
Dec 16 12:28:30.105488 kernel: rcu: Max phase no-delay instances is 400.
Dec 16 12:28:30.105493 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 16 12:28:30.105498 kernel: Remapping and enabling EFI services.
Dec 16 12:28:30.105502 kernel: smp: Bringing up secondary CPUs ...
Dec 16 12:28:30.105507 kernel: Detected PIPT I-cache on CPU1
Dec 16 12:28:30.105513 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Dec 16 12:28:30.105518 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Dec 16 12:28:30.105522 kernel: smp: Brought up 1 node, 2 CPUs
Dec 16 12:28:30.105527 kernel: SMP: Total of 2 processors activated.
Dec 16 12:28:30.105532 kernel: CPU: All CPU(s) started at EL1
Dec 16 12:28:30.105538 kernel: CPU features: detected: 32-bit EL0 Support
Dec 16 12:28:30.105543 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Dec 16 12:28:30.105548 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 16 12:28:30.105552 kernel: CPU features: detected: Common not Private translations
Dec 16 12:28:30.105557 kernel: CPU features: detected: CRC32 instructions
Dec 16 12:28:30.105562 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Dec 16 12:28:30.105567 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 16 12:28:30.105572 kernel: CPU features: detected: LSE atomic instructions
Dec 16 12:28:30.105576 kernel: CPU features: detected: Privileged Access Never
Dec 16 12:28:30.105582 kernel: CPU features: detected: Speculation barrier (SB)
Dec 16 12:28:30.105587 kernel: CPU features: detected: TLB range maintenance instructions
Dec 16 12:28:30.105591 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 16 12:28:30.105596 kernel: CPU features: detected: Scalable Vector Extension
Dec 16 12:28:30.105601 kernel: alternatives: applying system-wide alternatives
Dec 16 12:28:30.105606 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Dec 16 12:28:30.105611 kernel: SVE: maximum available vector length 16 bytes per vector
Dec 16 12:28:30.105615 kernel: SVE: default vector length 16 bytes per vector
Dec 16 12:28:30.105620 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved)
Dec 16 12:28:30.105626 kernel: devtmpfs: initialized
Dec 16 12:28:30.105631 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 16 12:28:30.105636 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 16 12:28:30.105641 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 16 12:28:30.105645 kernel: 0 pages in range for non-PLT usage
Dec 16 12:28:30.105650 kernel: 508400 pages in range for PLT usage
Dec 16 12:28:30.105655 kernel: pinctrl core: initialized pinctrl subsystem
Dec 16 12:28:30.105660 kernel: SMBIOS 3.1.0 present.
Dec 16 12:28:30.105665 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Dec 16 12:28:30.105670 kernel: DMI: Memory slots populated: 2/2
Dec 16 12:28:30.105675 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 16 12:28:30.105680 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 16 12:28:30.105684 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 16 12:28:30.105689 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 16 12:28:30.105694 kernel: audit: initializing netlink subsys (disabled)
Dec 16 12:28:30.105699 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Dec 16 12:28:30.105703 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 16 12:28:30.105709 kernel: cpuidle: using governor menu
Dec 16 12:28:30.105714 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 16 12:28:30.105719 kernel: ASID allocator initialised with 32768 entries
Dec 16 12:28:30.105723 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 16 12:28:30.105728 kernel: Serial: AMBA PL011 UART driver
Dec 16 12:28:30.105733 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 16 12:28:30.105738 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 16 12:28:30.105742 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 16 12:28:30.105747 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 16 12:28:30.105753 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 16 12:28:30.105758 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 16 12:28:30.105763 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 16 12:28:30.105767 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 16 12:28:30.105772 kernel: ACPI: Added _OSI(Module Device)
Dec 16 12:28:30.105777 kernel: ACPI: Added _OSI(Processor Device)
Dec 16 12:28:30.105781 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 16 12:28:30.105786 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 16 12:28:30.105791 kernel: ACPI: Interpreter enabled
Dec 16 12:28:30.105796 kernel: ACPI: Using GIC for interrupt routing
Dec 16 12:28:30.105801 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Dec 16 12:28:30.105806 kernel: printk: legacy console [ttyAMA0] enabled
Dec 16 12:28:30.105811 kernel: printk: legacy bootconsole [pl11] disabled
Dec 16 12:28:30.105816 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Dec 16 12:28:30.105820 kernel: ACPI: CPU0 has been hot-added
Dec 16 12:28:30.105825 kernel: ACPI: CPU1 has been hot-added
Dec 16 12:28:30.105830 kernel: iommu: Default domain type: Translated
Dec 16 12:28:30.105852 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 16 12:28:30.105857 kernel: efivars: Registered efivars operations
Dec 16 12:28:30.105862 kernel: vgaarb: loaded
Dec 16 12:28:30.105867 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 16 12:28:30.105872 kernel: VFS: Disk quotas dquot_6.6.0
Dec 16 12:28:30.105876 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 16 12:28:30.105881 kernel: pnp: PnP ACPI init
Dec 16 12:28:30.105886 kernel: pnp: PnP ACPI: found 0 devices
Dec 16 12:28:30.105891 kernel: NET: Registered PF_INET protocol family
Dec 16 12:28:30.105895 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 16 12:28:30.105900 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 16 12:28:30.105906 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 16 12:28:30.105911 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 16 12:28:30.105916 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 16 12:28:30.105920 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 16 12:28:30.105925 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:28:30.105930 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 16 12:28:30.105935 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 16 12:28:30.105939 kernel: PCI: CLS 0 bytes, default 64
Dec 16 12:28:30.105944 kernel: kvm [1]: HYP mode not available
Dec 16 12:28:30.105950 kernel: Initialise system trusted keyrings
Dec 16 12:28:30.105954 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 16 12:28:30.105959 kernel: Key type asymmetric registered
Dec 16 12:28:30.105964 kernel: Asymmetric key parser 'x509' registered
Dec 16 12:28:30.105969 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 16 12:28:30.105973 kernel: io scheduler mq-deadline registered
Dec 16 12:28:30.105978 kernel: io scheduler kyber registered
Dec 16 12:28:30.105983 kernel: io scheduler bfq registered
Dec 16 12:28:30.105988 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 16 12:28:30.105993 kernel: thunder_xcv, ver 1.0
Dec 16 12:28:30.105998 kernel: thunder_bgx, ver 1.0
Dec 16 12:28:30.106002 kernel: nicpf, ver 1.0
Dec 16 12:28:30.106007 kernel: nicvf, ver 1.0
Dec 16 12:28:30.106109 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 16 12:28:30.106159 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-16T12:28:29 UTC (1765888109)
Dec 16 12:28:30.106165 kernel: efifb: probing for efifb
Dec 16 12:28:30.106171 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 16 12:28:30.106176 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 16 12:28:30.106181 kernel: efifb: scrolling: redraw
Dec 16 12:28:30.106186 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 16 12:28:30.106190 kernel: Console: switching to colour frame buffer device 128x48
Dec 16 12:28:30.106195 kernel: fb0: EFI VGA frame buffer device
Dec 16 12:28:30.106200 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Dec 16 12:28:30.106205 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 16 12:28:30.106209 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 16 12:28:30.106215 kernel: NET: Registered PF_INET6 protocol family
Dec 16 12:28:30.106220 kernel: watchdog: NMI not fully supported
Dec 16 12:28:30.106224 kernel: watchdog: Hard watchdog permanently disabled
Dec 16 12:28:30.106229 kernel: Segment Routing with IPv6
Dec 16 12:28:30.106234 kernel: In-situ OAM (IOAM) with IPv6
Dec 16 12:28:30.106239 kernel: NET: Registered PF_PACKET protocol family
Dec 16 12:28:30.106243 kernel: Key type dns_resolver registered
Dec 16 12:28:30.106248 kernel: registered taskstats version 1
Dec 16 12:28:30.106253 kernel: Loading compiled-in X.509 certificates
Dec 16 12:28:30.106258 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 16 12:28:30.106263 kernel: Demotion targets for Node 0: null
Dec 16 12:28:30.106268 kernel: Key type .fscrypt registered
Dec 16 12:28:30.106273 kernel: Key type fscrypt-provisioning registered
Dec 16 12:28:30.106278 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 16 12:28:30.106282 kernel: ima: Allocated hash algorithm: sha1
Dec 16 12:28:30.106287 kernel: ima: No architecture policies found
Dec 16 12:28:30.106292 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 16 12:28:30.106297 kernel: clk: Disabling unused clocks
Dec 16 12:28:30.106301 kernel: PM: genpd: Disabling unused power domains
Dec 16 12:28:30.106307 kernel: Warning: unable to open an initial console.
Dec 16 12:28:30.106312 kernel: Freeing unused kernel memory: 39552K
Dec 16 12:28:30.106317 kernel: Run /init as init process
Dec 16 12:28:30.106321 kernel: with arguments:
Dec 16 12:28:30.106326 kernel: /init
Dec 16 12:28:30.106331 kernel: with environment:
Dec 16 12:28:30.106336 kernel: HOME=/
Dec 16 12:28:30.106340 kernel: TERM=linux
Dec 16 12:28:30.106346 systemd[1]: Successfully made /usr/ read-only.
Dec 16 12:28:30.106354 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 16 12:28:30.106360 systemd[1]: Detected virtualization microsoft.
Dec 16 12:28:30.106365 systemd[1]: Detected architecture arm64.
Dec 16 12:28:30.106370 systemd[1]: Running in initrd.
Dec 16 12:28:30.106375 systemd[1]: No hostname configured, using default hostname.
Dec 16 12:28:30.106380 systemd[1]: Hostname set to .
Dec 16 12:28:30.106386 systemd[1]: Initializing machine ID from random generator.
Dec 16 12:28:30.106391 systemd[1]: Queued start job for default target initrd.target.
Dec 16 12:28:30.106397 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 16 12:28:30.106402 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 16 12:28:30.106408 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 16 12:28:30.106413 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 16 12:28:30.106418 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 16 12:28:30.106424 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 16 12:28:30.106430 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 16 12:28:30.106436 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 16 12:28:30.106441 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 16 12:28:30.106446 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 16 12:28:30.106452 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:28:30.106457 systemd[1]: Reached target slices.target - Slice Units.
Dec 16 12:28:30.106462 systemd[1]: Reached target swap.target - Swaps.
Dec 16 12:28:30.106467 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:28:30.106473 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 16 12:28:30.106478 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 16 12:28:30.106484 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 16 12:28:30.106489 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 16 12:28:30.106494 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 16 12:28:30.106500 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 16 12:28:30.106505 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 16 12:28:30.106510 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:28:30.106516 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 16 12:28:30.106521 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 16 12:28:30.106526 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 16 12:28:30.106532 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 16 12:28:30.106537 systemd[1]: Starting systemd-fsck-usr.service...
Dec 16 12:28:30.106542 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 16 12:28:30.106548 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 16 12:28:30.106563 systemd-journald[225]: Collecting audit messages is disabled.
Dec 16 12:28:30.106579 systemd-journald[225]: Journal started
Dec 16 12:28:30.106593 systemd-journald[225]: Runtime Journal (/run/log/journal/544573f007ee43a6830397eef6dafe03) is 8M, max 78.3M, 70.3M free.
Dec 16 12:28:30.108863 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:28:30.113551 systemd-modules-load[227]: Inserted module 'overlay'
Dec 16 12:28:30.134917 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 16 12:28:30.134951 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 16 12:28:30.141216 systemd-modules-load[227]: Inserted module 'br_netfilter'
Dec 16 12:28:30.145246 kernel: Bridge firewalling registered
Dec 16 12:28:30.144934 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 16 12:28:30.149821 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 16 12:28:30.165354 systemd[1]: Finished systemd-fsck-usr.service.
Dec 16 12:28:30.173333 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 16 12:28:30.178113 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:28:30.188696 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 16 12:28:30.209300 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 16 12:28:30.213823 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 16 12:28:30.233982 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 16 12:28:30.241652 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 16 12:28:30.258187 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 16 12:28:30.262997 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 16 12:28:30.274069 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 16 12:28:30.283472 systemd-tmpfiles[253]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 16 12:28:30.285364 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 16 12:28:30.309135 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 16 12:28:30.322362 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:28:30.338143 dracut-cmdline[259]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 16 12:28:30.332285 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 16 12:28:30.386459 systemd-resolved[269]: Positive Trust Anchors:
Dec 16 12:28:30.386472 systemd-resolved[269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:28:30.410081 kernel: SCSI subsystem initialized
Dec 16 12:28:30.386490 systemd-resolved[269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:28:30.439282 kernel: Loading iSCSI transport class v2.0-870.
Dec 16 12:28:30.391320 systemd-resolved[269]: Defaulting to hostname 'linux'.
Dec 16 12:28:30.391952 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:28:30.402472 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:28:30.455851 kernel: iscsi: registered transport (tcp)
Dec 16 12:28:30.467846 kernel: iscsi: registered transport (qla4xxx)
Dec 16 12:28:30.467878 kernel: QLogic iSCSI HBA Driver
Dec 16 12:28:30.480498 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 16 12:28:30.504145 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 16 12:28:30.515476 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 16 12:28:30.556990 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 16 12:28:30.565972 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 16 12:28:30.625852 kernel: raid6: neonx8 gen() 18534 MB/s
Dec 16 12:28:30.643855 kernel: raid6: neonx4 gen() 18565 MB/s
Dec 16 12:28:30.662853 kernel: raid6: neonx2 gen() 17071 MB/s
Dec 16 12:28:30.682844 kernel: raid6: neonx1 gen() 15136 MB/s
Dec 16 12:28:30.701841 kernel: raid6: int64x8 gen() 10546 MB/s
Dec 16 12:28:30.720841 kernel: raid6: int64x4 gen() 10614 MB/s
Dec 16 12:28:30.740841 kernel: raid6: int64x2 gen() 9000 MB/s
Dec 16 12:28:30.762343 kernel: raid6: int64x1 gen() 7007 MB/s
Dec 16 12:28:30.762408 kernel: raid6: using algorithm neonx4 gen() 18565 MB/s
Dec 16 12:28:30.784219 kernel: raid6: .... xor() 15128 MB/s, rmw enabled
Dec 16 12:28:30.784272 kernel: raid6: using neon recovery algorithm
Dec 16 12:28:30.793093 kernel: xor: measuring software checksum speed
Dec 16 12:28:30.793111 kernel: 8regs : 28641 MB/sec
Dec 16 12:28:30.795626 kernel: 32regs : 28804 MB/sec
Dec 16 12:28:30.801123 kernel: arm64_neon : 34873 MB/sec
Dec 16 12:28:30.801131 kernel: xor: using function: arm64_neon (34873 MB/sec)
Dec 16 12:28:30.839850 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 16 12:28:30.845452 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 16 12:28:30.854993 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 16 12:28:30.879313 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Dec 16 12:28:30.883442 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 16 12:28:30.896185 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 16 12:28:30.924282 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation
Dec 16 12:28:30.945067 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 16 12:28:30.950995 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 16 12:28:31.005977 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 16 12:28:31.013056 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 16 12:28:31.082858 kernel: hv_vmbus: Vmbus version:5.3
Dec 16 12:28:31.097627 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 16 12:28:31.097668 kernel: hv_vmbus: registering driver hid_hyperv
Dec 16 12:28:31.097676 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 16 12:28:31.121329 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Dec 16 12:28:31.121371 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Dec 16 12:28:31.121380 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Dec 16 12:28:31.121514 kernel: hv_vmbus: registering driver hv_netvsc
Dec 16 12:28:31.121247 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:28:31.143877 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 16 12:28:31.143895 kernel: PTP clock support registered
Dec 16 12:28:31.121344 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:28:31.186048 kernel: hv_vmbus: registering driver hv_storvsc
Dec 16 12:28:31.186069 kernel: scsi host0: storvsc_host_t
Dec 16 12:28:31.186209 kernel: scsi host1: storvsc_host_t
Dec 16 12:28:31.186277 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Dec 16 12:28:31.186348 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Dec 16 12:28:31.138038 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:28:31.152691 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:28:31.176484 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Dec 16 12:28:31.180902 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 16 12:28:31.181286 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:28:31.237823 kernel: hv_utils: Registering HyperV Utility Driver
Dec 16 12:28:31.237851 kernel: hv_vmbus: registering driver hv_utils
Dec 16 12:28:31.237859 kernel: hv_netvsc 002248ba-9a5b-0022-48ba-9a5b002248ba eth0: VF slot 1 added
Dec 16 12:28:31.238009 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Dec 16 12:28:31.195984 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 16 12:28:31.695662 kernel: hv_utils: Heartbeat IC version 3.0
Dec 16 12:28:31.695682 kernel: hv_utils: Shutdown IC version 3.2
Dec 16 12:28:31.695689 kernel: hv_utils: TimeSync IC version 4.0
Dec 16 12:28:31.695696 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Dec 16 12:28:31.696873 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 16 12:28:31.696950 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Dec 16 12:28:31.697013 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Dec 16 12:28:31.676601 systemd-resolved[269]: Clock change detected. Flushing caches.
Dec 16 12:28:31.712858 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#125 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Dec 16 12:28:31.722173 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#68 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Dec 16 12:28:31.723058 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 16 12:28:31.739161 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 12:28:31.744037 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 16 12:28:31.746292 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 16 12:28:31.754907 kernel: hv_vmbus: registering driver hv_pci Dec 16 12:28:31.754941 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 16 12:28:31.762173 kernel: hv_pci 7bb8c04b-6d0c-46f7-a3ce-fce2dbd59e7a: PCI VMBus probing: Using version 0x10004 Dec 16 12:28:31.762320 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 16 12:28:31.775500 kernel: hv_pci 7bb8c04b-6d0c-46f7-a3ce-fce2dbd59e7a: PCI host bridge to bus 6d0c:00 Dec 16 12:28:31.775634 kernel: pci_bus 6d0c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 16 12:28:31.780593 kernel: pci_bus 6d0c:00: No busn resource found for root bus, will use [bus 00-ff] Dec 16 12:28:31.780711 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#176 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 12:28:31.792340 kernel: pci 6d0c:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Dec 16 12:28:31.800195 kernel: pci 6d0c:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 16 12:28:31.806174 kernel: pci 6d0c:00:02.0: enabling Extended Tags Dec 16 12:28:31.824648 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 12:28:31.824785 kernel: pci 6d0c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6d0c:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Dec 16 12:28:31.836337 kernel: pci_bus 6d0c:00: busn_res: [bus 00-ff] end is updated to 00 Dec 16 12:28:31.836503 kernel: pci 6d0c:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Dec 16 12:28:31.894511 kernel: mlx5_core 6d0c:00:02.0: enabling device (0000 -> 0002) Dec 16 12:28:31.903269 kernel: mlx5_core 6d0c:00:02.0: PTM is not supported by PCIe Dec 16 12:28:31.903375 kernel: mlx5_core 6d0c:00:02.0: firmware 
version: 16.30.5006 Dec 16 12:28:32.103926 kernel: hv_netvsc 002248ba-9a5b-0022-48ba-9a5b002248ba eth0: VF registering: eth1 Dec 16 12:28:32.104122 kernel: mlx5_core 6d0c:00:02.0 eth1: joined to eth0 Dec 16 12:28:32.110427 kernel: mlx5_core 6d0c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 16 12:28:32.121167 kernel: mlx5_core 6d0c:00:02.0 enP27916s1: renamed from eth1 Dec 16 12:28:32.194165 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 16 12:28:32.291358 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 16 12:28:32.304132 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 16 12:28:32.327585 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Dec 16 12:28:32.333698 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 16 12:28:32.348205 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 12:28:32.539247 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 12:28:32.545061 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:28:32.555243 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:28:32.566375 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:28:32.578306 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 12:28:32.605048 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:28:33.388539 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#64 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Dec 16 12:28:33.405102 disk-uuid[644]: The operation has completed successfully. 
Dec 16 12:28:33.410012 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 16 12:28:33.483588 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 12:28:33.483679 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 12:28:33.503500 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 12:28:33.530464 sh[821]: Success Dec 16 12:28:33.566302 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 12:28:33.566347 kernel: device-mapper: uevent: version 1.0.3 Dec 16 12:28:33.572308 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 12:28:33.583377 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 16 12:28:33.859348 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 12:28:33.868803 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 12:28:33.890133 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 16 12:28:33.913810 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (839) Dec 16 12:28:33.913838 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248 Dec 16 12:28:33.918323 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:28:34.149922 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 12:28:34.150011 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 12:28:34.179026 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 12:28:34.183271 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:28:34.191251 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Dec 16 12:28:34.191927 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 12:28:34.215762 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 12:28:34.248179 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (862) Dec 16 12:28:34.260910 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:28:34.260952 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:28:34.286372 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:28:34.286428 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:28:34.295234 kernel: BTRFS info (device sda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:28:34.296533 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 12:28:34.306333 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 12:28:34.340008 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:28:34.351546 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:28:34.386652 systemd-networkd[1008]: lo: Link UP Dec 16 12:28:34.386661 systemd-networkd[1008]: lo: Gained carrier Dec 16 12:28:34.387393 systemd-networkd[1008]: Enumeration completed Dec 16 12:28:34.389790 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:28:34.393119 systemd-networkd[1008]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:28:34.393122 systemd-networkd[1008]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:28:34.398346 systemd[1]: Reached target network.target - Network. 
Dec 16 12:28:34.469175 kernel: mlx5_core 6d0c:00:02.0 enP27916s1: Link up Dec 16 12:28:34.501274 kernel: hv_netvsc 002248ba-9a5b-0022-48ba-9a5b002248ba eth0: Data path switched to VF: enP27916s1 Dec 16 12:28:34.501698 systemd-networkd[1008]: enP27916s1: Link UP Dec 16 12:28:34.501759 systemd-networkd[1008]: eth0: Link UP Dec 16 12:28:34.501848 systemd-networkd[1008]: eth0: Gained carrier Dec 16 12:28:34.501862 systemd-networkd[1008]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:28:34.519344 systemd-networkd[1008]: enP27916s1: Gained carrier Dec 16 12:28:34.535200 systemd-networkd[1008]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:28:35.320390 ignition[965]: Ignition 2.22.0 Dec 16 12:28:35.320404 ignition[965]: Stage: fetch-offline Dec 16 12:28:35.324280 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:28:35.320501 ignition[965]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:35.335505 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 16 12:28:35.320508 ignition[965]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:35.320577 ignition[965]: parsed url from cmdline: "" Dec 16 12:28:35.320580 ignition[965]: no config URL provided Dec 16 12:28:35.320587 ignition[965]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:28:35.320592 ignition[965]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:28:35.320596 ignition[965]: failed to fetch config: resource requires networking Dec 16 12:28:35.320940 ignition[965]: Ignition finished successfully Dec 16 12:28:35.374770 ignition[1022]: Ignition 2.22.0 Dec 16 12:28:35.374775 ignition[1022]: Stage: fetch Dec 16 12:28:35.375009 ignition[1022]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:35.375017 ignition[1022]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:35.375085 ignition[1022]: parsed url from cmdline: "" Dec 16 12:28:35.375087 ignition[1022]: no config URL provided Dec 16 12:28:35.375090 ignition[1022]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 12:28:35.375098 ignition[1022]: no config at "/usr/lib/ignition/user.ign" Dec 16 12:28:35.375113 ignition[1022]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 16 12:28:35.460287 ignition[1022]: GET result: OK Dec 16 12:28:35.460353 ignition[1022]: config has been read from IMDS userdata Dec 16 12:28:35.463300 unknown[1022]: fetched base config from "system" Dec 16 12:28:35.460384 ignition[1022]: parsing config with SHA512: f4bbee302022165fdef62b5e2c45985b6067139cfa11f1b1a54b0d21407339bf89f3d416c80ac8aa8c219fbfbae90d424bb6e35411e4aa391b75dd746f01e16f Dec 16 12:28:35.463305 unknown[1022]: fetched base config from "system" Dec 16 12:28:35.463547 ignition[1022]: fetch: fetch complete Dec 16 12:28:35.463309 unknown[1022]: fetched user config from "azure" Dec 16 12:28:35.463552 ignition[1022]: fetch: fetch passed Dec 16 12:28:35.467511 systemd[1]: 
Finished ignition-fetch.service - Ignition (fetch). Dec 16 12:28:35.463586 ignition[1022]: Ignition finished successfully Dec 16 12:28:35.477278 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 12:28:35.511311 ignition[1028]: Ignition 2.22.0 Dec 16 12:28:35.511324 ignition[1028]: Stage: kargs Dec 16 12:28:35.511474 ignition[1028]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:35.517689 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 12:28:35.511480 ignition[1028]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:35.525883 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 16 12:28:35.511997 ignition[1028]: kargs: kargs passed Dec 16 12:28:35.512031 ignition[1028]: Ignition finished successfully Dec 16 12:28:35.555562 ignition[1034]: Ignition 2.22.0 Dec 16 12:28:35.555578 ignition[1034]: Stage: disks Dec 16 12:28:35.559627 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 12:28:35.555730 ignition[1034]: no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:35.565978 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 12:28:35.555737 ignition[1034]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:35.574380 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 12:28:35.556214 ignition[1034]: disks: disks passed Dec 16 12:28:35.583098 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:28:35.556249 ignition[1034]: Ignition finished successfully Dec 16 12:28:35.591833 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 12:28:35.600740 systemd[1]: Reached target basic.target - Basic System. Dec 16 12:28:35.609799 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Dec 16 12:28:35.689449 systemd-fsck[1042]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Dec 16 12:28:35.699133 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 12:28:35.706909 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 12:28:35.919170 kernel: EXT4-fs (sda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none. Dec 16 12:28:35.919929 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 12:28:35.923701 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 12:28:35.947393 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:28:35.964655 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 12:28:35.973126 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 16 12:28:35.984496 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 12:28:35.984522 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:28:35.990435 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 12:28:36.009292 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 12:28:36.029467 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1056) Dec 16 12:28:36.040690 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:28:36.040730 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:28:36.051555 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:28:36.051594 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:28:36.054578 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 12:28:36.263421 systemd-networkd[1008]: eth0: Gained IPv6LL Dec 16 12:28:36.428942 coreos-metadata[1058]: Dec 16 12:28:36.428 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 16 12:28:36.437446 coreos-metadata[1058]: Dec 16 12:28:36.437 INFO Fetch successful Dec 16 12:28:36.442063 coreos-metadata[1058]: Dec 16 12:28:36.441 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 16 12:28:36.451832 coreos-metadata[1058]: Dec 16 12:28:36.451 INFO Fetch successful Dec 16 12:28:36.466100 coreos-metadata[1058]: Dec 16 12:28:36.466 INFO wrote hostname ci-4459.2.2-a-e780e4b687 to /sysroot/etc/hostname Dec 16 12:28:36.473169 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 12:28:36.662313 initrd-setup-root[1086]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 12:28:36.697884 initrd-setup-root[1093]: cut: /sysroot/etc/group: No such file or directory Dec 16 12:28:36.723795 initrd-setup-root[1100]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 12:28:36.730913 initrd-setup-root[1107]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 12:28:37.681491 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 12:28:37.688381 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 12:28:37.705747 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 12:28:37.722495 kernel: BTRFS info (device sda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:28:37.717624 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Dec 16 12:28:37.743961 ignition[1176]: INFO : Ignition 2.22.0 Dec 16 12:28:37.748351 ignition[1176]: INFO : Stage: mount Dec 16 12:28:37.748351 ignition[1176]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:37.748351 ignition[1176]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:37.748351 ignition[1176]: INFO : mount: mount passed Dec 16 12:28:37.748351 ignition[1176]: INFO : Ignition finished successfully Dec 16 12:28:37.754421 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 12:28:37.761267 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 12:28:37.771407 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 12:28:37.797318 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 12:28:37.827174 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1187) Dec 16 12:28:37.838384 kernel: BTRFS info (device sda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 16 12:28:37.838420 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 16 12:28:37.848782 kernel: BTRFS info (device sda6): turning on async discard Dec 16 12:28:37.848820 kernel: BTRFS info (device sda6): enabling free space tree Dec 16 12:28:37.850219 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 16 12:28:37.880294 ignition[1204]: INFO : Ignition 2.22.0 Dec 16 12:28:37.883763 ignition[1204]: INFO : Stage: files Dec 16 12:28:37.883763 ignition[1204]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:37.883763 ignition[1204]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:37.883763 ignition[1204]: DEBUG : files: compiled without relabeling support, skipping Dec 16 12:28:37.901719 ignition[1204]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 12:28:37.901719 ignition[1204]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 12:28:37.940383 ignition[1204]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 12:28:37.945759 ignition[1204]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 12:28:37.945759 ignition[1204]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 12:28:37.940730 unknown[1204]: wrote ssh authorized keys file for user: core Dec 16 12:28:38.012351 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:28:38.020729 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 16 12:28:38.066763 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 12:28:38.175116 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 16 12:28:38.175116 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 12:28:38.175116 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Dec 16 12:28:38.175116 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:28:38.175116 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 12:28:38.175116 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:28:38.175116 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 12:28:38.175116 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:28:38.175116 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 12:28:38.246711 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:28:38.246711 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 12:28:38.246711 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:28:38.246711 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:28:38.246711 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:28:38.246711 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Dec 16 12:28:38.732045 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 12:28:38.921160 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 16 12:28:38.921160 ignition[1204]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 12:28:38.961044 ignition[1204]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:28:38.974216 ignition[1204]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 12:28:38.974216 ignition[1204]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 12:28:38.974216 ignition[1204]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 12:28:38.974216 ignition[1204]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 12:28:39.009396 ignition[1204]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:28:39.009396 ignition[1204]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 12:28:39.009396 ignition[1204]: INFO : files: files passed Dec 16 12:28:39.009396 ignition[1204]: INFO : Ignition finished successfully Dec 16 12:28:38.985352 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 12:28:38.993452 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 12:28:39.028014 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Dec 16 12:28:39.039959 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 12:28:39.040211 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 16 12:28:39.065709 initrd-setup-root-after-ignition[1234]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:28:39.065709 initrd-setup-root-after-ignition[1234]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:28:39.083554 initrd-setup-root-after-ignition[1238]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 12:28:39.068498 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:28:39.077091 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 12:28:39.088862 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 12:28:39.147536 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 12:28:39.147646 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 12:28:39.157225 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 12:28:39.166219 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 12:28:39.174266 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 12:28:39.174952 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 12:28:39.207205 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:28:39.213684 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 12:28:39.238202 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 12:28:39.242976 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Dec 16 12:28:39.252203 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 12:28:39.260566 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 12:28:39.260659 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 12:28:39.272874 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 12:28:39.277684 systemd[1]: Stopped target basic.target - Basic System. Dec 16 12:28:39.285629 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 12:28:39.294042 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 12:28:39.302078 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 12:28:39.311351 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 12:28:39.320055 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 12:28:39.328486 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 12:28:39.337343 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 12:28:39.345741 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 12:28:39.354756 systemd[1]: Stopped target swap.target - Swaps. Dec 16 12:28:39.361868 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 12:28:39.361976 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 12:28:39.372746 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:28:39.377303 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:28:39.386162 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 16 12:28:39.390079 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:28:39.395403 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Dec 16 12:28:39.395498 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 12:28:39.408829 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 12:28:39.408910 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 12:28:39.414025 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 12:28:39.414093 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 12:28:39.421783 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 16 12:28:39.493236 ignition[1258]: INFO : Ignition 2.22.0 Dec 16 12:28:39.493236 ignition[1258]: INFO : Stage: umount Dec 16 12:28:39.493236 ignition[1258]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 12:28:39.493236 ignition[1258]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 16 12:28:39.493236 ignition[1258]: INFO : umount: umount passed Dec 16 12:28:39.493236 ignition[1258]: INFO : Ignition finished successfully Dec 16 12:28:39.421847 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 16 12:28:39.433357 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 12:28:39.447102 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 12:28:39.447226 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:28:39.466987 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 12:28:39.477131 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 12:28:39.477262 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:28:39.488116 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 12:28:39.488212 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 12:28:39.501887 systemd[1]: ignition-mount.service: Deactivated successfully. 
Dec 16 12:28:39.501976 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 12:28:39.509228 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 12:28:39.509312 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 12:28:39.517923 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 12:28:39.517966 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 12:28:39.522175 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 12:28:39.522206 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 12:28:39.531324 systemd[1]: Stopped target network.target - Network. Dec 16 12:28:39.538365 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 12:28:39.538429 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 12:28:39.543584 systemd[1]: Stopped target paths.target - Path Units. Dec 16 12:28:39.556355 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 12:28:39.560318 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:28:39.565613 systemd[1]: Stopped target slices.target - Slice Units. Dec 16 12:28:39.572972 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 12:28:39.581589 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 12:28:39.581637 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 12:28:39.589503 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 12:28:39.589530 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 12:28:39.597409 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 12:28:39.597457 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 12:28:39.606293 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Dec 16 12:28:39.606319 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 12:28:39.616119 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 12:28:39.624375 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 12:28:39.632881 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 12:28:39.633368 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 12:28:39.633442 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 12:28:39.642926 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 12:28:39.642996 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 12:28:39.660056 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 16 12:28:39.849349 kernel: hv_netvsc 002248ba-9a5b-0022-48ba-9a5b002248ba eth0: Data path switched from VF: enP27916s1 Dec 16 12:28:39.660251 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 12:28:39.660347 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 12:28:39.671913 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 16 12:28:39.674143 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 12:28:39.679995 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 12:28:39.680031 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:28:39.688710 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 12:28:39.702832 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 12:28:39.702897 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 12:28:39.710948 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Dec 16 12:28:39.710988 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 12:28:39.725986 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 12:28:39.726025 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 12:28:39.730451 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 12:28:39.730485 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:28:39.742233 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:28:39.750638 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 12:28:39.750690 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 16 12:28:39.776858 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 12:28:39.777910 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:28:39.785287 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 12:28:39.785321 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 12:28:39.793341 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 12:28:39.793367 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:28:39.801834 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 12:28:39.801888 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 12:28:39.814310 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 12:28:39.814358 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 12:28:39.834127 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 12:28:39.834188 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 16 12:28:39.854329 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 12:28:39.870363 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 12:28:39.870431 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:28:39.890909 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 12:28:39.890956 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:28:39.900842 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:28:39.900885 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:28:39.911687 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 16 12:28:39.911731 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 16 12:28:39.911767 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 12:28:39.912083 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 12:28:39.912214 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 12:28:40.110347 systemd-journald[225]: Received SIGTERM from PID 1 (systemd). Dec 16 12:28:39.921593 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 12:28:39.923177 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 12:28:39.930559 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 12:28:39.930653 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 12:28:39.954318 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 12:28:39.956190 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 12:28:39.963841 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Dec 16 12:28:39.974379 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 12:28:40.010439 systemd[1]: Switching root. Dec 16 12:28:40.145908 systemd-journald[225]: Journal stopped Dec 16 12:28:43.966142 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 12:28:43.966175 kernel: SELinux: policy capability open_perms=1 Dec 16 12:28:43.966182 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 12:28:43.966188 kernel: SELinux: policy capability always_check_network=0 Dec 16 12:28:43.966193 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 12:28:43.966199 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 12:28:43.966205 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 12:28:43.966211 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 12:28:43.966216 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 12:28:43.966221 kernel: audit: type=1403 audit(1765888121.088:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 12:28:43.966228 systemd[1]: Successfully loaded SELinux policy in 147.080ms. Dec 16 12:28:43.966236 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.402ms. Dec 16 12:28:43.966242 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 12:28:43.966249 systemd[1]: Detected virtualization microsoft. Dec 16 12:28:43.966256 systemd[1]: Detected architecture arm64. Dec 16 12:28:43.966262 systemd[1]: Detected first boot. Dec 16 12:28:43.966269 systemd[1]: Hostname set to . Dec 16 12:28:43.966275 systemd[1]: Initializing machine ID from random generator. Dec 16 12:28:43.966281 zram_generator::config[1304]: No configuration found. 
Dec 16 12:28:43.966287 kernel: NET: Registered PF_VSOCK protocol family Dec 16 12:28:43.966293 systemd[1]: Populated /etc with preset unit settings. Dec 16 12:28:43.966299 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 16 12:28:43.966305 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 12:28:43.966311 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 12:28:43.966317 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 12:28:43.966323 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 12:28:43.966330 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 12:28:43.966336 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 12:28:43.966342 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 12:28:43.966348 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 12:28:43.966355 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 12:28:43.966361 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 12:28:43.966367 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 12:28:43.966373 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 12:28:43.966379 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 12:28:43.966385 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 12:28:43.966392 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 12:28:43.966398 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Dec 16 12:28:43.966405 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 12:28:43.966411 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 16 12:28:43.966419 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 12:28:43.966425 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 12:28:43.966431 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 12:28:43.966438 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 12:28:43.966444 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 12:28:43.966450 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 12:28:43.966457 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 12:28:43.966463 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 12:28:43.966469 systemd[1]: Reached target slices.target - Slice Units. Dec 16 12:28:43.966476 systemd[1]: Reached target swap.target - Swaps. Dec 16 12:28:43.966482 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 12:28:43.966488 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 12:28:43.966495 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 12:28:43.966501 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 12:28:43.966508 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 12:28:43.966514 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 12:28:43.966520 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 12:28:43.966527 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Dec 16 12:28:43.966533 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 12:28:43.966539 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 12:28:43.966546 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 12:28:43.966552 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 12:28:43.966558 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 12:28:43.966564 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 12:28:43.966571 systemd[1]: Reached target machines.target - Containers. Dec 16 12:28:43.966577 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 16 12:28:43.966583 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:28:43.966590 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 12:28:43.966596 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 12:28:43.966603 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:28:43.966609 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 12:28:43.966615 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:28:43.966621 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 12:28:43.966627 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:28:43.966634 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 12:28:43.966640 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Dec 16 12:28:43.966647 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 12:28:43.966654 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 12:28:43.966660 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 12:28:43.966667 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 12:28:43.966673 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 12:28:43.966679 kernel: fuse: init (API version 7.41) Dec 16 12:28:43.966684 kernel: loop: module loaded Dec 16 12:28:43.966690 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 12:28:43.966697 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 12:28:43.966703 kernel: ACPI: bus type drm_connector registered Dec 16 12:28:43.966709 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 12:28:43.966728 systemd-journald[1401]: Collecting audit messages is disabled. Dec 16 12:28:43.966743 systemd-journald[1401]: Journal started Dec 16 12:28:43.966758 systemd-journald[1401]: Runtime Journal (/run/log/journal/a0743158bf4f4c438b5249e509dbcab6) is 8M, max 78.3M, 70.3M free. Dec 16 12:28:43.251241 systemd[1]: Queued start job for default target multi-user.target. Dec 16 12:28:43.255610 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 16 12:28:43.255993 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 12:28:43.256261 systemd[1]: systemd-journald.service: Consumed 2.478s CPU time. Dec 16 12:28:43.990988 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Dec 16 12:28:44.002856 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 12:28:44.010029 systemd[1]: verity-setup.service: Deactivated successfully. Dec 16 12:28:44.010072 systemd[1]: Stopped verity-setup.service. Dec 16 12:28:44.027885 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 12:28:44.028497 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 12:28:44.032919 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 12:28:44.037457 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 12:28:44.041412 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 12:28:44.046316 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 12:28:44.051271 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 12:28:44.055190 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 12:28:44.060236 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 12:28:44.066437 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 16 12:28:44.066572 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 12:28:44.072066 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 12:28:44.072229 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 12:28:44.077793 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 12:28:44.077904 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 12:28:44.083118 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 12:28:44.083301 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 12:28:44.089108 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Dec 16 12:28:44.089317 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 12:28:44.094487 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 12:28:44.094607 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 12:28:44.099724 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 12:28:44.104931 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 12:28:44.110816 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 12:28:44.116657 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 12:28:44.122422 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 12:28:44.136466 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 12:28:44.142036 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 12:28:44.159194 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 12:28:44.163655 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 12:28:44.163682 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 12:28:44.168566 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 12:28:44.175237 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 12:28:44.179610 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:28:44.189742 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 12:28:44.194708 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Dec 16 12:28:44.201399 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 12:28:44.202112 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 12:28:44.207814 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 12:28:44.208685 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 12:28:44.215267 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 12:28:44.221028 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 12:28:44.228567 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 12:28:44.233907 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 12:28:44.243480 systemd-journald[1401]: Time spent on flushing to /var/log/journal/a0743158bf4f4c438b5249e509dbcab6 is 13.665ms for 932 entries. Dec 16 12:28:44.243480 systemd-journald[1401]: System Journal (/var/log/journal/a0743158bf4f4c438b5249e509dbcab6) is 8M, max 2.6G, 2.6G free. Dec 16 12:28:44.284865 systemd-journald[1401]: Received client request to flush runtime journal. Dec 16 12:28:44.284897 kernel: loop0: detected capacity change from 0 to 211168 Dec 16 12:28:44.260691 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 12:28:44.268049 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 12:28:44.275241 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 12:28:44.286480 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 12:28:44.298923 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 16 12:28:44.326221 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 12:28:44.350325 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 12:28:44.351802 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 12:28:44.363170 kernel: loop1: detected capacity change from 0 to 119840 Dec 16 12:28:44.373948 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 12:28:44.380496 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 12:28:44.456005 systemd-tmpfiles[1459]: ACLs are not supported, ignoring. Dec 16 12:28:44.456201 systemd-tmpfiles[1459]: ACLs are not supported, ignoring. Dec 16 12:28:44.458935 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 12:28:44.732182 kernel: loop2: detected capacity change from 0 to 27936 Dec 16 12:28:44.735053 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 12:28:44.741401 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 12:28:44.764488 systemd-udevd[1464]: Using default interface naming scheme 'v255'. Dec 16 12:28:44.923218 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 12:28:44.933274 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 12:28:44.991059 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 12:28:44.999953 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 16 12:28:45.053726 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Dec 16 12:28:45.088174 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 12:28:45.105425 kernel: hv_vmbus: registering driver hv_balloon Dec 16 12:28:45.105508 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#132 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Dec 16 12:28:45.105715 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 16 12:28:45.112164 kernel: hv_balloon: Memory hot add disabled on ARM64 Dec 16 12:28:45.146124 kernel: hv_vmbus: registering driver hyperv_fb Dec 16 12:28:45.151286 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 16 12:28:45.151305 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 16 12:28:45.156939 kernel: Console: switching to colour dummy device 80x25 Dec 16 12:28:45.160173 kernel: Console: switching to colour frame buffer device 128x48 Dec 16 12:28:45.171672 kernel: loop3: detected capacity change from 0 to 100632 Dec 16 12:28:45.171998 systemd-networkd[1469]: lo: Link UP Dec 16 12:28:45.172008 systemd-networkd[1469]: lo: Gained carrier Dec 16 12:28:45.173820 systemd-networkd[1469]: Enumeration completed Dec 16 12:28:45.173909 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 12:28:45.179320 systemd-networkd[1469]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:28:45.179329 systemd-networkd[1469]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:28:45.184848 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 12:28:45.194829 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 12:28:45.238171 kernel: mlx5_core 6d0c:00:02.0 enP27916s1: Link up Dec 16 12:28:45.246379 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 16 12:28:45.260728 kernel: hv_netvsc 002248ba-9a5b-0022-48ba-9a5b002248ba eth0: Data path switched to VF: enP27916s1 Dec 16 12:28:45.260400 systemd-networkd[1469]: enP27916s1: Link UP Dec 16 12:28:45.260889 systemd-networkd[1469]: eth0: Link UP Dec 16 12:28:45.260892 systemd-networkd[1469]: eth0: Gained carrier Dec 16 12:28:45.260912 systemd-networkd[1469]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:28:45.262226 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 12:28:45.269484 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:28:45.271242 systemd-networkd[1469]: enP27916s1: Gained carrier Dec 16 12:28:45.271519 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:28:45.283771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:28:45.288427 systemd-networkd[1469]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:28:45.297500 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 12:28:45.297643 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:28:45.306037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 12:28:45.317164 kernel: MACsec IEEE 802.1AE Dec 16 12:28:45.361485 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 16 12:28:45.368353 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 12:28:45.415851 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Dec 16 12:28:45.565177 kernel: loop4: detected capacity change from 0 to 211168 Dec 16 12:28:45.580175 kernel: loop5: detected capacity change from 0 to 119840 Dec 16 12:28:45.593181 kernel: loop6: detected capacity change from 0 to 27936 Dec 16 12:28:45.604181 kernel: loop7: detected capacity change from 0 to 100632 Dec 16 12:28:45.611713 (sd-merge)[1607]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Dec 16 12:28:45.612059 (sd-merge)[1607]: Merged extensions into '/usr'. Dec 16 12:28:45.615887 systemd[1]: Reload requested from client PID 1443 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 12:28:45.615900 systemd[1]: Reloading... Dec 16 12:28:45.669217 zram_generator::config[1638]: No configuration found. Dec 16 12:28:45.834673 systemd[1]: Reloading finished in 218 ms. Dec 16 12:28:45.858087 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 12:28:45.864637 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 12:28:45.875103 systemd[1]: Starting ensure-sysext.service... Dec 16 12:28:45.881256 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 12:28:45.899215 systemd[1]: Reload requested from client PID 1695 ('systemctl') (unit ensure-sysext.service)... Dec 16 12:28:45.899226 systemd[1]: Reloading... Dec 16 12:28:45.932981 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 12:28:45.933340 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 12:28:45.933792 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 12:28:45.934203 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Dec 16 12:28:45.934803 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 12:28:45.935336 systemd-tmpfiles[1696]: ACLs are not supported, ignoring. Dec 16 12:28:45.935459 systemd-tmpfiles[1696]: ACLs are not supported, ignoring. Dec 16 12:28:45.952047 systemd-tmpfiles[1696]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:28:45.953173 systemd-tmpfiles[1696]: Skipping /boot Dec 16 12:28:45.958519 systemd-tmpfiles[1696]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 12:28:45.958609 systemd-tmpfiles[1696]: Skipping /boot Dec 16 12:28:45.962174 zram_generator::config[1724]: No configuration found. Dec 16 12:28:46.120656 systemd[1]: Reloading finished in 221 ms. Dec 16 12:28:46.135590 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 12:28:46.156431 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:28:46.165846 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 12:28:46.171419 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 12:28:46.174336 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 12:28:46.181309 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 12:28:46.189379 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 12:28:46.194548 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 12:28:46.194641 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Dec 16 12:28:46.196902 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 16 12:28:46.203261 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 16 12:28:46.210304 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 16 12:28:46.218918 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:28:46.219774 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:28:46.226398 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:28:46.226535 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:28:46.233072 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:28:46.233246 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:28:46.243641 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:28:46.247357 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:28:46.255707 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:28:46.265599 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:28:46.273523 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:28:46.273624 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:28:46.276615 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 16 12:28:46.286760 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:28:46.286894 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:28:46.293006 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:28:46.293121 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:28:46.299303 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:28:46.299423 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:28:46.305703 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 16 12:28:46.318268 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 16 12:28:46.319261 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 16 12:28:46.327023 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 16 12:28:46.336936 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 16 12:28:46.343631 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 16 12:28:46.349384 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 16 12:28:46.349843 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Dec 16 12:28:46.350435 systemd[1]: Reached target time-set.target - System Time Set.
Dec 16 12:28:46.356508 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 16 12:28:46.357273 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 16 12:28:46.362871 systemd-resolved[1791]: Positive Trust Anchors:
Dec 16 12:28:46.363226 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 16 12:28:46.363354 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 16 12:28:46.363686 systemd-resolved[1791]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 16 12:28:46.363708 systemd-resolved[1791]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 16 12:28:46.369023 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 16 12:28:46.369432 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 16 12:28:46.370236 systemd-resolved[1791]: Using system hostname 'ci-4459.2.2-a-e780e4b687'.
Dec 16 12:28:46.376213 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 16 12:28:46.381946 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 16 12:28:46.382121 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 16 12:28:46.390468 systemd[1]: Finished ensure-sysext.service.
Dec 16 12:28:46.394377 augenrules[1829]: No rules
Dec 16 12:28:46.395315 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 16 12:28:46.396217 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 16 12:28:46.403518 systemd[1]: Reached target network.target - Network.
Dec 16 12:28:46.408586 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 16 12:28:46.414642 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 16 12:28:46.414953 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 16 12:28:46.843419 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 16 12:28:46.849442 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 16 12:28:47.015355 systemd-networkd[1469]: eth0: Gained IPv6LL
Dec 16 12:28:47.019550 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 16 12:28:47.025516 systemd[1]: Reached target network-online.target - Network is Online.
Dec 16 12:28:49.284437 ldconfig[1438]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 16 12:28:49.296759 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 16 12:28:49.303033 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 16 12:28:49.315378 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 16 12:28:49.320031 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 16 12:28:49.324491 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 16 12:28:49.330012 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 16 12:28:49.335334 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 16 12:28:49.339828 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 16 12:28:49.345237 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 16 12:28:49.350526 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 16 12:28:49.350551 systemd[1]: Reached target paths.target - Path Units.
Dec 16 12:28:49.354176 systemd[1]: Reached target timers.target - Timer Units.
Dec 16 12:28:49.372125 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 16 12:28:49.377820 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 16 12:28:49.383014 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Dec 16 12:28:49.388313 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Dec 16 12:28:49.393461 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Dec 16 12:28:49.399213 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 16 12:28:49.403600 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Dec 16 12:28:49.408995 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 16 12:28:49.413465 systemd[1]: Reached target sockets.target - Socket Units.
Dec 16 12:28:49.417164 systemd[1]: Reached target basic.target - Basic System.
Dec 16 12:28:49.420856 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 16 12:28:49.420879 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 16 12:28:49.422860 systemd[1]: Starting chronyd.service - NTP client/server...
Dec 16 12:28:49.438243 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 16 12:28:49.443208 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 16 12:28:49.450272 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 16 12:28:49.456925 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 16 12:28:49.464241 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 16 12:28:49.471286 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 16 12:28:49.475592 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 16 12:28:49.476307 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Dec 16 12:28:49.481890 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Dec 16 12:28:49.488917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 16 12:28:49.496267 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 16 12:28:49.501677 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 16 12:28:49.506839 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 16 12:28:49.513065 chronyd[1845]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Dec 16 12:28:49.514051 KVP[1855]: KVP starting; pid is:1855
Dec 16 12:28:49.515357 extend-filesystems[1854]: Found /dev/sda6
Dec 16 12:28:49.526394 kernel: hv_utils: KVP IC version 4.0
Dec 16 12:28:49.518890 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 16 12:28:49.524514 KVP[1855]: KVP LIC Version: 3.1
Dec 16 12:28:49.534279 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 16 12:28:49.537090 chronyd[1845]: Timezone right/UTC failed leap second check, ignoring
Dec 16 12:28:49.538792 chronyd[1845]: Loaded seccomp filter (level 2)
Dec 16 12:28:49.539579 extend-filesystems[1854]: Found /dev/sda9
Dec 16 12:28:49.555541 extend-filesystems[1854]: Checking size of /dev/sda9
Dec 16 12:28:49.561546 jq[1850]: false
Dec 16 12:28:49.548526 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 16 12:28:49.555340 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 16 12:28:49.559482 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 16 12:28:49.560140 systemd[1]: Starting update-engine.service - Update Engine...
Dec 16 12:28:49.569600 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 16 12:28:49.579245 systemd[1]: Started chronyd.service - NTP client/server.
Dec 16 12:28:49.588173 extend-filesystems[1854]: Old size kept for /dev/sda9
Dec 16 12:28:49.606418 jq[1879]: true
Dec 16 12:28:49.592278 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 16 12:28:49.607463 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 16 12:28:49.607761 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 16 12:28:49.608062 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 16 12:28:49.608318 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 16 12:28:49.614313 systemd[1]: motdgen.service: Deactivated successfully.
Dec 16 12:28:49.615570 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 16 12:28:49.622636 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 16 12:28:49.631571 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 16 12:28:49.633261 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 16 12:28:49.668748 update_engine[1878]: I20251216 12:28:49.668670 1878 main.cc:92] Flatcar Update Engine starting
Dec 16 12:28:49.672482 (ntainerd)[1906]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 16 12:28:49.673362 systemd-logind[1872]: New seat seat0.
Dec 16 12:28:49.673962 systemd-logind[1872]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 16 12:28:49.674107 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 16 12:28:49.685856 jq[1902]: true
Dec 16 12:28:49.765730 tar[1898]: linux-arm64/LICENSE
Dec 16 12:28:49.766183 tar[1898]: linux-arm64/helm
Dec 16 12:28:49.781768 dbus-daemon[1848]: [system] SELinux support is enabled
Dec 16 12:28:49.781917 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 16 12:28:49.786844 update_engine[1878]: I20251216 12:28:49.785621 1878 update_check_scheduler.cc:74] Next update check in 11m45s
Dec 16 12:28:49.790030 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 16 12:28:49.790138 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 16 12:28:49.799653 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 16 12:28:49.799670 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 16 12:28:49.805235 bash[1951]: Updated "/home/core/.ssh/authorized_keys"
Dec 16 12:28:49.807758 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 16 12:28:49.824266 dbus-daemon[1848]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 16 12:28:49.835729 systemd[1]: Started update-engine.service - Update Engine.
Dec 16 12:28:49.845870 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 16 12:28:49.850003 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 16 12:28:49.864933 coreos-metadata[1847]: Dec 16 12:28:49.864 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 16 12:28:49.867077 coreos-metadata[1847]: Dec 16 12:28:49.867 INFO Fetch successful
Dec 16 12:28:49.867197 coreos-metadata[1847]: Dec 16 12:28:49.867 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Dec 16 12:28:49.871965 coreos-metadata[1847]: Dec 16 12:28:49.871 INFO Fetch successful
Dec 16 12:28:49.872037 coreos-metadata[1847]: Dec 16 12:28:49.872 INFO Fetching http://168.63.129.16/machine/c68cda54-4abd-4107-ace8-0ebd11edaec2/f95d741a%2Dadec%2D4179%2D83b9%2D39226f580e18.%5Fci%2D4459.2.2%2Da%2De780e4b687?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Dec 16 12:28:49.873928 coreos-metadata[1847]: Dec 16 12:28:49.873 INFO Fetch successful
Dec 16 12:28:49.874145 coreos-metadata[1847]: Dec 16 12:28:49.874 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Dec 16 12:28:49.887928 coreos-metadata[1847]: Dec 16 12:28:49.887 INFO Fetch successful
Dec 16 12:28:49.931529 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 16 12:28:49.939539 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 16 12:28:50.072165 locksmithd[1993]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 16 12:28:50.086476 sshd_keygen[1886]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 16 12:28:50.104203 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 16 12:28:50.110662 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 16 12:28:50.118719 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Dec 16 12:28:50.139844 systemd[1]: issuegen.service: Deactivated successfully.
Dec 16 12:28:50.144290 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 16 12:28:50.152424 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 16 12:28:50.161270 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Dec 16 12:28:50.178785 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 16 12:28:50.187994 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 16 12:28:50.195581 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Dec 16 12:28:50.203060 systemd[1]: Reached target getty.target - Login Prompts.
Dec 16 12:28:50.251230 tar[1898]: linux-arm64/README.md
Dec 16 12:28:50.267082 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 16 12:28:50.328947 containerd[1906]: time="2025-12-16T12:28:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Dec 16 12:28:50.331340 containerd[1906]: time="2025-12-16T12:28:50.331308100Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Dec 16 12:28:50.339180 containerd[1906]: time="2025-12-16T12:28:50.338653628Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.344µs"
Dec 16 12:28:50.339180 containerd[1906]: time="2025-12-16T12:28:50.338682068Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Dec 16 12:28:50.339180 containerd[1906]: time="2025-12-16T12:28:50.338695548Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Dec 16 12:28:50.339180 containerd[1906]: time="2025-12-16T12:28:50.338811500Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Dec 16 12:28:50.339180 containerd[1906]: time="2025-12-16T12:28:50.338821876Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Dec 16 12:28:50.339180 containerd[1906]: time="2025-12-16T12:28:50.338836820Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 12:28:50.339180 containerd[1906]: time="2025-12-16T12:28:50.338870204Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Dec 16 12:28:50.339180 containerd[1906]: time="2025-12-16T12:28:50.338876596Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 12:28:50.339180 containerd[1906]: time="2025-12-16T12:28:50.339036908Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Dec 16 12:28:50.339180 containerd[1906]: time="2025-12-16T12:28:50.339046612Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 12:28:50.339180 containerd[1906]: time="2025-12-16T12:28:50.339054252Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Dec 16 12:28:50.339180 containerd[1906]: time="2025-12-16T12:28:50.339059828Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Dec 16 12:28:50.339395 containerd[1906]: time="2025-12-16T12:28:50.339124788Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Dec 16 12:28:50.339593 containerd[1906]: time="2025-12-16T12:28:50.339572420Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 12:28:50.339682 containerd[1906]: time="2025-12-16T12:28:50.339666780Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Dec 16 12:28:50.339725 containerd[1906]: time="2025-12-16T12:28:50.339715644Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Dec 16 12:28:50.339790 containerd[1906]: time="2025-12-16T12:28:50.339778436Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Dec 16 12:28:50.339978 containerd[1906]: time="2025-12-16T12:28:50.339963468Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Dec 16 12:28:50.340089 containerd[1906]: time="2025-12-16T12:28:50.340076644Z" level=info msg="metadata content store policy set" policy=shared
Dec 16 12:28:50.356522 containerd[1906]: time="2025-12-16T12:28:50.356498724Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Dec 16 12:28:50.356636 containerd[1906]: time="2025-12-16T12:28:50.356622220Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Dec 16 12:28:50.356780 containerd[1906]: time="2025-12-16T12:28:50.356764028Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Dec 16 12:28:50.356863 containerd[1906]: time="2025-12-16T12:28:50.356850060Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Dec 16 12:28:50.356936 containerd[1906]: time="2025-12-16T12:28:50.356922924Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Dec 16 12:28:50.356990 containerd[1906]: time="2025-12-16T12:28:50.356971036Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Dec 16 12:28:50.357040 containerd[1906]: time="2025-12-16T12:28:50.357028572Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Dec 16 12:28:50.357107 containerd[1906]: time="2025-12-16T12:28:50.357083372Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Dec 16 12:28:50.357180 containerd[1906]: time="2025-12-16T12:28:50.357146684Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Dec 16 12:28:50.357231 containerd[1906]: time="2025-12-16T12:28:50.357218076Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Dec 16 12:28:50.357279 containerd[1906]: time="2025-12-16T12:28:50.357269604Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Dec 16 12:28:50.357347 containerd[1906]: time="2025-12-16T12:28:50.357325212Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Dec 16 12:28:50.357514 containerd[1906]: time="2025-12-16T12:28:50.357495076Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Dec 16 12:28:50.357609 containerd[1906]: time="2025-12-16T12:28:50.357596748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Dec 16 12:28:50.357735 containerd[1906]: time="2025-12-16T12:28:50.357669676Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Dec 16 12:28:50.357735 containerd[1906]: time="2025-12-16T12:28:50.357684004Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Dec 16 12:28:50.357735 containerd[1906]: time="2025-12-16T12:28:50.357692412Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Dec 16 12:28:50.357735 containerd[1906]: time="2025-12-16T12:28:50.357699972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Dec 16 12:28:50.357735 containerd[1906]: time="2025-12-16T12:28:50.357707908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Dec 16 12:28:50.357735 containerd[1906]: time="2025-12-16T12:28:50.357714100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Dec 16 12:28:50.357870 containerd[1906]: time="2025-12-16T12:28:50.357723420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Dec 16 12:28:50.357918 containerd[1906]: time="2025-12-16T12:28:50.357906516Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Dec 16 12:28:50.357968 containerd[1906]: time="2025-12-16T12:28:50.357958476Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Dec 16 12:28:50.358104 containerd[1906]: time="2025-12-16T12:28:50.358061796Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Dec 16 12:28:50.358104 containerd[1906]: time="2025-12-16T12:28:50.358077172Z" level=info msg="Start snapshots syncer"
Dec 16 12:28:50.358244 containerd[1906]: time="2025-12-16T12:28:50.358177228Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Dec 16 12:28:50.358534 containerd[1906]: time="2025-12-16T12:28:50.358502812Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Dec 16 12:28:50.358676 containerd[1906]: time="2025-12-16T12:28:50.358662796Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Dec 16 12:28:50.358792 containerd[1906]: time="2025-12-16T12:28:50.358778516Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Dec 16 12:28:50.358970 containerd[1906]: time="2025-12-16T12:28:50.358955324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Dec 16 12:28:50.359121 containerd[1906]: time="2025-12-16T12:28:50.359021476Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Dec 16 12:28:50.359121 containerd[1906]: time="2025-12-16T12:28:50.359034220Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Dec 16 12:28:50.359121 containerd[1906]: time="2025-12-16T12:28:50.359047508Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Dec 16 12:28:50.359121 containerd[1906]: time="2025-12-16T12:28:50.359056388Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Dec 16 12:28:50.359121 containerd[1906]: time="2025-12-16T12:28:50.359065908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Dec 16 12:28:50.359240 containerd[1906]: time="2025-12-16T12:28:50.359073116Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Dec 16 12:28:50.359303 containerd[1906]: time="2025-12-16T12:28:50.359284260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Dec 16 12:28:50.359353 containerd[1906]: time="2025-12-16T12:28:50.359343540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Dec 16 12:28:50.359398 containerd[1906]: time="2025-12-16T12:28:50.359388316Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Dec 16 12:28:50.359503 containerd[1906]: time="2025-12-16T12:28:50.359463548Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 12:28:50.359503 containerd[1906]: time="2025-12-16T12:28:50.359479700Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Dec 16 12:28:50.359503 containerd[1906]: time="2025-12-16T12:28:50.359485524Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 12:28:50.359503 containerd[1906]: time="2025-12-16T12:28:50.359491652Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Dec 16 12:28:50.359613 containerd[1906]: time="2025-12-16T12:28:50.359597676Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Dec 16 12:28:50.359656 containerd[1906]: time="2025-12-16T12:28:50.359646044Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Dec 16 12:28:50.359776 containerd[1906]: time="2025-12-16T12:28:50.359682868Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Dec 16 12:28:50.359776 containerd[1906]: time="2025-12-16T12:28:50.359701476Z" level=info msg="runtime interface created"
Dec 16 12:28:50.359776 containerd[1906]: time="2025-12-16T12:28:50.359705628Z" level=info msg="created NRI interface"
Dec 16 12:28:50.359776 containerd[1906]: time="2025-12-16T12:28:50.359712612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Dec 16 12:28:50.359776 containerd[1906]: time="2025-12-16T12:28:50.359722252Z" level=info msg="Connect containerd service"
Dec 16 12:28:50.359870 containerd[1906]: time="2025-12-16T12:28:50.359856204Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 16 12:28:50.360629 containerd[1906]: time="2025-12-16T12:28:50.360590324Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 16 12:28:50.485207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 16 12:28:50.490860 (kubelet)[2050]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 16 12:28:50.640255 containerd[1906]: time="2025-12-16T12:28:50.640123652Z" level=info msg="Start subscribing containerd event"
Dec 16 12:28:50.640398 containerd[1906]: time="2025-12-16T12:28:50.640384084Z" level=info msg="Start recovering state"
Dec 16 12:28:50.640510 containerd[1906]: time="2025-12-16T12:28:50.640193260Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 16 12:28:50.640585 containerd[1906]: time="2025-12-16T12:28:50.640545468Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 16 12:28:50.640664 containerd[1906]: time="2025-12-16T12:28:50.640650188Z" level=info msg="Start event monitor"
Dec 16 12:28:50.640716 containerd[1906]: time="2025-12-16T12:28:50.640703388Z" level=info msg="Start cni network conf syncer for default"
Dec 16 12:28:50.640752 containerd[1906]: time="2025-12-16T12:28:50.640742068Z" level=info msg="Start streaming server"
Dec 16 12:28:50.640794 containerd[1906]: time="2025-12-16T12:28:50.640783580Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Dec 16 12:28:50.640827 containerd[1906]: time="2025-12-16T12:28:50.640817276Z" level=info msg="runtime interface starting up..."
Dec 16 12:28:50.640859 containerd[1906]: time="2025-12-16T12:28:50.640848188Z" level=info msg="starting plugins..."
Dec 16 12:28:50.640906 containerd[1906]: time="2025-12-16T12:28:50.640896228Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 12:28:50.641349 containerd[1906]: time="2025-12-16T12:28:50.641329060Z" level=info msg="containerd successfully booted in 0.312774s" Dec 16 12:28:50.641462 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 12:28:50.647339 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 12:28:50.651878 systemd[1]: Startup finished in 1.710s (kernel) + 10.874s (initrd) + 9.708s (userspace) = 22.292s. Dec 16 12:28:50.865401 kubelet[2050]: E1216 12:28:50.865343 2050 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:28:50.867652 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:28:50.867865 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:28:50.869264 systemd[1]: kubelet.service: Consumed 560ms CPU time, 258.4M memory peak. Dec 16 12:28:50.955111 login[2030]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:50.956780 login[2031]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:28:50.964089 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 12:28:50.964908 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 12:28:50.966764 systemd-logind[1872]: New session 1 of user core. Dec 16 12:28:50.970653 systemd-logind[1872]: New session 2 of user core. Dec 16 12:28:50.980331 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Dec 16 12:28:50.982048 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 12:28:50.989787 (systemd)[2068]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 12:28:50.991427 systemd-logind[1872]: New session c1 of user core. Dec 16 12:28:51.109476 systemd[2068]: Queued start job for default target default.target. Dec 16 12:28:51.117897 systemd[2068]: Created slice app.slice - User Application Slice. Dec 16 12:28:51.117920 systemd[2068]: Reached target paths.target - Paths. Dec 16 12:28:51.117953 systemd[2068]: Reached target timers.target - Timers. Dec 16 12:28:51.118945 systemd[2068]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 12:28:51.126010 systemd[2068]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 12:28:51.126055 systemd[2068]: Reached target sockets.target - Sockets. Dec 16 12:28:51.126086 systemd[2068]: Reached target basic.target - Basic System. Dec 16 12:28:51.126106 systemd[2068]: Reached target default.target - Main User Target. Dec 16 12:28:51.126125 systemd[2068]: Startup finished in 128ms. Dec 16 12:28:51.126391 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 12:28:51.128520 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 12:28:51.129008 systemd[1]: Started session-2.scope - Session 2 of User core. 
Dec 16 12:28:51.595939 waagent[2027]: 2025-12-16T12:28:51.595864Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Dec 16 12:28:51.604462 waagent[2027]: 2025-12-16T12:28:51.600977Z INFO Daemon Daemon OS: flatcar 4459.2.2 Dec 16 12:28:51.604649 waagent[2027]: 2025-12-16T12:28:51.604613Z INFO Daemon Daemon Python: 3.11.13 Dec 16 12:28:51.607883 waagent[2027]: 2025-12-16T12:28:51.607842Z INFO Daemon Daemon Run daemon Dec 16 12:28:51.611147 waagent[2027]: 2025-12-16T12:28:51.611090Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.2' Dec 16 12:28:51.617709 waagent[2027]: 2025-12-16T12:28:51.617679Z INFO Daemon Daemon Using waagent for provisioning Dec 16 12:28:51.621528 waagent[2027]: 2025-12-16T12:28:51.621495Z INFO Daemon Daemon Activate resource disk Dec 16 12:28:51.625198 waagent[2027]: 2025-12-16T12:28:51.625168Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 16 12:28:51.633247 waagent[2027]: 2025-12-16T12:28:51.633208Z INFO Daemon Daemon Found device: None Dec 16 12:28:51.636465 waagent[2027]: 2025-12-16T12:28:51.636434Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 16 12:28:51.643462 waagent[2027]: 2025-12-16T12:28:51.643432Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 16 12:28:51.651953 waagent[2027]: 2025-12-16T12:28:51.651911Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 12:28:51.656645 waagent[2027]: 2025-12-16T12:28:51.656613Z INFO Daemon Daemon Running default provisioning handler Dec 16 12:28:51.665712 waagent[2027]: 2025-12-16T12:28:51.665658Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Dec 16 12:28:51.675738 waagent[2027]: 2025-12-16T12:28:51.675695Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 16 12:28:51.682595 waagent[2027]: 2025-12-16T12:28:51.682558Z INFO Daemon Daemon cloud-init is enabled: False Dec 16 12:28:51.686474 waagent[2027]: 2025-12-16T12:28:51.686431Z INFO Daemon Daemon Copying ovf-env.xml Dec 16 12:28:51.733544 waagent[2027]: 2025-12-16T12:28:51.733046Z INFO Daemon Daemon Successfully mounted dvd Dec 16 12:28:51.758653 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 16 12:28:51.761041 waagent[2027]: 2025-12-16T12:28:51.760990Z INFO Daemon Daemon Detect protocol endpoint Dec 16 12:28:51.764737 waagent[2027]: 2025-12-16T12:28:51.764701Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 16 12:28:51.769030 waagent[2027]: 2025-12-16T12:28:51.769000Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Dec 16 12:28:51.773795 waagent[2027]: 2025-12-16T12:28:51.773770Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 16 12:28:51.777790 waagent[2027]: 2025-12-16T12:28:51.777760Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 16 12:28:51.781755 waagent[2027]: 2025-12-16T12:28:51.781728Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 16 12:28:51.821066 waagent[2027]: 2025-12-16T12:28:51.821022Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 16 12:28:51.826158 waagent[2027]: 2025-12-16T12:28:51.826132Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 16 12:28:51.830165 waagent[2027]: 2025-12-16T12:28:51.830132Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 16 12:28:51.918035 waagent[2027]: 2025-12-16T12:28:51.917902Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 16 12:28:51.923253 waagent[2027]: 2025-12-16T12:28:51.923213Z INFO Daemon Daemon Forcing an update of the goal state. 
Dec 16 12:28:51.930926 waagent[2027]: 2025-12-16T12:28:51.930886Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 12:28:51.972157 waagent[2027]: 2025-12-16T12:28:51.972105Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Dec 16 12:28:51.976763 waagent[2027]: 2025-12-16T12:28:51.976723Z INFO Daemon Dec 16 12:28:51.979079 waagent[2027]: 2025-12-16T12:28:51.979049Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: d1b23958-0406-42f7-a842-a907c288a784 eTag: 9253206773665065626 source: Fabric] Dec 16 12:28:51.987462 waagent[2027]: 2025-12-16T12:28:51.987427Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 16 12:28:51.992230 waagent[2027]: 2025-12-16T12:28:51.992199Z INFO Daemon Dec 16 12:28:51.994135 waagent[2027]: 2025-12-16T12:28:51.994105Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 16 12:28:52.003019 waagent[2027]: 2025-12-16T12:28:52.002989Z INFO Daemon Daemon Downloading artifacts profile blob Dec 16 12:28:52.069189 waagent[2027]: 2025-12-16T12:28:52.069031Z INFO Daemon Downloaded certificate {'thumbprint': 'A21A605E0634B2F513F0C30ADB5CA2673EF17791', 'hasPrivateKey': True} Dec 16 12:28:52.076266 waagent[2027]: 2025-12-16T12:28:52.076226Z INFO Daemon Fetch goal state completed Dec 16 12:28:52.089269 waagent[2027]: 2025-12-16T12:28:52.089238Z INFO Daemon Daemon Starting provisioning Dec 16 12:28:52.093572 waagent[2027]: 2025-12-16T12:28:52.093530Z INFO Daemon Daemon Handle ovf-env.xml. 
Dec 16 12:28:52.097484 waagent[2027]: 2025-12-16T12:28:52.097456Z INFO Daemon Daemon Set hostname [ci-4459.2.2-a-e780e4b687] Dec 16 12:28:52.176085 waagent[2027]: 2025-12-16T12:28:52.176019Z INFO Daemon Daemon Publish hostname [ci-4459.2.2-a-e780e4b687] Dec 16 12:28:52.181400 waagent[2027]: 2025-12-16T12:28:52.181348Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 16 12:28:52.186142 waagent[2027]: 2025-12-16T12:28:52.186105Z INFO Daemon Daemon Primary interface is [eth0] Dec 16 12:28:52.196020 systemd-networkd[1469]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 12:28:52.196273 systemd-networkd[1469]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 12:28:52.196330 systemd-networkd[1469]: eth0: DHCP lease lost Dec 16 12:28:52.197065 waagent[2027]: 2025-12-16T12:28:52.197016Z INFO Daemon Daemon Create user account if not exists Dec 16 12:28:52.201336 waagent[2027]: 2025-12-16T12:28:52.201298Z INFO Daemon Daemon User core already exists, skip useradd Dec 16 12:28:52.205856 waagent[2027]: 2025-12-16T12:28:52.205815Z INFO Daemon Daemon Configure sudoer Dec 16 12:28:52.216664 waagent[2027]: 2025-12-16T12:28:52.213528Z INFO Daemon Daemon Configure sshd Dec 16 12:28:52.220853 waagent[2027]: 2025-12-16T12:28:52.220808Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 16 12:28:52.231197 systemd-networkd[1469]: eth0: DHCPv4 address 10.200.20.38/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 16 12:28:52.231333 waagent[2027]: 2025-12-16T12:28:52.231208Z INFO Daemon Daemon Deploy ssh public key. 
Dec 16 12:28:53.303330 waagent[2027]: 2025-12-16T12:28:53.303284Z INFO Daemon Daemon Provisioning complete Dec 16 12:28:53.317690 waagent[2027]: 2025-12-16T12:28:53.317652Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 16 12:28:53.322876 waagent[2027]: 2025-12-16T12:28:53.322843Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 16 12:28:53.330880 waagent[2027]: 2025-12-16T12:28:53.330850Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Dec 16 12:28:53.430178 waagent[2118]: 2025-12-16T12:28:53.430103Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Dec 16 12:28:53.430461 waagent[2118]: 2025-12-16T12:28:53.430252Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.2 Dec 16 12:28:53.430461 waagent[2118]: 2025-12-16T12:28:53.430291Z INFO ExtHandler ExtHandler Python: 3.11.13 Dec 16 12:28:53.430461 waagent[2118]: 2025-12-16T12:28:53.430325Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Dec 16 12:28:53.464379 waagent[2118]: 2025-12-16T12:28:53.464313Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Dec 16 12:28:53.464531 waagent[2118]: 2025-12-16T12:28:53.464504Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:28:53.464570 waagent[2118]: 2025-12-16T12:28:53.464553Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:28:53.470437 waagent[2118]: 2025-12-16T12:28:53.470390Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 16 12:28:53.475395 waagent[2118]: 2025-12-16T12:28:53.475361Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Dec 16 12:28:53.475758 waagent[2118]: 2025-12-16T12:28:53.475723Z INFO ExtHandler Dec 16 12:28:53.475810 waagent[2118]: 
2025-12-16T12:28:53.475792Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 2a01d6cd-5176-463d-912c-13a8edb6b528 eTag: 9253206773665065626 source: Fabric] Dec 16 12:28:53.476030 waagent[2118]: 2025-12-16T12:28:53.476002Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Dec 16 12:28:53.476457 waagent[2118]: 2025-12-16T12:28:53.476424Z INFO ExtHandler Dec 16 12:28:53.476497 waagent[2118]: 2025-12-16T12:28:53.476479Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 16 12:28:53.479827 waagent[2118]: 2025-12-16T12:28:53.479799Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 16 12:28:53.532895 waagent[2118]: 2025-12-16T12:28:53.532833Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A21A605E0634B2F513F0C30ADB5CA2673EF17791', 'hasPrivateKey': True} Dec 16 12:28:53.533274 waagent[2118]: 2025-12-16T12:28:53.533238Z INFO ExtHandler Fetch goal state completed Dec 16 12:28:53.545646 waagent[2118]: 2025-12-16T12:28:53.545595Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Dec 16 12:28:53.548943 waagent[2118]: 2025-12-16T12:28:53.548896Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2118 Dec 16 12:28:53.549040 waagent[2118]: 2025-12-16T12:28:53.549015Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 16 12:28:53.549303 waagent[2118]: 2025-12-16T12:28:53.549273Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Dec 16 12:28:53.550384 waagent[2118]: 2025-12-16T12:28:53.550351Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] Dec 16 12:28:53.550701 waagent[2118]: 2025-12-16T12:28:53.550669Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', 
'4459.2.2', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Dec 16 12:28:53.550814 waagent[2118]: 2025-12-16T12:28:53.550791Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Dec 16 12:28:53.551250 waagent[2118]: 2025-12-16T12:28:53.551217Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 16 12:28:53.583253 waagent[2118]: 2025-12-16T12:28:53.583218Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 16 12:28:53.583417 waagent[2118]: 2025-12-16T12:28:53.583388Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 16 12:28:53.587860 waagent[2118]: 2025-12-16T12:28:53.587833Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 16 12:28:53.592538 systemd[1]: Reload requested from client PID 2133 ('systemctl') (unit waagent.service)... Dec 16 12:28:53.592748 systemd[1]: Reloading... Dec 16 12:28:53.663279 zram_generator::config[2181]: No configuration found. Dec 16 12:28:53.811330 systemd[1]: Reloading finished in 218 ms. Dec 16 12:28:53.821400 waagent[2118]: 2025-12-16T12:28:53.821331Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 16 12:28:53.823181 waagent[2118]: 2025-12-16T12:28:53.821473Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 16 12:28:54.258896 waagent[2118]: 2025-12-16T12:28:54.258107Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 16 12:28:54.258896 waagent[2118]: 2025-12-16T12:28:54.258429Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. 
cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Dec 16 12:28:54.259112 waagent[2118]: 2025-12-16T12:28:54.259058Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 16 12:28:54.259214 waagent[2118]: 2025-12-16T12:28:54.259178Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:28:54.259310 waagent[2118]: 2025-12-16T12:28:54.259287Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:28:54.259488 waagent[2118]: 2025-12-16T12:28:54.259458Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 16 12:28:54.259829 waagent[2118]: 2025-12-16T12:28:54.259791Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 16 12:28:54.259955 waagent[2118]: 2025-12-16T12:28:54.259917Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 16 12:28:54.259955 waagent[2118]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 16 12:28:54.259955 waagent[2118]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 16 12:28:54.259955 waagent[2118]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 16 12:28:54.259955 waagent[2118]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:28:54.259955 waagent[2118]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:28:54.259955 waagent[2118]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 16 12:28:54.260482 waagent[2118]: 2025-12-16T12:28:54.260385Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 16 12:28:54.260482 waagent[2118]: 2025-12-16T12:28:54.260444Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 16 12:28:54.260550 waagent[2118]: 2025-12-16T12:28:54.260524Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 16 12:28:54.260658 waagent[2118]: 2025-12-16T12:28:54.260631Z INFO 
EnvHandler ExtHandler Configure routes Dec 16 12:28:54.260701 waagent[2118]: 2025-12-16T12:28:54.260680Z INFO EnvHandler ExtHandler Gateway:None Dec 16 12:28:54.260725 waagent[2118]: 2025-12-16T12:28:54.260712Z INFO EnvHandler ExtHandler Routes:None Dec 16 12:28:54.260862 waagent[2118]: 2025-12-16T12:28:54.260817Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 16 12:28:54.261556 waagent[2118]: 2025-12-16T12:28:54.261524Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 16 12:28:54.261661 waagent[2118]: 2025-12-16T12:28:54.261619Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 16 12:28:54.261750 waagent[2118]: 2025-12-16T12:28:54.261724Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 16 12:28:54.269179 waagent[2118]: 2025-12-16T12:28:54.267986Z INFO ExtHandler ExtHandler Dec 16 12:28:54.269179 waagent[2118]: 2025-12-16T12:28:54.268054Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 22d194dc-8640-4783-8e97-fc5aadd77644 correlation f5832def-ba97-4723-847e-a82d6c72f3c0 created: 2025-12-16T12:28:01.890457Z] Dec 16 12:28:54.269179 waagent[2118]: 2025-12-16T12:28:54.268340Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 16 12:28:54.269179 waagent[2118]: 2025-12-16T12:28:54.268732Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Dec 16 12:28:54.295423 waagent[2118]: 2025-12-16T12:28:54.295366Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Dec 16 12:28:54.295423 waagent[2118]: Try `iptables -h' or 'iptables --help' for more information.) 
Dec 16 12:28:54.295740 waagent[2118]: 2025-12-16T12:28:54.295706Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: DF1E9B9B-6E96-449F-AFC9-6FC661C681D1;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Dec 16 12:28:54.335233 waagent[2118]: 2025-12-16T12:28:54.334508Z INFO MonitorHandler ExtHandler Network interfaces: Dec 16 12:28:54.335233 waagent[2118]: Executing ['ip', '-a', '-o', 'link']: Dec 16 12:28:54.335233 waagent[2118]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 16 12:28:54.335233 waagent[2118]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:ba:9a:5b brd ff:ff:ff:ff:ff:ff Dec 16 12:28:54.335233 waagent[2118]: 3: enP27916s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:ba:9a:5b brd ff:ff:ff:ff:ff:ff\ altname enP27916p0s2 Dec 16 12:28:54.335233 waagent[2118]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 16 12:28:54.335233 waagent[2118]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 16 12:28:54.335233 waagent[2118]: 2: eth0 inet 10.200.20.38/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 16 12:28:54.335233 waagent[2118]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 16 12:28:54.335233 waagent[2118]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 16 12:28:54.335233 waagent[2118]: 2: eth0 inet6 fe80::222:48ff:feba:9a5b/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 16 12:28:54.353450 waagent[2118]: 2025-12-16T12:28:54.353404Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Dec 16 12:28:54.353450 waagent[2118]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 
16 12:28:54.353450 waagent[2118]: pkts bytes target prot opt in out source destination Dec 16 12:28:54.353450 waagent[2118]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:28:54.353450 waagent[2118]: pkts bytes target prot opt in out source destination Dec 16 12:28:54.353450 waagent[2118]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:28:54.353450 waagent[2118]: pkts bytes target prot opt in out source destination Dec 16 12:28:54.353450 waagent[2118]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 12:28:54.353450 waagent[2118]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 12:28:54.353450 waagent[2118]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 12:28:54.356935 waagent[2118]: 2025-12-16T12:28:54.356643Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 16 12:28:54.356935 waagent[2118]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:28:54.356935 waagent[2118]: pkts bytes target prot opt in out source destination Dec 16 12:28:54.356935 waagent[2118]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:28:54.356935 waagent[2118]: pkts bytes target prot opt in out source destination Dec 16 12:28:54.356935 waagent[2118]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 16 12:28:54.356935 waagent[2118]: pkts bytes target prot opt in out source destination Dec 16 12:28:54.356935 waagent[2118]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 16 12:28:54.356935 waagent[2118]: 2 112 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 16 12:28:54.356935 waagent[2118]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 16 12:28:54.356935 waagent[2118]: 2025-12-16T12:28:54.356854Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 16 12:29:01.079521 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Dec 16 12:29:01.081192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:01.186103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:29:01.188981 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:29:01.328907 kubelet[2267]: E1216 12:29:01.328851 2267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:29:01.332029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:29:01.332257 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:29:01.332803 systemd[1]: kubelet.service: Consumed 111ms CPU time, 104.3M memory peak. Dec 16 12:29:11.579632 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 12:29:11.582321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:11.926575 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:29:11.931537 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:29:11.959896 kubelet[2281]: E1216 12:29:11.959849 2281 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:29:11.962004 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:29:11.962120 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:29:11.962528 systemd[1]: kubelet.service: Consumed 106ms CPU time, 107.3M memory peak. Dec 16 12:29:13.347341 chronyd[1845]: Selected source PHC0 Dec 16 12:29:18.275975 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 12:29:18.277504 systemd[1]: Started sshd@0-10.200.20.38:22-10.200.16.10:51280.service - OpenSSH per-connection server daemon (10.200.16.10:51280). Dec 16 12:29:18.868613 sshd[2289]: Accepted publickey for core from 10.200.16.10 port 51280 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:29:18.869697 sshd-session[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:18.873238 systemd-logind[1872]: New session 3 of user core. Dec 16 12:29:18.889458 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 12:29:19.315306 systemd[1]: Started sshd@1-10.200.20.38:22-10.200.16.10:51292.service - OpenSSH per-connection server daemon (10.200.16.10:51292). 
Dec 16 12:29:19.806159 sshd[2295]: Accepted publickey for core from 10.200.16.10 port 51292 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:29:19.807233 sshd-session[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:19.810857 systemd-logind[1872]: New session 4 of user core. Dec 16 12:29:19.821365 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 12:29:20.157796 sshd[2298]: Connection closed by 10.200.16.10 port 51292 Dec 16 12:29:20.158352 sshd-session[2295]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:20.161424 systemd[1]: sshd@1-10.200.20.38:22-10.200.16.10:51292.service: Deactivated successfully. Dec 16 12:29:20.162780 systemd[1]: session-4.scope: Deactivated successfully. Dec 16 12:29:20.163400 systemd-logind[1872]: Session 4 logged out. Waiting for processes to exit. Dec 16 12:29:20.164380 systemd-logind[1872]: Removed session 4. Dec 16 12:29:20.243623 systemd[1]: Started sshd@2-10.200.20.38:22-10.200.16.10:47704.service - OpenSSH per-connection server daemon (10.200.16.10:47704). Dec 16 12:29:20.730669 sshd[2304]: Accepted publickey for core from 10.200.16.10 port 47704 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:29:20.731760 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:20.735245 systemd-logind[1872]: New session 5 of user core. Dec 16 12:29:20.743445 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 16 12:29:21.078264 sshd[2307]: Connection closed by 10.200.16.10 port 47704 Dec 16 12:29:21.078939 sshd-session[2304]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:21.082307 systemd[1]: sshd@2-10.200.20.38:22-10.200.16.10:47704.service: Deactivated successfully. Dec 16 12:29:21.083682 systemd[1]: session-5.scope: Deactivated successfully. Dec 16 12:29:21.084295 systemd-logind[1872]: Session 5 logged out. 
Waiting for processes to exit. Dec 16 12:29:21.085309 systemd-logind[1872]: Removed session 5. Dec 16 12:29:21.168675 systemd[1]: Started sshd@3-10.200.20.38:22-10.200.16.10:47720.service - OpenSSH per-connection server daemon (10.200.16.10:47720). Dec 16 12:29:21.654765 sshd[2313]: Accepted publickey for core from 10.200.16.10 port 47720 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:29:21.655875 sshd-session[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:21.659688 systemd-logind[1872]: New session 6 of user core. Dec 16 12:29:21.669363 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 12:29:22.004750 sshd[2316]: Connection closed by 10.200.16.10 port 47720 Dec 16 12:29:22.004147 sshd-session[2313]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:22.007694 systemd[1]: sshd@3-10.200.20.38:22-10.200.16.10:47720.service: Deactivated successfully. Dec 16 12:29:22.009009 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 12:29:22.010563 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 12:29:22.011210 systemd-logind[1872]: Session 6 logged out. Waiting for processes to exit. Dec 16 12:29:22.012573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:22.014533 systemd-logind[1872]: Removed session 6. Dec 16 12:29:22.085790 systemd[1]: Started sshd@4-10.200.20.38:22-10.200.16.10:47732.service - OpenSSH per-connection server daemon (10.200.16.10:47732). Dec 16 12:29:22.145421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:29:22.155399 (kubelet)[2333]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:29:22.216855 kubelet[2333]: E1216 12:29:22.216798 2333 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:29:22.219003 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:29:22.219111 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:29:22.219615 systemd[1]: kubelet.service: Consumed 105ms CPU time, 104.2M memory peak. Dec 16 12:29:22.540926 sshd[2325]: Accepted publickey for core from 10.200.16.10 port 47732 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:29:22.541985 sshd-session[2325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:22.545636 systemd-logind[1872]: New session 7 of user core. Dec 16 12:29:22.560525 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 12:29:22.925739 sudo[2341]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 16 12:29:22.925953 sudo[2341]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:29:22.954507 sudo[2341]: pam_unix(sudo:session): session closed for user root Dec 16 12:29:23.031503 sshd[2340]: Connection closed by 10.200.16.10 port 47732 Dec 16 12:29:23.032192 sshd-session[2325]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:23.035974 systemd-logind[1872]: Session 7 logged out. Waiting for processes to exit. Dec 16 12:29:23.036519 systemd[1]: sshd@4-10.200.20.38:22-10.200.16.10:47732.service: Deactivated successfully. 
Dec 16 12:29:23.037851 systemd[1]: session-7.scope: Deactivated successfully. Dec 16 12:29:23.040370 systemd-logind[1872]: Removed session 7. Dec 16 12:29:23.117374 systemd[1]: Started sshd@5-10.200.20.38:22-10.200.16.10:47746.service - OpenSSH per-connection server daemon (10.200.16.10:47746). Dec 16 12:29:23.567633 sshd[2347]: Accepted publickey for core from 10.200.16.10 port 47746 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:29:23.568771 sshd-session[2347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:23.572317 systemd-logind[1872]: New session 8 of user core. Dec 16 12:29:23.582252 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 12:29:23.823030 sudo[2352]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 16 12:29:23.823324 sudo[2352]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:29:23.829464 sudo[2352]: pam_unix(sudo:session): session closed for user root Dec 16 12:29:23.833028 sudo[2351]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 16 12:29:23.833341 sudo[2351]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:29:23.839892 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 12:29:23.867876 augenrules[2374]: No rules Dec 16 12:29:23.869104 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 12:29:23.869411 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Dec 16 12:29:23.870568 sudo[2351]: pam_unix(sudo:session): session closed for user root Dec 16 12:29:23.947449 sshd[2350]: Connection closed by 10.200.16.10 port 47746 Dec 16 12:29:23.948212 sshd-session[2347]: pam_unix(sshd:session): session closed for user core Dec 16 12:29:23.951732 systemd[1]: sshd@5-10.200.20.38:22-10.200.16.10:47746.service: Deactivated successfully. Dec 16 12:29:23.953017 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 12:29:23.953584 systemd-logind[1872]: Session 8 logged out. Waiting for processes to exit. Dec 16 12:29:23.954583 systemd-logind[1872]: Removed session 8. Dec 16 12:29:24.035473 systemd[1]: Started sshd@6-10.200.20.38:22-10.200.16.10:47756.service - OpenSSH per-connection server daemon (10.200.16.10:47756). Dec 16 12:29:24.523694 sshd[2383]: Accepted publickey for core from 10.200.16.10 port 47756 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:29:24.524795 sshd-session[2383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:29:24.528287 systemd-logind[1872]: New session 9 of user core. Dec 16 12:29:24.536266 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 12:29:24.798504 sudo[2387]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 12:29:24.798717 sudo[2387]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 12:29:25.913374 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 16 12:29:25.923403 (dockerd)[2405]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 12:29:26.566979 dockerd[2405]: time="2025-12-16T12:29:26.566923266Z" level=info msg="Starting up" Dec 16 12:29:26.567867 dockerd[2405]: time="2025-12-16T12:29:26.567839554Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 12:29:26.575249 dockerd[2405]: time="2025-12-16T12:29:26.575218275Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 12:29:26.696510 dockerd[2405]: time="2025-12-16T12:29:26.696338562Z" level=info msg="Loading containers: start." Dec 16 12:29:26.711170 kernel: Initializing XFRM netlink socket Dec 16 12:29:26.991920 systemd-networkd[1469]: docker0: Link UP Dec 16 12:29:27.011744 dockerd[2405]: time="2025-12-16T12:29:27.011701383Z" level=info msg="Loading containers: done." Dec 16 12:29:27.021131 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1596562993-merged.mount: Deactivated successfully. 
Dec 16 12:29:27.034983 dockerd[2405]: time="2025-12-16T12:29:27.034897982Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 12:29:27.035119 dockerd[2405]: time="2025-12-16T12:29:27.034993672Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 12:29:27.035119 dockerd[2405]: time="2025-12-16T12:29:27.035069066Z" level=info msg="Initializing buildkit" Dec 16 12:29:27.087696 dockerd[2405]: time="2025-12-16T12:29:27.087651041Z" level=info msg="Completed buildkit initialization" Dec 16 12:29:27.090647 dockerd[2405]: time="2025-12-16T12:29:27.090618071Z" level=info msg="Daemon has completed initialization" Dec 16 12:29:27.090775 dockerd[2405]: time="2025-12-16T12:29:27.090733978Z" level=info msg="API listen on /run/docker.sock" Dec 16 12:29:27.090838 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 12:29:27.915424 containerd[1906]: time="2025-12-16T12:29:27.915361413Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 16 12:29:28.695134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount401829047.mount: Deactivated successfully. 
Dec 16 12:29:29.862198 containerd[1906]: time="2025-12-16T12:29:29.861994798Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:29.865202 containerd[1906]: time="2025-12-16T12:29:29.865165656Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387281" Dec 16 12:29:29.868366 containerd[1906]: time="2025-12-16T12:29:29.868320489Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:29.873792 containerd[1906]: time="2025-12-16T12:29:29.873741404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:29.874560 containerd[1906]: time="2025-12-16T12:29:29.874260806Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.958705884s" Dec 16 12:29:29.874560 containerd[1906]: time="2025-12-16T12:29:29.874292143Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Dec 16 12:29:29.875626 containerd[1906]: time="2025-12-16T12:29:29.875602887Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 16 12:29:31.080878 containerd[1906]: time="2025-12-16T12:29:31.080814428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:31.085598 containerd[1906]: time="2025-12-16T12:29:31.085570126Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553081" Dec 16 12:29:31.088662 containerd[1906]: time="2025-12-16T12:29:31.088633812Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:31.094039 containerd[1906]: time="2025-12-16T12:29:31.094009597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:31.095092 containerd[1906]: time="2025-12-16T12:29:31.095055115Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.21942426s" Dec 16 12:29:31.095117 containerd[1906]: time="2025-12-16T12:29:31.095098053Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Dec 16 12:29:31.095663 containerd[1906]: time="2025-12-16T12:29:31.095560445Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 16 12:29:32.329393 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 12:29:32.330710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:32.424855 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 12:29:32.430354 (kubelet)[2687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:29:32.556960 kubelet[2687]: E1216 12:29:32.556899 2687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:29:32.559254 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:29:32.559473 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:29:32.559992 systemd[1]: kubelet.service: Consumed 104ms CPU time, 104.7M memory peak. Dec 16 12:29:32.853189 containerd[1906]: time="2025-12-16T12:29:32.852743678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:32.856689 containerd[1906]: time="2025-12-16T12:29:32.856660538Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298067" Dec 16 12:29:32.859959 containerd[1906]: time="2025-12-16T12:29:32.859932496Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:32.865982 containerd[1906]: time="2025-12-16T12:29:32.865955376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:32.866500 containerd[1906]: time="2025-12-16T12:29:32.866470843Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id 
\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.770786338s" Dec 16 12:29:32.866602 containerd[1906]: time="2025-12-16T12:29:32.866588607Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Dec 16 12:29:32.867128 containerd[1906]: time="2025-12-16T12:29:32.867052384Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 16 12:29:33.213145 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 16 12:29:34.769229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount71599061.mount: Deactivated successfully. Dec 16 12:29:35.087864 containerd[1906]: time="2025-12-16T12:29:35.087815700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:35.091625 containerd[1906]: time="2025-12-16T12:29:35.091596940Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258673" Dec 16 12:29:35.094789 containerd[1906]: time="2025-12-16T12:29:35.094760606Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:35.100141 containerd[1906]: time="2025-12-16T12:29:35.100111718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:35.100700 containerd[1906]: time="2025-12-16T12:29:35.100389520Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id 
\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 2.233147362s" Dec 16 12:29:35.100700 containerd[1906]: time="2025-12-16T12:29:35.100412841Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Dec 16 12:29:35.100846 containerd[1906]: time="2025-12-16T12:29:35.100818592Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 16 12:29:35.368256 update_engine[1878]: I20251216 12:29:35.367828 1878 update_attempter.cc:509] Updating boot flags... Dec 16 12:29:35.773616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3213533097.mount: Deactivated successfully. Dec 16 12:29:36.744447 containerd[1906]: time="2025-12-16T12:29:36.744391128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:36.747458 containerd[1906]: time="2025-12-16T12:29:36.747430493Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Dec 16 12:29:36.750851 containerd[1906]: time="2025-12-16T12:29:36.750805787Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:36.756691 containerd[1906]: time="2025-12-16T12:29:36.756466337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:36.757067 containerd[1906]: time="2025-12-16T12:29:36.757043129Z" level=info msg="Pulled image 
\"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.656198609s" Dec 16 12:29:36.757214 containerd[1906]: time="2025-12-16T12:29:36.757068058Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Dec 16 12:29:36.758202 containerd[1906]: time="2025-12-16T12:29:36.758029661Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 12:29:37.322210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount701971933.mount: Deactivated successfully. Dec 16 12:29:37.342510 containerd[1906]: time="2025-12-16T12:29:37.342459132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:29:37.345334 containerd[1906]: time="2025-12-16T12:29:37.345158695Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Dec 16 12:29:37.348485 containerd[1906]: time="2025-12-16T12:29:37.348458603Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:29:37.352942 containerd[1906]: time="2025-12-16T12:29:37.352916496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 12:29:37.353539 containerd[1906]: 
time="2025-12-16T12:29:37.353229777Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 595.1747ms" Dec 16 12:29:37.353539 containerd[1906]: time="2025-12-16T12:29:37.353256041Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 16 12:29:37.353961 containerd[1906]: time="2025-12-16T12:29:37.353804897Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 16 12:29:38.007047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4250055349.mount: Deactivated successfully. Dec 16 12:29:40.081272 containerd[1906]: time="2025-12-16T12:29:40.080568331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:40.084177 containerd[1906]: time="2025-12-16T12:29:40.084138015Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013651" Dec 16 12:29:40.088335 containerd[1906]: time="2025-12-16T12:29:40.088304820Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:40.156958 containerd[1906]: time="2025-12-16T12:29:40.156901750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:29:40.158448 containerd[1906]: time="2025-12-16T12:29:40.158410952Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id 
\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.804580959s" Dec 16 12:29:40.158448 containerd[1906]: time="2025-12-16T12:29:40.158445209Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Dec 16 12:29:42.579720 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 16 12:29:42.583324 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:42.724334 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:29:42.731495 (kubelet)[2907]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 12:29:42.769137 kubelet[2907]: E1216 12:29:42.769086 2907 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 12:29:42.772317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 12:29:42.772417 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 12:29:42.772670 systemd[1]: kubelet.service: Consumed 114ms CPU time, 106.8M memory peak. Dec 16 12:29:43.230228 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:29:43.230632 systemd[1]: kubelet.service: Consumed 114ms CPU time, 106.8M memory peak. Dec 16 12:29:43.232476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 16 12:29:43.252668 systemd[1]: Reload requested from client PID 2921 ('systemctl') (unit session-9.scope)... Dec 16 12:29:43.252775 systemd[1]: Reloading... Dec 16 12:29:43.337182 zram_generator::config[2964]: No configuration found. Dec 16 12:29:43.500523 systemd[1]: Reloading finished in 247 ms. Dec 16 12:29:43.554518 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 16 12:29:43.554574 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 16 12:29:43.554768 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:29:43.554806 systemd[1]: kubelet.service: Consumed 73ms CPU time, 95M memory peak. Dec 16 12:29:43.555855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:43.774110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:29:43.785522 (kubelet)[3035]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:29:43.857818 kubelet[3035]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:29:43.857818 kubelet[3035]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:29:43.857818 kubelet[3035]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 12:29:43.857818 kubelet[3035]: I1216 12:29:43.856883 3035 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:29:44.298030 kubelet[3035]: I1216 12:29:44.297990 3035 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:29:44.298260 kubelet[3035]: I1216 12:29:44.298249 3035 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:29:44.298524 kubelet[3035]: I1216 12:29:44.298509 3035 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:29:44.312877 kubelet[3035]: I1216 12:29:44.312849 3035 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:29:44.315949 kubelet[3035]: E1216 12:29:44.315845 3035 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:29:44.322867 kubelet[3035]: I1216 12:29:44.322851 3035 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:29:44.325348 kubelet[3035]: I1216 12:29:44.325329 3035 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:29:44.325645 kubelet[3035]: I1216 12:29:44.325619 3035 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:29:44.325822 kubelet[3035]: I1216 12:29:44.325706 3035 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-e780e4b687","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:29:44.325944 kubelet[3035]: I1216 12:29:44.325932 3035 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 
12:29:44.325993 kubelet[3035]: I1216 12:29:44.325987 3035 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:29:44.326147 kubelet[3035]: I1216 12:29:44.326136 3035 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:29:44.328699 kubelet[3035]: I1216 12:29:44.328681 3035 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:29:44.328789 kubelet[3035]: I1216 12:29:44.328780 3035 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:29:44.328856 kubelet[3035]: I1216 12:29:44.328849 3035 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:29:44.328907 kubelet[3035]: I1216 12:29:44.328899 3035 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:29:44.330986 kubelet[3035]: E1216 12:29:44.330946 3035 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-e780e4b687&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:29:44.332892 kubelet[3035]: E1216 12:29:44.332866 3035 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:29:44.332958 kubelet[3035]: I1216 12:29:44.332936 3035 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:29:44.333314 kubelet[3035]: I1216 12:29:44.333290 3035 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 
12:29:44.333373 kubelet[3035]: W1216 12:29:44.333336 3035 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 12:29:44.335235 kubelet[3035]: I1216 12:29:44.335210 3035 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:29:44.335300 kubelet[3035]: I1216 12:29:44.335247 3035 server.go:1289] "Started kubelet" Dec 16 12:29:44.335433 kubelet[3035]: I1216 12:29:44.335412 3035 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:29:44.336066 kubelet[3035]: I1216 12:29:44.336049 3035 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:29:44.337965 kubelet[3035]: I1216 12:29:44.337906 3035 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:29:44.338235 kubelet[3035]: I1216 12:29:44.338214 3035 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:29:44.339677 kubelet[3035]: E1216 12:29:44.338316 3035 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-a-e780e4b687.1881b1f2945c5573 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-a-e780e4b687,UID:ci-4459.2.2-a-e780e4b687,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-a-e780e4b687,},FirstTimestamp:2025-12-16 12:29:44.335226227 +0000 UTC m=+0.546664221,LastTimestamp:2025-12-16 12:29:44.335226227 +0000 UTC m=+0.546664221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-a-e780e4b687,}" Dec 16 12:29:44.340377 kubelet[3035]: 
I1216 12:29:44.340347 3035 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:29:44.341822 kubelet[3035]: I1216 12:29:44.341351 3035 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:29:44.341822 kubelet[3035]: I1216 12:29:44.341451 3035 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:29:44.342298 kubelet[3035]: I1216 12:29:44.342276 3035 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:29:44.342356 kubelet[3035]: I1216 12:29:44.342318 3035 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:29:44.342734 kubelet[3035]: E1216 12:29:44.342705 3035 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:29:44.343325 kubelet[3035]: E1216 12:29:44.343302 3035 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 12:29:44.343625 kubelet[3035]: E1216 12:29:44.343599 3035 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-e780e4b687\" not found" Dec 16 12:29:44.343960 kubelet[3035]: I1216 12:29:44.343941 3035 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:29:44.344143 kubelet[3035]: I1216 12:29:44.344012 3035 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:29:44.344614 kubelet[3035]: E1216 12:29:44.344205 3035 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-e780e4b687?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="200ms" Dec 16 12:29:44.344899 kubelet[3035]: I1216 12:29:44.344880 3035 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:29:44.346621 kubelet[3035]: I1216 12:29:44.346600 3035 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:29:44.374614 kubelet[3035]: I1216 12:29:44.374586 3035 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:29:44.374614 kubelet[3035]: I1216 12:29:44.374606 3035 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:29:44.374764 kubelet[3035]: I1216 12:29:44.374624 3035 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:29:44.385009 kubelet[3035]: I1216 12:29:44.384357 3035 policy_none.go:49] "None policy: Start" Dec 16 12:29:44.385009 kubelet[3035]: I1216 12:29:44.384382 3035 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:29:44.385009 kubelet[3035]: I1216 12:29:44.384391 3035 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:29:44.385341 kubelet[3035]: I1216 12:29:44.385313 3035 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:29:44.385341 kubelet[3035]: I1216 12:29:44.385334 3035 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:29:44.385407 kubelet[3035]: I1216 12:29:44.385351 3035 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 12:29:44.385407 kubelet[3035]: I1216 12:29:44.385358 3035 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:29:44.385407 kubelet[3035]: E1216 12:29:44.385393 3035 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:29:44.386479 kubelet[3035]: E1216 12:29:44.385896 3035 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:29:44.397739 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 12:29:44.407450 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 12:29:44.410461 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 12:29:44.421767 kubelet[3035]: E1216 12:29:44.421743 3035 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:29:44.422029 kubelet[3035]: I1216 12:29:44.422013 3035 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:29:44.422122 kubelet[3035]: I1216 12:29:44.422089 3035 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:29:44.422561 kubelet[3035]: I1216 12:29:44.422543 3035 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:29:44.423877 kubelet[3035]: E1216 12:29:44.423862 3035 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 12:29:44.424027 kubelet[3035]: E1216 12:29:44.424003 3035 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.2-a-e780e4b687\" not found" Dec 16 12:29:44.498449 systemd[1]: Created slice kubepods-burstable-podbd449fe9a81e0112a44ebe3f32345186.slice - libcontainer container kubepods-burstable-podbd449fe9a81e0112a44ebe3f32345186.slice. Dec 16 12:29:44.512838 kubelet[3035]: E1216 12:29:44.512730 3035 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-e780e4b687\" not found" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:44.516743 systemd[1]: Created slice kubepods-burstable-pod038dcda36889cfeeb9d5354713f6123f.slice - libcontainer container kubepods-burstable-pod038dcda36889cfeeb9d5354713f6123f.slice. Dec 16 12:29:44.518917 kubelet[3035]: E1216 12:29:44.518901 3035 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-e780e4b687\" not found" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:44.519868 systemd[1]: Created slice kubepods-burstable-pod0f62a6e6442259888ad77f9d1cf3765c.slice - libcontainer container kubepods-burstable-pod0f62a6e6442259888ad77f9d1cf3765c.slice. 
Dec 16 12:29:44.521762 kubelet[3035]: E1216 12:29:44.521729 3035 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-e780e4b687\" not found" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:44.523640 kubelet[3035]: I1216 12:29:44.523627 3035 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:44.524108 kubelet[3035]: E1216 12:29:44.524087 3035 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:44.544567 kubelet[3035]: E1216 12:29:44.544530 3035 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-e780e4b687?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="400ms" Dec 16 12:29:44.644168 kubelet[3035]: I1216 12:29:44.644110 3035 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bd449fe9a81e0112a44ebe3f32345186-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-e780e4b687\" (UID: \"bd449fe9a81e0112a44ebe3f32345186\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:44.644480 kubelet[3035]: I1216 12:29:44.644350 3035 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/038dcda36889cfeeb9d5354713f6123f-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-e780e4b687\" (UID: \"038dcda36889cfeeb9d5354713f6123f\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:44.644480 kubelet[3035]: I1216 12:29:44.644377 3035 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/038dcda36889cfeeb9d5354713f6123f-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-e780e4b687\" (UID: \"038dcda36889cfeeb9d5354713f6123f\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:44.644480 kubelet[3035]: I1216 12:29:44.644412 3035 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/038dcda36889cfeeb9d5354713f6123f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-a-e780e4b687\" (UID: \"038dcda36889cfeeb9d5354713f6123f\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:44.644480 kubelet[3035]: I1216 12:29:44.644427 3035 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f62a6e6442259888ad77f9d1cf3765c-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-e780e4b687\" (UID: \"0f62a6e6442259888ad77f9d1cf3765c\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:44.644480 kubelet[3035]: I1216 12:29:44.644437 3035 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bd449fe9a81e0112a44ebe3f32345186-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-e780e4b687\" (UID: \"bd449fe9a81e0112a44ebe3f32345186\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:44.644600 kubelet[3035]: I1216 12:29:44.644448 3035 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/038dcda36889cfeeb9d5354713f6123f-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-e780e4b687\" (UID: \"038dcda36889cfeeb9d5354713f6123f\") " 
pod="kube-system/kube-controller-manager-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:44.644600 kubelet[3035]: I1216 12:29:44.644460 3035 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/038dcda36889cfeeb9d5354713f6123f-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-e780e4b687\" (UID: \"038dcda36889cfeeb9d5354713f6123f\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:44.644600 kubelet[3035]: I1216 12:29:44.644469 3035 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bd449fe9a81e0112a44ebe3f32345186-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-e780e4b687\" (UID: \"bd449fe9a81e0112a44ebe3f32345186\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:44.725971 kubelet[3035]: I1216 12:29:44.725931 3035 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:44.726492 kubelet[3035]: E1216 12:29:44.726462 3035 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:44.814889 containerd[1906]: time="2025-12-16T12:29:44.814831922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-e780e4b687,Uid:bd449fe9a81e0112a44ebe3f32345186,Namespace:kube-system,Attempt:0,}" Dec 16 12:29:44.820581 containerd[1906]: time="2025-12-16T12:29:44.820555347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-e780e4b687,Uid:038dcda36889cfeeb9d5354713f6123f,Namespace:kube-system,Attempt:0,}" Dec 16 12:29:44.823210 containerd[1906]: time="2025-12-16T12:29:44.823186917Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-e780e4b687,Uid:0f62a6e6442259888ad77f9d1cf3765c,Namespace:kube-system,Attempt:0,}" Dec 16 12:29:44.913910 containerd[1906]: time="2025-12-16T12:29:44.912211626Z" level=info msg="connecting to shim 6a955c1addb90f926e07c85441a5e86ae16af61e083e59996366e8d8e2d0c321" address="unix:///run/containerd/s/027cd439190f4c8e9abde699961b14074916d9d86927bea86711751a0e973143" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:44.925405 containerd[1906]: time="2025-12-16T12:29:44.925325042Z" level=info msg="connecting to shim df9ad3559ec3f8157d100792244752c671d7dc6d2e5917e12e0c8ccb293a58b9" address="unix:///run/containerd/s/67cb04ceb0147746950400b80077d5d298351ab6bb44fadd06c67969c5ea22c2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:44.934866 containerd[1906]: time="2025-12-16T12:29:44.934828636Z" level=info msg="connecting to shim 892cfe4baea3631a0bdb6fd385977cdac11c5069c0f82b4b7733a2ae19b55f0c" address="unix:///run/containerd/s/80713c32ddfb8c83f6f88746324c7b8ecab87b4c80e55b0c4c8b8626ff3c858c" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:29:44.942440 systemd[1]: Started cri-containerd-6a955c1addb90f926e07c85441a5e86ae16af61e083e59996366e8d8e2d0c321.scope - libcontainer container 6a955c1addb90f926e07c85441a5e86ae16af61e083e59996366e8d8e2d0c321. Dec 16 12:29:44.945315 kubelet[3035]: E1216 12:29:44.945287 3035 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-e780e4b687?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="800ms" Dec 16 12:29:44.955552 systemd[1]: Started cri-containerd-df9ad3559ec3f8157d100792244752c671d7dc6d2e5917e12e0c8ccb293a58b9.scope - libcontainer container df9ad3559ec3f8157d100792244752c671d7dc6d2e5917e12e0c8ccb293a58b9. 
Dec 16 12:29:44.959899 systemd[1]: Started cri-containerd-892cfe4baea3631a0bdb6fd385977cdac11c5069c0f82b4b7733a2ae19b55f0c.scope - libcontainer container 892cfe4baea3631a0bdb6fd385977cdac11c5069c0f82b4b7733a2ae19b55f0c. Dec 16 12:29:44.999673 containerd[1906]: time="2025-12-16T12:29:44.999541185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.2-a-e780e4b687,Uid:0f62a6e6442259888ad77f9d1cf3765c,Namespace:kube-system,Attempt:0,} returns sandbox id \"df9ad3559ec3f8157d100792244752c671d7dc6d2e5917e12e0c8ccb293a58b9\"" Dec 16 12:29:45.012197 containerd[1906]: time="2025-12-16T12:29:45.012128730Z" level=info msg="CreateContainer within sandbox \"df9ad3559ec3f8157d100792244752c671d7dc6d2e5917e12e0c8ccb293a58b9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 12:29:45.012503 containerd[1906]: time="2025-12-16T12:29:45.012476179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.2-a-e780e4b687,Uid:bd449fe9a81e0112a44ebe3f32345186,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a955c1addb90f926e07c85441a5e86ae16af61e083e59996366e8d8e2d0c321\"" Dec 16 12:29:45.033320 containerd[1906]: time="2025-12-16T12:29:45.033287531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.2-a-e780e4b687,Uid:038dcda36889cfeeb9d5354713f6123f,Namespace:kube-system,Attempt:0,} returns sandbox id \"892cfe4baea3631a0bdb6fd385977cdac11c5069c0f82b4b7733a2ae19b55f0c\"" Dec 16 12:29:45.036483 containerd[1906]: time="2025-12-16T12:29:45.036379794Z" level=info msg="Container 3bdc8b1509ab9e88974e95bdc922796cf6e9cb62fd81a8f33895a70514482a48: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:45.043697 containerd[1906]: time="2025-12-16T12:29:45.043665502Z" level=info msg="CreateContainer within sandbox \"6a955c1addb90f926e07c85441a5e86ae16af61e083e59996366e8d8e2d0c321\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 
12:29:45.049503 containerd[1906]: time="2025-12-16T12:29:45.049478721Z" level=info msg="CreateContainer within sandbox \"892cfe4baea3631a0bdb6fd385977cdac11c5069c0f82b4b7733a2ae19b55f0c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 12:29:45.099219 containerd[1906]: time="2025-12-16T12:29:45.099177586Z" level=info msg="CreateContainer within sandbox \"df9ad3559ec3f8157d100792244752c671d7dc6d2e5917e12e0c8ccb293a58b9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3bdc8b1509ab9e88974e95bdc922796cf6e9cb62fd81a8f33895a70514482a48\"" Dec 16 12:29:45.099837 containerd[1906]: time="2025-12-16T12:29:45.099813476Z" level=info msg="StartContainer for \"3bdc8b1509ab9e88974e95bdc922796cf6e9cb62fd81a8f33895a70514482a48\"" Dec 16 12:29:45.100834 containerd[1906]: time="2025-12-16T12:29:45.100808968Z" level=info msg="connecting to shim 3bdc8b1509ab9e88974e95bdc922796cf6e9cb62fd81a8f33895a70514482a48" address="unix:///run/containerd/s/67cb04ceb0147746950400b80077d5d298351ab6bb44fadd06c67969c5ea22c2" protocol=ttrpc version=3 Dec 16 12:29:45.116272 systemd[1]: Started cri-containerd-3bdc8b1509ab9e88974e95bdc922796cf6e9cb62fd81a8f33895a70514482a48.scope - libcontainer container 3bdc8b1509ab9e88974e95bdc922796cf6e9cb62fd81a8f33895a70514482a48. 
Dec 16 12:29:45.128712 kubelet[3035]: I1216 12:29:45.128686 3035 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:45.129087 kubelet[3035]: E1216 12:29:45.129047 3035 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:45.209274 kubelet[3035]: E1216 12:29:45.191522 3035 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.38:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.38:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.2-a-e780e4b687.1881b1f2945c5573 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.2-a-e780e4b687,UID:ci-4459.2.2-a-e780e4b687,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.2-a-e780e4b687,},FirstTimestamp:2025-12-16 12:29:44.335226227 +0000 UTC m=+0.546664221,LastTimestamp:2025-12-16 12:29:44.335226227 +0000 UTC m=+0.546664221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.2-a-e780e4b687,}" Dec 16 12:29:45.259561 containerd[1906]: time="2025-12-16T12:29:45.259425358Z" level=info msg="StartContainer for \"3bdc8b1509ab9e88974e95bdc922796cf6e9cb62fd81a8f33895a70514482a48\" returns successfully" Dec 16 12:29:45.394764 kubelet[3035]: E1216 12:29:45.394730 3035 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-e780e4b687\" not found" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:45.476902 kubelet[3035]: E1216 12:29:45.476750 3035 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:29:45.528550 kubelet[3035]: E1216 12:29:45.528501 3035 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:29:45.607166 containerd[1906]: time="2025-12-16T12:29:45.607112678Z" level=info msg="Container eb718674832c114572d8bf37f7eab89c797d91c6c06325424e2e942b7de65470: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:46.257098 kubelet[3035]: E1216 12:29:45.746306 3035 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-e780e4b687?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="1.6s" Dec 16 12:29:46.257098 kubelet[3035]: E1216 12:29:45.850704 3035 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:29:46.257098 kubelet[3035]: I1216 12:29:45.931527 3035 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:46.257098 kubelet[3035]: E1216 12:29:45.931861 3035 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection 
refused" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:46.257098 kubelet[3035]: E1216 12:29:45.933321 3035 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-e780e4b687&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:29:46.397944 kubelet[3035]: E1216 12:29:46.397797 3035 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.2-a-e780e4b687\" not found" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:46.406183 kubelet[3035]: E1216 12:29:46.406103 3035 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.38:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 16 12:29:46.814366 containerd[1906]: time="2025-12-16T12:29:46.814302290Z" level=info msg="Container e1031a0b645607d839c143a993fe5fc120017e163e3d0ab397fcc00cb0e801b4: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:29:47.347502 kubelet[3035]: E1216 12:29:47.347456 3035 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.2-a-e780e4b687?timeout=10s\": dial tcp 10.200.20.38:6443: connect: connection refused" interval="3.2s" Dec 16 12:29:47.534194 kubelet[3035]: I1216 12:29:47.534146 3035 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:47.534534 kubelet[3035]: E1216 12:29:47.534508 3035 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.200.20.38:6443/api/v1/nodes\": dial tcp 10.200.20.38:6443: connect: connection refused" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:47.583498 kubelet[3035]: E1216 12:29:47.583458 3035 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.38:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 16 12:29:47.998658 kubelet[3035]: E1216 12:29:47.998611 3035 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.38:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 16 12:29:48.003380 kubelet[3035]: E1216 12:29:48.003342 3035 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.38:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.2-a-e780e4b687&limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 16 12:29:48.074455 kubelet[3035]: E1216 12:29:48.074409 3035 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.38:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.38:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 16 12:29:48.505095 containerd[1906]: time="2025-12-16T12:29:48.505045679Z" level=info msg="CreateContainer within sandbox \"6a955c1addb90f926e07c85441a5e86ae16af61e083e59996366e8d8e2d0c321\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns 
container id \"e1031a0b645607d839c143a993fe5fc120017e163e3d0ab397fcc00cb0e801b4\"" Dec 16 12:29:48.506042 containerd[1906]: time="2025-12-16T12:29:48.505995738Z" level=info msg="StartContainer for \"e1031a0b645607d839c143a993fe5fc120017e163e3d0ab397fcc00cb0e801b4\"" Dec 16 12:29:48.507088 containerd[1906]: time="2025-12-16T12:29:48.507026471Z" level=info msg="CreateContainer within sandbox \"892cfe4baea3631a0bdb6fd385977cdac11c5069c0f82b4b7733a2ae19b55f0c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eb718674832c114572d8bf37f7eab89c797d91c6c06325424e2e942b7de65470\"" Dec 16 12:29:48.508704 containerd[1906]: time="2025-12-16T12:29:48.508594859Z" level=info msg="connecting to shim e1031a0b645607d839c143a993fe5fc120017e163e3d0ab397fcc00cb0e801b4" address="unix:///run/containerd/s/027cd439190f4c8e9abde699961b14074916d9d86927bea86711751a0e973143" protocol=ttrpc version=3 Dec 16 12:29:48.508977 containerd[1906]: time="2025-12-16T12:29:48.508959973Z" level=info msg="StartContainer for \"eb718674832c114572d8bf37f7eab89c797d91c6c06325424e2e942b7de65470\"" Dec 16 12:29:48.510703 containerd[1906]: time="2025-12-16T12:29:48.510665181Z" level=info msg="connecting to shim eb718674832c114572d8bf37f7eab89c797d91c6c06325424e2e942b7de65470" address="unix:///run/containerd/s/80713c32ddfb8c83f6f88746324c7b8ecab87b4c80e55b0c4c8b8626ff3c858c" protocol=ttrpc version=3 Dec 16 12:29:48.534279 systemd[1]: Started cri-containerd-e1031a0b645607d839c143a993fe5fc120017e163e3d0ab397fcc00cb0e801b4.scope - libcontainer container e1031a0b645607d839c143a993fe5fc120017e163e3d0ab397fcc00cb0e801b4. Dec 16 12:29:48.536430 systemd[1]: Started cri-containerd-eb718674832c114572d8bf37f7eab89c797d91c6c06325424e2e942b7de65470.scope - libcontainer container eb718674832c114572d8bf37f7eab89c797d91c6c06325424e2e942b7de65470. 
Dec 16 12:29:49.912280 kubelet[3035]: E1216 12:29:49.912232 3035 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4459.2.2-a-e780e4b687" not found Dec 16 12:29:50.271839 kubelet[3035]: E1216 12:29:50.271724 3035 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4459.2.2-a-e780e4b687" not found Dec 16 12:29:50.551959 kubelet[3035]: E1216 12:29:50.551747 3035 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.2-a-e780e4b687\" not found" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:50.705841 kubelet[3035]: E1216 12:29:50.705790 3035 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4459.2.2-a-e780e4b687" not found Dec 16 12:29:50.736580 kubelet[3035]: I1216 12:29:50.736553 3035 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:50.748988 kubelet[3035]: I1216 12:29:50.748953 3035 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:50.748988 kubelet[3035]: E1216 12:29:50.748987 3035 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.2.2-a-e780e4b687\": node \"ci-4459.2.2-a-e780e4b687\" not found" Dec 16 12:29:50.759332 kubelet[3035]: E1216 12:29:50.759308 3035 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-e780e4b687\" not found" Dec 16 12:29:50.860141 kubelet[3035]: E1216 12:29:50.860105 3035 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-e780e4b687\" not found" Dec 16 12:29:50.960617 kubelet[3035]: E1216 12:29:50.960578 3035 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"ci-4459.2.2-a-e780e4b687\" not found" Dec 16 12:29:51.061518 kubelet[3035]: E1216 12:29:51.061472 3035 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-e780e4b687\" not found" Dec 16 12:29:51.162309 kubelet[3035]: E1216 12:29:51.162190 3035 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-e780e4b687\" not found" Dec 16 12:29:51.263089 kubelet[3035]: E1216 12:29:51.263036 3035 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-e780e4b687\" not found" Dec 16 12:29:51.363501 kubelet[3035]: E1216 12:29:51.363455 3035 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-e780e4b687\" not found" Dec 16 12:29:51.465116 kubelet[3035]: E1216 12:29:51.464320 3035 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-e780e4b687\" not found" Dec 16 12:29:51.564956 kubelet[3035]: E1216 12:29:51.564908 3035 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-e780e4b687\" not found" Dec 16 12:29:51.665593 kubelet[3035]: E1216 12:29:51.665549 3035 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-e780e4b687\" not found" Dec 16 12:29:51.766342 kubelet[3035]: E1216 12:29:51.766226 3035 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-e780e4b687\" not found" Dec 16 12:29:51.867223 kubelet[3035]: E1216 12:29:51.867171 3035 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-e780e4b687\" not found" Dec 16 12:29:51.968058 kubelet[3035]: E1216 12:29:51.968005 3035 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.2-a-e780e4b687\" not found" Dec 16 12:29:52.048310 kubelet[3035]: I1216 
12:29:52.047969 3035 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:52.058763 kubelet[3035]: I1216 12:29:52.058740 3035 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:29:52.059077 kubelet[3035]: I1216 12:29:52.059037 3035 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:52.070166 kubelet[3035]: I1216 12:29:52.070042 3035 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:29:52.071492 kubelet[3035]: I1216 12:29:52.071299 3035 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:52.087200 kubelet[3035]: I1216 12:29:52.086918 3035 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:29:52.335261 kubelet[3035]: I1216 12:29:52.335203 3035 apiserver.go:52] "Watching apiserver" Dec 16 12:29:52.342769 kubelet[3035]: I1216 12:29:52.342741 3035 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:29:52.365275 systemd[1]: Reload requested from client PID 3316 ('systemctl') (unit session-9.scope)... Dec 16 12:29:52.365293 systemd[1]: Reloading... Dec 16 12:29:52.449324 zram_generator::config[3375]: No configuration found. Dec 16 12:29:52.604775 systemd[1]: Reloading finished in 239 ms. Dec 16 12:29:52.625036 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:52.640026 systemd[1]: kubelet.service: Deactivated successfully. 
Dec 16 12:29:52.640251 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:29:52.640310 systemd[1]: kubelet.service: Consumed 740ms CPU time, 124.7M memory peak. Dec 16 12:29:52.641824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 12:29:58.373283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 12:29:58.375472 (kubelet)[3427]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 12:29:58.412325 kubelet[3427]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 12:29:58.412325 kubelet[3427]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 12:29:58.412325 kubelet[3427]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 16 12:29:58.412738 kubelet[3427]: I1216 12:29:58.412352 3427 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 12:29:58.417356 kubelet[3427]: I1216 12:29:58.417326 3427 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 16 12:29:58.417356 kubelet[3427]: I1216 12:29:58.417350 3427 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 12:29:58.417515 kubelet[3427]: I1216 12:29:58.417497 3427 server.go:956] "Client rotation is on, will bootstrap in background" Dec 16 12:29:58.418433 kubelet[3427]: I1216 12:29:58.418413 3427 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 16 12:29:58.420439 kubelet[3427]: I1216 12:29:58.420176 3427 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 12:29:58.424408 kubelet[3427]: I1216 12:29:58.424393 3427 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 12:29:58.432832 kubelet[3427]: I1216 12:29:58.432789 3427 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 12:29:58.433043 kubelet[3427]: I1216 12:29:58.433017 3427 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 12:29:58.433201 kubelet[3427]: I1216 12:29:58.433051 3427 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.2-a-e780e4b687","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 12:29:58.433201 kubelet[3427]: I1216 12:29:58.433194 3427 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 
12:29:58.433201 kubelet[3427]: I1216 12:29:58.433201 3427 container_manager_linux.go:303] "Creating device plugin manager" Dec 16 12:29:58.433388 kubelet[3427]: I1216 12:29:58.433279 3427 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:29:58.433483 kubelet[3427]: I1216 12:29:58.433457 3427 kubelet.go:480] "Attempting to sync node with API server" Dec 16 12:29:58.433483 kubelet[3427]: I1216 12:29:58.433483 3427 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 12:29:58.433483 kubelet[3427]: I1216 12:29:58.433502 3427 kubelet.go:386] "Adding apiserver pod source" Dec 16 12:29:58.433483 kubelet[3427]: I1216 12:29:58.433515 3427 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 12:29:58.436202 kubelet[3427]: I1216 12:29:58.436187 3427 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 12:29:58.436653 kubelet[3427]: I1216 12:29:58.436637 3427 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 16 12:29:58.439678 kubelet[3427]: I1216 12:29:58.439635 3427 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 12:29:58.439770 kubelet[3427]: I1216 12:29:58.439761 3427 server.go:1289] "Started kubelet" Dec 16 12:29:58.441386 kubelet[3427]: I1216 12:29:58.441099 3427 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 12:29:58.451075 kubelet[3427]: I1216 12:29:58.450846 3427 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 12:29:58.452180 kubelet[3427]: I1216 12:29:58.452133 3427 server.go:317] "Adding debug handlers to kubelet server" Dec 16 12:29:58.454302 kubelet[3427]: I1216 12:29:58.454285 3427 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 12:29:58.454544 kubelet[3427]: E1216 12:29:58.454526 3427 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"ci-4459.2.2-a-e780e4b687\" not found" Dec 16 12:29:58.455025 kubelet[3427]: I1216 12:29:58.454762 3427 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 12:29:58.461495 kubelet[3427]: I1216 12:29:58.455208 3427 reconciler.go:26] "Reconciler: start to sync state" Dec 16 12:29:58.461495 kubelet[3427]: I1216 12:29:58.457337 3427 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 12:29:58.461495 kubelet[3427]: I1216 12:29:58.461240 3427 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 12:29:58.461495 kubelet[3427]: I1216 12:29:58.457492 3427 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 12:29:58.464134 kubelet[3427]: I1216 12:29:58.464115 3427 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 12:29:58.467380 kubelet[3427]: I1216 12:29:58.467012 3427 factory.go:223] Registration of the containerd container factory successfully Dec 16 12:29:58.467652 kubelet[3427]: I1216 12:29:58.467636 3427 factory.go:223] Registration of the systemd container factory successfully Dec 16 12:29:58.469291 kubelet[3427]: I1216 12:29:58.469270 3427 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 16 12:29:58.473792 kubelet[3427]: I1216 12:29:58.473761 3427 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 16 12:29:58.473792 kubelet[3427]: I1216 12:29:58.473784 3427 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 16 12:29:58.473877 kubelet[3427]: I1216 12:29:58.473807 3427 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 12:29:58.473877 kubelet[3427]: I1216 12:29:58.473812 3427 kubelet.go:2436] "Starting kubelet main sync loop" Dec 16 12:29:58.473877 kubelet[3427]: E1216 12:29:58.473846 3427 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 12:29:58.498334 containerd[1906]: time="2025-12-16T12:29:58.498290864Z" level=info msg="StartContainer for \"e1031a0b645607d839c143a993fe5fc120017e163e3d0ab397fcc00cb0e801b4\" returns successfully" Dec 16 12:29:58.500604 containerd[1906]: time="2025-12-16T12:29:58.500562232Z" level=info msg="StartContainer for \"eb718674832c114572d8bf37f7eab89c797d91c6c06325424e2e942b7de65470\" returns successfully" Dec 16 12:29:58.514079 kubelet[3427]: I1216 12:29:58.514060 3427 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 12:29:58.514535 kubelet[3427]: I1216 12:29:58.514262 3427 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 12:29:58.514535 kubelet[3427]: I1216 12:29:58.514283 3427 state_mem.go:36] "Initialized new in-memory state store" Dec 16 12:29:58.514535 kubelet[3427]: I1216 12:29:58.514380 3427 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 12:29:58.514535 kubelet[3427]: I1216 12:29:58.514388 3427 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 12:29:58.514535 kubelet[3427]: I1216 12:29:58.514400 3427 policy_none.go:49] "None policy: Start" Dec 16 12:29:58.514535 kubelet[3427]: I1216 12:29:58.514408 3427 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 12:29:58.514535 kubelet[3427]: I1216 12:29:58.514414 
3427 state_mem.go:35] "Initializing new in-memory state store" Dec 16 12:29:58.514535 kubelet[3427]: I1216 12:29:58.514477 3427 state_mem.go:75] "Updated machine memory state" Dec 16 12:29:58.518063 kubelet[3427]: E1216 12:29:58.518034 3427 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 16 12:29:58.518218 kubelet[3427]: I1216 12:29:58.518202 3427 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 12:29:58.518270 kubelet[3427]: I1216 12:29:58.518217 3427 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 12:29:58.518760 kubelet[3427]: I1216 12:29:58.518743 3427 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 12:29:58.520375 kubelet[3427]: E1216 12:29:58.519599 3427 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 12:29:58.575413 kubelet[3427]: I1216 12:29:58.575256 3427 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:58.575663 kubelet[3427]: I1216 12:29:58.575259 3427 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:58.577079 kubelet[3427]: I1216 12:29:58.575329 3427 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:58.587182 kubelet[3427]: I1216 12:29:58.587051 3427 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:29:58.587182 kubelet[3427]: E1216 12:29:58.587104 3427 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.2-a-e780e4b687\" already exists" 
pod="kube-system/kube-controller-manager-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:58.587717 kubelet[3427]: I1216 12:29:58.587701 3427 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:29:58.587820 kubelet[3427]: E1216 12:29:58.587809 3427 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-e780e4b687\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:58.587888 kubelet[3427]: I1216 12:29:58.587713 3427 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:29:58.587948 kubelet[3427]: E1216 12:29:58.587914 3427 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-e780e4b687\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:58.621310 kubelet[3427]: I1216 12:29:58.620709 3427 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:58.631910 kubelet[3427]: I1216 12:29:58.631827 3427 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:58.632060 kubelet[3427]: I1216 12:29:58.632049 3427 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.2-a-e780e4b687" Dec 16 12:29:58.632130 kubelet[3427]: I1216 12:29:58.632122 3427 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 12:29:58.632589 containerd[1906]: time="2025-12-16T12:29:58.632549599Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 16 12:29:58.632898 kubelet[3427]: I1216 12:29:58.632889 3427 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 12:29:58.763020 kubelet[3427]: I1216 12:29:58.762986 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/038dcda36889cfeeb9d5354713f6123f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.2-a-e780e4b687\" (UID: \"038dcda36889cfeeb9d5354713f6123f\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:58.763020 kubelet[3427]: I1216 12:29:58.763020 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f62a6e6442259888ad77f9d1cf3765c-kubeconfig\") pod \"kube-scheduler-ci-4459.2.2-a-e780e4b687\" (UID: \"0f62a6e6442259888ad77f9d1cf3765c\") " pod="kube-system/kube-scheduler-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:58.763196 kubelet[3427]: I1216 12:29:58.763034 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bd449fe9a81e0112a44ebe3f32345186-ca-certs\") pod \"kube-apiserver-ci-4459.2.2-a-e780e4b687\" (UID: \"bd449fe9a81e0112a44ebe3f32345186\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:58.763196 kubelet[3427]: I1216 12:29:58.763046 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bd449fe9a81e0112a44ebe3f32345186-k8s-certs\") pod \"kube-apiserver-ci-4459.2.2-a-e780e4b687\" (UID: \"bd449fe9a81e0112a44ebe3f32345186\") " pod="kube-system/kube-apiserver-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:58.763196 kubelet[3427]: I1216 12:29:58.763057 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/038dcda36889cfeeb9d5354713f6123f-ca-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-e780e4b687\" (UID: \"038dcda36889cfeeb9d5354713f6123f\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:58.763196 kubelet[3427]: I1216 12:29:58.763069 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/038dcda36889cfeeb9d5354713f6123f-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.2-a-e780e4b687\" (UID: \"038dcda36889cfeeb9d5354713f6123f\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:58.763196 kubelet[3427]: I1216 12:29:58.763077 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/038dcda36889cfeeb9d5354713f6123f-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.2-a-e780e4b687\" (UID: \"038dcda36889cfeeb9d5354713f6123f\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:58.763285 kubelet[3427]: I1216 12:29:58.763086 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/038dcda36889cfeeb9d5354713f6123f-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.2-a-e780e4b687\" (UID: \"038dcda36889cfeeb9d5354713f6123f\") " pod="kube-system/kube-controller-manager-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:58.763285 kubelet[3427]: I1216 12:29:58.763098 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bd449fe9a81e0112a44ebe3f32345186-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.2-a-e780e4b687\" (UID: \"bd449fe9a81e0112a44ebe3f32345186\") " 
pod="kube-system/kube-apiserver-ci-4459.2.2-a-e780e4b687" Dec 16 12:29:59.436306 kubelet[3427]: I1216 12:29:59.436255 3427 apiserver.go:52] "Watching apiserver" Dec 16 12:29:59.461735 kubelet[3427]: I1216 12:29:59.461679 3427 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 12:29:59.466166 kubelet[3427]: I1216 12:29:59.465923 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88cda4ba-9481-4ed3-ab05-307c926ef60a-xtables-lock\") pod \"kube-proxy-k42l2\" (UID: \"88cda4ba-9481-4ed3-ab05-307c926ef60a\") " pod="kube-system/kube-proxy-k42l2" Dec 16 12:29:59.466166 kubelet[3427]: I1216 12:29:59.465966 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88cda4ba-9481-4ed3-ab05-307c926ef60a-lib-modules\") pod \"kube-proxy-k42l2\" (UID: \"88cda4ba-9481-4ed3-ab05-307c926ef60a\") " pod="kube-system/kube-proxy-k42l2" Dec 16 12:29:59.466166 kubelet[3427]: I1216 12:29:59.465980 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/88cda4ba-9481-4ed3-ab05-307c926ef60a-kube-proxy\") pod \"kube-proxy-k42l2\" (UID: \"88cda4ba-9481-4ed3-ab05-307c926ef60a\") " pod="kube-system/kube-proxy-k42l2" Dec 16 12:29:59.466166 kubelet[3427]: I1216 12:29:59.465991 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rxcj\" (UniqueName: \"kubernetes.io/projected/88cda4ba-9481-4ed3-ab05-307c926ef60a-kube-api-access-2rxcj\") pod \"kube-proxy-k42l2\" (UID: \"88cda4ba-9481-4ed3-ab05-307c926ef60a\") " pod="kube-system/kube-proxy-k42l2" Dec 16 12:29:59.566908 kubelet[3427]: E1216 12:29:59.566856 3427 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: object 
"kube-system"/"kube-proxy" not registered Dec 16 12:29:59.567065 kubelet[3427]: E1216 12:29:59.566954 3427 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/88cda4ba-9481-4ed3-ab05-307c926ef60a-kube-proxy podName:88cda4ba-9481-4ed3-ab05-307c926ef60a nodeName:}" failed. No retries permitted until 2025-12-16 12:30:00.06693412 +0000 UTC m=+1.688191685 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/88cda4ba-9481-4ed3-ab05-307c926ef60a-kube-proxy") pod "kube-proxy-k42l2" (UID: "88cda4ba-9481-4ed3-ab05-307c926ef60a") : object "kube-system"/"kube-proxy" not registered Dec 16 12:29:59.574289 kubelet[3427]: E1216 12:29:59.574211 3427 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: object "kube-system"/"kube-root-ca.crt" not registered Dec 16 12:29:59.574651 kubelet[3427]: E1216 12:29:59.574262 3427 projected.go:194] Error preparing data for projected volume kube-api-access-2rxcj for pod kube-system/kube-proxy-k42l2: object "kube-system"/"kube-root-ca.crt" not registered Dec 16 12:29:59.574824 kubelet[3427]: E1216 12:29:59.574804 3427 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/88cda4ba-9481-4ed3-ab05-307c926ef60a-kube-api-access-2rxcj podName:88cda4ba-9481-4ed3-ab05-307c926ef60a nodeName:}" failed. No retries permitted until 2025-12-16 12:30:00.074493579 +0000 UTC m=+1.695751184 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2rxcj" (UniqueName: "kubernetes.io/projected/88cda4ba-9481-4ed3-ab05-307c926ef60a-kube-api-access-2rxcj") pod "kube-proxy-k42l2" (UID: "88cda4ba-9481-4ed3-ab05-307c926ef60a") : object "kube-system"/"kube-root-ca.crt" not registered Dec 16 12:30:00.069899 kubelet[3427]: E1216 12:30:00.069853 3427 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered Dec 16 12:30:00.070065 kubelet[3427]: E1216 12:30:00.069929 3427 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/88cda4ba-9481-4ed3-ab05-307c926ef60a-kube-proxy podName:88cda4ba-9481-4ed3-ab05-307c926ef60a nodeName:}" failed. No retries permitted until 2025-12-16 12:30:01.069914267 +0000 UTC m=+2.691171832 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/88cda4ba-9481-4ed3-ab05-307c926ef60a-kube-proxy") pod "kube-proxy-k42l2" (UID: "88cda4ba-9481-4ed3-ab05-307c926ef60a") : object "kube-system"/"kube-proxy" not registered Dec 16 12:30:00.170841 kubelet[3427]: E1216 12:30:00.170796 3427 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: object "kube-system"/"kube-root-ca.crt" not registered Dec 16 12:30:00.170841 kubelet[3427]: E1216 12:30:00.170834 3427 projected.go:194] Error preparing data for projected volume kube-api-access-2rxcj for pod kube-system/kube-proxy-k42l2: object "kube-system"/"kube-root-ca.crt" not registered Dec 16 12:30:00.171022 kubelet[3427]: E1216 12:30:00.170882 3427 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/88cda4ba-9481-4ed3-ab05-307c926ef60a-kube-api-access-2rxcj podName:88cda4ba-9481-4ed3-ab05-307c926ef60a nodeName:}" failed. No retries permitted until 2025-12-16 12:30:01.170868936 +0000 UTC m=+2.792126509 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2rxcj" (UniqueName: "kubernetes.io/projected/88cda4ba-9481-4ed3-ab05-307c926ef60a-kube-api-access-2rxcj") pod "kube-proxy-k42l2" (UID: "88cda4ba-9481-4ed3-ab05-307c926ef60a") : object "kube-system"/"kube-root-ca.crt" not registered Dec 16 12:30:00.956995 kubelet[3427]: I1216 12:30:00.956488 3427 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.2-a-e780e4b687" Dec 16 12:30:00.957945 kubelet[3427]: I1216 12:30:00.957886 3427 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.2-a-e780e4b687" Dec 16 12:30:00.963230 systemd[1]: Created slice kubepods-besteffort-pod88cda4ba_9481_4ed3_ab05_307c926ef60a.slice - libcontainer container kubepods-besteffort-pod88cda4ba_9481_4ed3_ab05_307c926ef60a.slice. Dec 16 12:30:00.973137 kubelet[3427]: I1216 12:30:00.973107 3427 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:30:00.973226 kubelet[3427]: E1216 12:30:00.973172 3427 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.2-a-e780e4b687\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.2-a-e780e4b687" Dec 16 12:30:00.973336 kubelet[3427]: I1216 12:30:00.973318 3427 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Dec 16 12:30:00.973387 kubelet[3427]: E1216 12:30:00.973343 3427 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.2-a-e780e4b687\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.2-a-e780e4b687" Dec 16 12:30:00.992992 kubelet[3427]: I1216 12:30:00.992926 3427 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-4459.2.2-a-e780e4b687" podStartSLOduration=8.992913622 podStartE2EDuration="8.992913622s" podCreationTimestamp="2025-12-16 12:29:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:30:00.983890266 +0000 UTC m=+2.605147831" watchObservedRunningTime="2025-12-16 12:30:00.992913622 +0000 UTC m=+2.614171187" Dec 16 12:30:00.993124 kubelet[3427]: I1216 12:30:00.993044 3427 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.2-a-e780e4b687" podStartSLOduration=8.993039649 podStartE2EDuration="8.993039649s" podCreationTimestamp="2025-12-16 12:29:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:30:00.992712576 +0000 UTC m=+2.613970141" watchObservedRunningTime="2025-12-16 12:30:00.993039649 +0000 UTC m=+2.614297222" Dec 16 12:30:01.021257 kubelet[3427]: I1216 12:30:01.020968 3427 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.2-a-e780e4b687" podStartSLOduration=9.020953245 podStartE2EDuration="9.020953245s" podCreationTimestamp="2025-12-16 12:29:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:30:01.007104098 +0000 UTC m=+2.628361663" watchObservedRunningTime="2025-12-16 12:30:01.020953245 +0000 UTC m=+2.642210810" Dec 16 12:30:01.274625 containerd[1906]: time="2025-12-16T12:30:01.274509873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k42l2,Uid:88cda4ba-9481-4ed3-ab05-307c926ef60a,Namespace:kube-system,Attempt:0,}" Dec 16 12:30:02.019826 systemd[1]: Created slice kubepods-besteffort-pode8b16bda_604c_42e4_8371_446303d9231f.slice - libcontainer container 
kubepods-besteffort-pode8b16bda_604c_42e4_8371_446303d9231f.slice. Dec 16 12:30:02.084157 kubelet[3427]: I1216 12:30:02.084057 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh46r\" (UniqueName: \"kubernetes.io/projected/e8b16bda-604c-42e4-8371-446303d9231f-kube-api-access-mh46r\") pod \"tigera-operator-7dcd859c48-94dn6\" (UID: \"e8b16bda-604c-42e4-8371-446303d9231f\") " pod="tigera-operator/tigera-operator-7dcd859c48-94dn6" Dec 16 12:30:02.084524 kubelet[3427]: I1216 12:30:02.084147 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e8b16bda-604c-42e4-8371-446303d9231f-var-lib-calico\") pod \"tigera-operator-7dcd859c48-94dn6\" (UID: \"e8b16bda-604c-42e4-8371-446303d9231f\") " pod="tigera-operator/tigera-operator-7dcd859c48-94dn6" Dec 16 12:30:02.323127 containerd[1906]: time="2025-12-16T12:30:02.322852295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-94dn6,Uid:e8b16bda-604c-42e4-8371-446303d9231f,Namespace:tigera-operator,Attempt:0,}" Dec 16 12:30:02.369841 containerd[1906]: time="2025-12-16T12:30:02.369794381Z" level=info msg="connecting to shim 0f3235f9443ae8ef64b27ec6f1e0542dd4a3763c95d4f6b635fbec4c39cebe53" address="unix:///run/containerd/s/971cdfad03bb6f83f019c13b515eb60fda6aa0fe1f73894f8c6cf12ed6acfa82" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:02.390287 systemd[1]: Started cri-containerd-0f3235f9443ae8ef64b27ec6f1e0542dd4a3763c95d4f6b635fbec4c39cebe53.scope - libcontainer container 0f3235f9443ae8ef64b27ec6f1e0542dd4a3763c95d4f6b635fbec4c39cebe53. 
Dec 16 12:30:02.456939 containerd[1906]: time="2025-12-16T12:30:02.456905391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k42l2,Uid:88cda4ba-9481-4ed3-ab05-307c926ef60a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f3235f9443ae8ef64b27ec6f1e0542dd4a3763c95d4f6b635fbec4c39cebe53\"" Dec 16 12:30:02.507475 containerd[1906]: time="2025-12-16T12:30:02.507438076Z" level=info msg="CreateContainer within sandbox \"0f3235f9443ae8ef64b27ec6f1e0542dd4a3763c95d4f6b635fbec4c39cebe53\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 12:30:02.816969 containerd[1906]: time="2025-12-16T12:30:02.816925710Z" level=info msg="connecting to shim 85d37995a9904ebbb06cdfadb6c64f2a3227d7832199ec0eafa8000b8cd940c0" address="unix:///run/containerd/s/025591eb3cd8770addf49332f9a63133e09196809291d4f636349710b0c93fdc" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:02.857019 containerd[1906]: time="2025-12-16T12:30:02.856912168Z" level=info msg="Container 7dc7e931e8f3c9eba403aeda94e8d4aafedb4d38ef9b9a8db05719255658a3d0: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:02.873281 systemd[1]: Started cri-containerd-85d37995a9904ebbb06cdfadb6c64f2a3227d7832199ec0eafa8000b8cd940c0.scope - libcontainer container 85d37995a9904ebbb06cdfadb6c64f2a3227d7832199ec0eafa8000b8cd940c0. 
Dec 16 12:30:02.965059 containerd[1906]: time="2025-12-16T12:30:02.964907357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-94dn6,Uid:e8b16bda-604c-42e4-8371-446303d9231f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"85d37995a9904ebbb06cdfadb6c64f2a3227d7832199ec0eafa8000b8cd940c0\"" Dec 16 12:30:02.967479 containerd[1906]: time="2025-12-16T12:30:02.967371586Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 16 12:30:03.061078 containerd[1906]: time="2025-12-16T12:30:03.060954036Z" level=info msg="CreateContainer within sandbox \"0f3235f9443ae8ef64b27ec6f1e0542dd4a3763c95d4f6b635fbec4c39cebe53\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7dc7e931e8f3c9eba403aeda94e8d4aafedb4d38ef9b9a8db05719255658a3d0\"" Dec 16 12:30:03.061647 containerd[1906]: time="2025-12-16T12:30:03.061616140Z" level=info msg="StartContainer for \"7dc7e931e8f3c9eba403aeda94e8d4aafedb4d38ef9b9a8db05719255658a3d0\"" Dec 16 12:30:03.063943 containerd[1906]: time="2025-12-16T12:30:03.063913365Z" level=info msg="connecting to shim 7dc7e931e8f3c9eba403aeda94e8d4aafedb4d38ef9b9a8db05719255658a3d0" address="unix:///run/containerd/s/971cdfad03bb6f83f019c13b515eb60fda6aa0fe1f73894f8c6cf12ed6acfa82" protocol=ttrpc version=3 Dec 16 12:30:03.084280 systemd[1]: Started cri-containerd-7dc7e931e8f3c9eba403aeda94e8d4aafedb4d38ef9b9a8db05719255658a3d0.scope - libcontainer container 7dc7e931e8f3c9eba403aeda94e8d4aafedb4d38ef9b9a8db05719255658a3d0. 
Dec 16 12:30:03.156178 containerd[1906]: time="2025-12-16T12:30:03.156124365Z" level=info msg="StartContainer for \"7dc7e931e8f3c9eba403aeda94e8d4aafedb4d38ef9b9a8db05719255658a3d0\" returns successfully"
Dec 16 12:30:03.642416 kubelet[3427]: I1216 12:30:03.642355 3427 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k42l2" podStartSLOduration=4.642338585 podStartE2EDuration="4.642338585s" podCreationTimestamp="2025-12-16 12:29:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:30:03.536271948 +0000 UTC m=+5.157529513" watchObservedRunningTime="2025-12-16 12:30:03.642338585 +0000 UTC m=+5.263596150"
Dec 16 12:30:05.273621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2266245874.mount: Deactivated successfully.
Dec 16 12:30:06.025978 containerd[1906]: time="2025-12-16T12:30:06.025913407Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:06.029190 containerd[1906]: time="2025-12-16T12:30:06.029138830Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004"
Dec 16 12:30:06.032357 containerd[1906]: time="2025-12-16T12:30:06.032326045Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:06.037722 containerd[1906]: time="2025-12-16T12:30:06.037308280Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:06.037722 containerd[1906]: time="2025-12-16T12:30:06.037612015Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 3.069874325s"
Dec 16 12:30:06.037722 containerd[1906]: time="2025-12-16T12:30:06.037634056Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\""
Dec 16 12:30:06.046018 containerd[1906]: time="2025-12-16T12:30:06.045986957Z" level=info msg="CreateContainer within sandbox \"85d37995a9904ebbb06cdfadb6c64f2a3227d7832199ec0eafa8000b8cd940c0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 16 12:30:06.068940 containerd[1906]: time="2025-12-16T12:30:06.068533729Z" level=info msg="Container 303cf36fc252919365b10c5ce53dc6a1873c7b5b05c54b30a9cc7e9bcfa260cf: CDI devices from CRI Config.CDIDevices: []"
Dec 16 12:30:06.084217 containerd[1906]: time="2025-12-16T12:30:06.084126905Z" level=info msg="CreateContainer within sandbox \"85d37995a9904ebbb06cdfadb6c64f2a3227d7832199ec0eafa8000b8cd940c0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"303cf36fc252919365b10c5ce53dc6a1873c7b5b05c54b30a9cc7e9bcfa260cf\""
Dec 16 12:30:06.085400 containerd[1906]: time="2025-12-16T12:30:06.084768689Z" level=info msg="StartContainer for \"303cf36fc252919365b10c5ce53dc6a1873c7b5b05c54b30a9cc7e9bcfa260cf\""
Dec 16 12:30:06.085955 containerd[1906]: time="2025-12-16T12:30:06.085867652Z" level=info msg="connecting to shim 303cf36fc252919365b10c5ce53dc6a1873c7b5b05c54b30a9cc7e9bcfa260cf" address="unix:///run/containerd/s/025591eb3cd8770addf49332f9a63133e09196809291d4f636349710b0c93fdc" protocol=ttrpc version=3
Dec 16 12:30:06.104298 systemd[1]: Started cri-containerd-303cf36fc252919365b10c5ce53dc6a1873c7b5b05c54b30a9cc7e9bcfa260cf.scope - libcontainer container 303cf36fc252919365b10c5ce53dc6a1873c7b5b05c54b30a9cc7e9bcfa260cf.
Dec 16 12:30:06.150487 containerd[1906]: time="2025-12-16T12:30:06.150450924Z" level=info msg="StartContainer for \"303cf36fc252919365b10c5ce53dc6a1873c7b5b05c54b30a9cc7e9bcfa260cf\" returns successfully"
Dec 16 12:30:08.704874 kubelet[3427]: I1216 12:30:08.704813 3427 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-94dn6" podStartSLOduration=4.633145768 podStartE2EDuration="7.704789912s" podCreationTimestamp="2025-12-16 12:30:01 +0000 UTC" firstStartedPulling="2025-12-16 12:30:02.966816524 +0000 UTC m=+4.588074089" lastFinishedPulling="2025-12-16 12:30:06.038460668 +0000 UTC m=+7.659718233" observedRunningTime="2025-12-16 12:30:06.541463214 +0000 UTC m=+8.162720779" watchObservedRunningTime="2025-12-16 12:30:08.704789912 +0000 UTC m=+10.326047477"
Dec 16 12:30:11.198615 sudo[2387]: pam_unix(sudo:session): session closed for user root
Dec 16 12:30:11.275631 sshd[2386]: Connection closed by 10.200.16.10 port 47756
Dec 16 12:30:11.277345 sshd-session[2383]: pam_unix(sshd:session): session closed for user core
Dec 16 12:30:11.280781 systemd[1]: sshd@6-10.200.20.38:22-10.200.16.10:47756.service: Deactivated successfully.
Dec 16 12:30:11.286515 systemd[1]: session-9.scope: Deactivated successfully.
Dec 16 12:30:11.286928 systemd[1]: session-9.scope: Consumed 3.911s CPU time, 222.1M memory peak.
Dec 16 12:30:11.288920 systemd-logind[1872]: Session 9 logged out. Waiting for processes to exit.
Dec 16 12:30:11.291845 systemd-logind[1872]: Removed session 9.
Dec 16 12:30:18.207507 systemd[1]: Created slice kubepods-besteffort-podc3b0e5f1_73f7_417d_8264_78e4886694a8.slice - libcontainer container kubepods-besteffort-podc3b0e5f1_73f7_417d_8264_78e4886694a8.slice.
Dec 16 12:30:18.284481 kubelet[3427]: I1216 12:30:18.284358 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3b0e5f1-73f7-417d-8264-78e4886694a8-tigera-ca-bundle\") pod \"calico-typha-6d488b7c8-ffg4b\" (UID: \"c3b0e5f1-73f7-417d-8264-78e4886694a8\") " pod="calico-system/calico-typha-6d488b7c8-ffg4b"
Dec 16 12:30:18.284481 kubelet[3427]: I1216 12:30:18.284404 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg75t\" (UniqueName: \"kubernetes.io/projected/c3b0e5f1-73f7-417d-8264-78e4886694a8-kube-api-access-sg75t\") pod \"calico-typha-6d488b7c8-ffg4b\" (UID: \"c3b0e5f1-73f7-417d-8264-78e4886694a8\") " pod="calico-system/calico-typha-6d488b7c8-ffg4b"
Dec 16 12:30:18.284481 kubelet[3427]: I1216 12:30:18.284420 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c3b0e5f1-73f7-417d-8264-78e4886694a8-typha-certs\") pod \"calico-typha-6d488b7c8-ffg4b\" (UID: \"c3b0e5f1-73f7-417d-8264-78e4886694a8\") " pod="calico-system/calico-typha-6d488b7c8-ffg4b"
Dec 16 12:30:18.434771 systemd[1]: Created slice kubepods-besteffort-pod773c6761_1946_4cb1_88b8_b9afad062843.slice - libcontainer container kubepods-besteffort-pod773c6761_1946_4cb1_88b8_b9afad062843.slice.
Dec 16 12:30:18.486228 kubelet[3427]: I1216 12:30:18.485891 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/773c6761-1946-4cb1-88b8-b9afad062843-tigera-ca-bundle\") pod \"calico-node-82mnf\" (UID: \"773c6761-1946-4cb1-88b8-b9afad062843\") " pod="calico-system/calico-node-82mnf"
Dec 16 12:30:18.486228 kubelet[3427]: I1216 12:30:18.485933 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/773c6761-1946-4cb1-88b8-b9afad062843-cni-bin-dir\") pod \"calico-node-82mnf\" (UID: \"773c6761-1946-4cb1-88b8-b9afad062843\") " pod="calico-system/calico-node-82mnf"
Dec 16 12:30:18.486228 kubelet[3427]: I1216 12:30:18.485944 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/773c6761-1946-4cb1-88b8-b9afad062843-flexvol-driver-host\") pod \"calico-node-82mnf\" (UID: \"773c6761-1946-4cb1-88b8-b9afad062843\") " pod="calico-system/calico-node-82mnf"
Dec 16 12:30:18.486228 kubelet[3427]: I1216 12:30:18.485958 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/773c6761-1946-4cb1-88b8-b9afad062843-node-certs\") pod \"calico-node-82mnf\" (UID: \"773c6761-1946-4cb1-88b8-b9afad062843\") " pod="calico-system/calico-node-82mnf"
Dec 16 12:30:18.486228 kubelet[3427]: I1216 12:30:18.485968 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/773c6761-1946-4cb1-88b8-b9afad062843-var-run-calico\") pod \"calico-node-82mnf\" (UID: \"773c6761-1946-4cb1-88b8-b9afad062843\") " pod="calico-system/calico-node-82mnf"
Dec 16 12:30:18.486580 kubelet[3427]: I1216 12:30:18.485977 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/773c6761-1946-4cb1-88b8-b9afad062843-xtables-lock\") pod \"calico-node-82mnf\" (UID: \"773c6761-1946-4cb1-88b8-b9afad062843\") " pod="calico-system/calico-node-82mnf"
Dec 16 12:30:18.486580 kubelet[3427]: I1216 12:30:18.485989 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/773c6761-1946-4cb1-88b8-b9afad062843-var-lib-calico\") pod \"calico-node-82mnf\" (UID: \"773c6761-1946-4cb1-88b8-b9afad062843\") " pod="calico-system/calico-node-82mnf"
Dec 16 12:30:18.486580 kubelet[3427]: I1216 12:30:18.485998 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/773c6761-1946-4cb1-88b8-b9afad062843-cni-net-dir\") pod \"calico-node-82mnf\" (UID: \"773c6761-1946-4cb1-88b8-b9afad062843\") " pod="calico-system/calico-node-82mnf"
Dec 16 12:30:18.486580 kubelet[3427]: I1216 12:30:18.486008 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/773c6761-1946-4cb1-88b8-b9afad062843-policysync\") pod \"calico-node-82mnf\" (UID: \"773c6761-1946-4cb1-88b8-b9afad062843\") " pod="calico-system/calico-node-82mnf"
Dec 16 12:30:18.486580 kubelet[3427]: I1216 12:30:18.486019 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/773c6761-1946-4cb1-88b8-b9afad062843-cni-log-dir\") pod \"calico-node-82mnf\" (UID: \"773c6761-1946-4cb1-88b8-b9afad062843\") " pod="calico-system/calico-node-82mnf"
Dec 16 12:30:18.486668 kubelet[3427]: I1216 12:30:18.486028 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/773c6761-1946-4cb1-88b8-b9afad062843-lib-modules\") pod \"calico-node-82mnf\" (UID: \"773c6761-1946-4cb1-88b8-b9afad062843\") " pod="calico-system/calico-node-82mnf"
Dec 16 12:30:18.486668 kubelet[3427]: I1216 12:30:18.486040 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twsws\" (UniqueName: \"kubernetes.io/projected/773c6761-1946-4cb1-88b8-b9afad062843-kube-api-access-twsws\") pod \"calico-node-82mnf\" (UID: \"773c6761-1946-4cb1-88b8-b9afad062843\") " pod="calico-system/calico-node-82mnf"
Dec 16 12:30:18.511916 containerd[1906]: time="2025-12-16T12:30:18.511578619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d488b7c8-ffg4b,Uid:c3b0e5f1-73f7-417d-8264-78e4886694a8,Namespace:calico-system,Attempt:0,}"
Dec 16 12:30:18.568736 containerd[1906]: time="2025-12-16T12:30:18.568200318Z" level=info msg="connecting to shim 8f111b50f64bcf90a6384bd24daa9806dac1e16418d573c40f6725fe2a653b3a" address="unix:///run/containerd/s/fa6c97b08a741e63473a801d117b2309825c76f07d5e213315f0c8c678af0c19" namespace=k8s.io protocol=ttrpc version=3
Dec 16 12:30:18.594458 kubelet[3427]: E1216 12:30:18.594432 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.594458 kubelet[3427]: W1216 12:30:18.594450 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.594599 kubelet[3427]: E1216 12:30:18.594481 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.600424 systemd[1]: Started cri-containerd-8f111b50f64bcf90a6384bd24daa9806dac1e16418d573c40f6725fe2a653b3a.scope - libcontainer container 8f111b50f64bcf90a6384bd24daa9806dac1e16418d573c40f6725fe2a653b3a.
Dec 16 12:30:18.634063 containerd[1906]: time="2025-12-16T12:30:18.633993834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d488b7c8-ffg4b,Uid:c3b0e5f1-73f7-417d-8264-78e4886694a8,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f111b50f64bcf90a6384bd24daa9806dac1e16418d573c40f6725fe2a653b3a\""
Dec 16 12:30:18.636814 containerd[1906]: time="2025-12-16T12:30:18.636585089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Dec 16 12:30:18.641904 kubelet[3427]: E1216 12:30:18.641866 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.641904 kubelet[3427]: W1216 12:30:18.641904 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.642009 kubelet[3427]: E1216 12:30:18.641923 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.660640 kubelet[3427]: E1216 12:30:18.660595 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gmt2x" podUID="5d133913-fb0a-455b-afab-c0825f0f11d8"
Dec 16 12:30:18.673443 kubelet[3427]: E1216 12:30:18.673414 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.673443 kubelet[3427]: W1216 12:30:18.673435 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.673588 kubelet[3427]: E1216 12:30:18.673463 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.673759 kubelet[3427]: E1216 12:30:18.673743 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.673801 kubelet[3427]: W1216 12:30:18.673762 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.673820 kubelet[3427]: E1216 12:30:18.673802 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.674186 kubelet[3427]: E1216 12:30:18.674121 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.674186 kubelet[3427]: W1216 12:30:18.674139 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.674186 kubelet[3427]: E1216 12:30:18.674164 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.674404 kubelet[3427]: E1216 12:30:18.674387 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.674404 kubelet[3427]: W1216 12:30:18.674399 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.674584 kubelet[3427]: E1216 12:30:18.674502 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.674879 kubelet[3427]: E1216 12:30:18.674863 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.675072 kubelet[3427]: W1216 12:30:18.674876 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.675072 kubelet[3427]: E1216 12:30:18.675071 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.675879 kubelet[3427]: E1216 12:30:18.675842 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.675879 kubelet[3427]: W1216 12:30:18.675862 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.675968 kubelet[3427]: E1216 12:30:18.675893 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.676359 kubelet[3427]: E1216 12:30:18.676335 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.676359 kubelet[3427]: W1216 12:30:18.676353 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.676441 kubelet[3427]: E1216 12:30:18.676365 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.676771 kubelet[3427]: E1216 12:30:18.676752 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.676771 kubelet[3427]: W1216 12:30:18.676766 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.676853 kubelet[3427]: E1216 12:30:18.676777 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.677894 kubelet[3427]: E1216 12:30:18.677704 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.677894 kubelet[3427]: W1216 12:30:18.677721 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.677894 kubelet[3427]: E1216 12:30:18.677733 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.678328 kubelet[3427]: E1216 12:30:18.678306 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.678328 kubelet[3427]: W1216 12:30:18.678320 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.678328 kubelet[3427]: E1216 12:30:18.678332 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.678874 kubelet[3427]: E1216 12:30:18.678851 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.678874 kubelet[3427]: W1216 12:30:18.678866 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.678874 kubelet[3427]: E1216 12:30:18.678878 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.679832 kubelet[3427]: E1216 12:30:18.679610 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.679832 kubelet[3427]: W1216 12:30:18.679636 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.679832 kubelet[3427]: E1216 12:30:18.679647 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.680058 kubelet[3427]: E1216 12:30:18.679984 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.680058 kubelet[3427]: W1216 12:30:18.679995 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.680058 kubelet[3427]: E1216 12:30:18.680005 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.680302 kubelet[3427]: E1216 12:30:18.680208 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.680302 kubelet[3427]: W1216 12:30:18.680223 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.680626 kubelet[3427]: E1216 12:30:18.680327 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.681387 kubelet[3427]: E1216 12:30:18.681280 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.681387 kubelet[3427]: W1216 12:30:18.681296 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.681627 kubelet[3427]: E1216 12:30:18.681604 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.681972 kubelet[3427]: E1216 12:30:18.681953 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.682079 kubelet[3427]: W1216 12:30:18.682061 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.682114 kubelet[3427]: E1216 12:30:18.682082 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.683301 kubelet[3427]: E1216 12:30:18.683268 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.683403 kubelet[3427]: W1216 12:30:18.683288 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.683434 kubelet[3427]: E1216 12:30:18.683409 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.683806 kubelet[3427]: E1216 12:30:18.683789 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.683806 kubelet[3427]: W1216 12:30:18.683802 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.683928 kubelet[3427]: E1216 12:30:18.683909 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.684136 kubelet[3427]: E1216 12:30:18.684120 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.684136 kubelet[3427]: W1216 12:30:18.684132 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.684258 kubelet[3427]: E1216 12:30:18.684197 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.684659 kubelet[3427]: E1216 12:30:18.684612 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.684659 kubelet[3427]: W1216 12:30:18.684641 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.684659 kubelet[3427]: E1216 12:30:18.684652 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.689540 kubelet[3427]: E1216 12:30:18.689514 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.689774 kubelet[3427]: W1216 12:30:18.689638 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.689774 kubelet[3427]: E1216 12:30:18.689658 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.689774 kubelet[3427]: I1216 12:30:18.689686 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5d133913-fb0a-455b-afab-c0825f0f11d8-socket-dir\") pod \"csi-node-driver-gmt2x\" (UID: \"5d133913-fb0a-455b-afab-c0825f0f11d8\") " pod="calico-system/csi-node-driver-gmt2x"
Dec 16 12:30:18.689920 kubelet[3427]: E1216 12:30:18.689908 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.689974 kubelet[3427]: W1216 12:30:18.689963 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.690027 kubelet[3427]: E1216 12:30:18.690016 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.690082 kubelet[3427]: I1216 12:30:18.690073 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d133913-fb0a-455b-afab-c0825f0f11d8-kubelet-dir\") pod \"csi-node-driver-gmt2x\" (UID: \"5d133913-fb0a-455b-afab-c0825f0f11d8\") " pod="calico-system/csi-node-driver-gmt2x"
Dec 16 12:30:18.690459 kubelet[3427]: E1216 12:30:18.690433 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.690459 kubelet[3427]: W1216 12:30:18.690450 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.690536 kubelet[3427]: E1216 12:30:18.690462 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.690823 kubelet[3427]: E1216 12:30:18.690807 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.690823 kubelet[3427]: W1216 12:30:18.690820 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.690884 kubelet[3427]: E1216 12:30:18.690831 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.691623 kubelet[3427]: E1216 12:30:18.691603 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.691623 kubelet[3427]: W1216 12:30:18.691617 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.691698 kubelet[3427]: E1216 12:30:18.691629 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.691971 kubelet[3427]: I1216 12:30:18.691936 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5d133913-fb0a-455b-afab-c0825f0f11d8-registration-dir\") pod \"csi-node-driver-gmt2x\" (UID: \"5d133913-fb0a-455b-afab-c0825f0f11d8\") " pod="calico-system/csi-node-driver-gmt2x"
Dec 16 12:30:18.692239 kubelet[3427]: E1216 12:30:18.692218 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.692239 kubelet[3427]: W1216 12:30:18.692233 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.692450 kubelet[3427]: E1216 12:30:18.692245 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.692546 kubelet[3427]: I1216 12:30:18.692519 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5d133913-fb0a-455b-afab-c0825f0f11d8-varrun\") pod \"csi-node-driver-gmt2x\" (UID: \"5d133913-fb0a-455b-afab-c0825f0f11d8\") " pod="calico-system/csi-node-driver-gmt2x"
Dec 16 12:30:18.692728 kubelet[3427]: E1216 12:30:18.692708 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.692728 kubelet[3427]: W1216 12:30:18.692725 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.692800 kubelet[3427]: E1216 12:30:18.692737 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.692945 kubelet[3427]: E1216 12:30:18.692930 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.692945 kubelet[3427]: W1216 12:30:18.692941 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.693040 kubelet[3427]: E1216 12:30:18.692950 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.693091 kubelet[3427]: E1216 12:30:18.693076 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.693091 kubelet[3427]: W1216 12:30:18.693086 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.693131 kubelet[3427]: E1216 12:30:18.693093 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.693131 kubelet[3427]: I1216 12:30:18.693114 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6lgb\" (UniqueName: \"kubernetes.io/projected/5d133913-fb0a-455b-afab-c0825f0f11d8-kube-api-access-w6lgb\") pod \"csi-node-driver-gmt2x\" (UID: \"5d133913-fb0a-455b-afab-c0825f0f11d8\") " pod="calico-system/csi-node-driver-gmt2x"
Dec 16 12:30:18.693271 kubelet[3427]: E1216 12:30:18.693256 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.693271 kubelet[3427]: W1216 12:30:18.693267 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.693320 kubelet[3427]: E1216 12:30:18.693274 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.693397 kubelet[3427]: E1216 12:30:18.693384 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.693397 kubelet[3427]: W1216 12:30:18.693394 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.693435 kubelet[3427]: E1216 12:30:18.693400 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:18.693593 kubelet[3427]: E1216 12:30:18.693576 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:18.693593 kubelet[3427]: W1216 12:30:18.693589 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:18.693648 kubelet[3427]: E1216 12:30:18.693598 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Dec 16 12:30:18.693766 kubelet[3427]: E1216 12:30:18.693753 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.693766 kubelet[3427]: W1216 12:30:18.693763 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.693816 kubelet[3427]: E1216 12:30:18.693770 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:18.693911 kubelet[3427]: E1216 12:30:18.693896 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.693911 kubelet[3427]: W1216 12:30:18.693906 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.694014 kubelet[3427]: E1216 12:30:18.693913 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:18.694083 kubelet[3427]: E1216 12:30:18.694069 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.694083 kubelet[3427]: W1216 12:30:18.694078 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.694116 kubelet[3427]: E1216 12:30:18.694085 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:18.740322 containerd[1906]: time="2025-12-16T12:30:18.740205441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-82mnf,Uid:773c6761-1946-4cb1-88b8-b9afad062843,Namespace:calico-system,Attempt:0,}" Dec 16 12:30:18.786354 containerd[1906]: time="2025-12-16T12:30:18.786301678Z" level=info msg="connecting to shim ce91caa59ea6503e1527fd32e24243b814fa4df79f64a414e8e9c144811f8668" address="unix:///run/containerd/s/4675ea8074c6982982c7fd413997809fefea749b9f5472142853bfafc40a937f" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:18.795316 kubelet[3427]: E1216 12:30:18.795294 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.795600 kubelet[3427]: W1216 12:30:18.795444 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.795600 kubelet[3427]: E1216 12:30:18.795471 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:18.796054 kubelet[3427]: E1216 12:30:18.795991 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.796241 kubelet[3427]: W1216 12:30:18.796137 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.796499 kubelet[3427]: E1216 12:30:18.796415 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:18.796975 kubelet[3427]: E1216 12:30:18.796944 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.796975 kubelet[3427]: W1216 12:30:18.796956 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.797197 kubelet[3427]: E1216 12:30:18.797139 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:18.797884 kubelet[3427]: E1216 12:30:18.797866 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.797884 kubelet[3427]: W1216 12:30:18.797880 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.798056 kubelet[3427]: E1216 12:30:18.797893 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:18.798820 kubelet[3427]: E1216 12:30:18.798800 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.798820 kubelet[3427]: W1216 12:30:18.798815 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.798902 kubelet[3427]: E1216 12:30:18.798827 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:18.799413 kubelet[3427]: E1216 12:30:18.799388 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.799440 kubelet[3427]: W1216 12:30:18.799425 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.799470 kubelet[3427]: E1216 12:30:18.799438 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:18.799609 kubelet[3427]: E1216 12:30:18.799593 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.799609 kubelet[3427]: W1216 12:30:18.799604 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.799655 kubelet[3427]: E1216 12:30:18.799612 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:18.799744 kubelet[3427]: E1216 12:30:18.799728 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.799744 kubelet[3427]: W1216 12:30:18.799739 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.799777 kubelet[3427]: E1216 12:30:18.799746 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:18.799883 kubelet[3427]: E1216 12:30:18.799869 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.799883 kubelet[3427]: W1216 12:30:18.799879 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.799922 kubelet[3427]: E1216 12:30:18.799887 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:18.799989 kubelet[3427]: E1216 12:30:18.799974 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.799989 kubelet[3427]: W1216 12:30:18.799984 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.800020 kubelet[3427]: E1216 12:30:18.799991 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:18.800189 kubelet[3427]: E1216 12:30:18.800175 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.800189 kubelet[3427]: W1216 12:30:18.800186 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.800239 kubelet[3427]: E1216 12:30:18.800193 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:18.800349 kubelet[3427]: E1216 12:30:18.800335 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.800349 kubelet[3427]: W1216 12:30:18.800344 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.800394 kubelet[3427]: E1216 12:30:18.800351 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:18.800521 kubelet[3427]: E1216 12:30:18.800506 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.800521 kubelet[3427]: W1216 12:30:18.800516 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.800563 kubelet[3427]: E1216 12:30:18.800523 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:18.800650 kubelet[3427]: E1216 12:30:18.800635 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.800650 kubelet[3427]: W1216 12:30:18.800646 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.800685 kubelet[3427]: E1216 12:30:18.800652 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:18.800781 kubelet[3427]: E1216 12:30:18.800767 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.800781 kubelet[3427]: W1216 12:30:18.800777 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.800781 kubelet[3427]: E1216 12:30:18.800782 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:18.800937 kubelet[3427]: E1216 12:30:18.800923 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.800937 kubelet[3427]: W1216 12:30:18.800932 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.800937 kubelet[3427]: E1216 12:30:18.800938 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:18.801088 kubelet[3427]: E1216 12:30:18.801074 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.801088 kubelet[3427]: W1216 12:30:18.801084 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.801129 kubelet[3427]: E1216 12:30:18.801091 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:18.801350 kubelet[3427]: E1216 12:30:18.801333 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.801350 kubelet[3427]: W1216 12:30:18.801347 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.801403 kubelet[3427]: E1216 12:30:18.801355 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:18.801484 kubelet[3427]: E1216 12:30:18.801468 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.801484 kubelet[3427]: W1216 12:30:18.801479 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.801516 kubelet[3427]: E1216 12:30:18.801485 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:18.801613 kubelet[3427]: E1216 12:30:18.801598 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.801613 kubelet[3427]: W1216 12:30:18.801609 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.801661 kubelet[3427]: E1216 12:30:18.801615 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:18.801873 kubelet[3427]: E1216 12:30:18.801857 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.801873 kubelet[3427]: W1216 12:30:18.801868 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.801915 kubelet[3427]: E1216 12:30:18.801877 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:18.802026 kubelet[3427]: E1216 12:30:18.802011 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.802026 kubelet[3427]: W1216 12:30:18.802022 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.802073 kubelet[3427]: E1216 12:30:18.802028 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:18.802148 kubelet[3427]: E1216 12:30:18.802133 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.802148 kubelet[3427]: W1216 12:30:18.802144 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.802193 kubelet[3427]: E1216 12:30:18.802173 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:18.802305 kubelet[3427]: E1216 12:30:18.802291 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.802305 kubelet[3427]: W1216 12:30:18.802301 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.802352 kubelet[3427]: E1216 12:30:18.802307 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:18.802452 kubelet[3427]: E1216 12:30:18.802437 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.802452 kubelet[3427]: W1216 12:30:18.802446 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.802452 kubelet[3427]: E1216 12:30:18.802453 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:18.803917 systemd[1]: Started cri-containerd-ce91caa59ea6503e1527fd32e24243b814fa4df79f64a414e8e9c144811f8668.scope - libcontainer container ce91caa59ea6503e1527fd32e24243b814fa4df79f64a414e8e9c144811f8668. 
Dec 16 12:30:18.813664 kubelet[3427]: E1216 12:30:18.813625 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:18.813738 kubelet[3427]: W1216 12:30:18.813658 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:18.813738 kubelet[3427]: E1216 12:30:18.813717 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:18.831434 containerd[1906]: time="2025-12-16T12:30:18.831395823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-82mnf,Uid:773c6761-1946-4cb1-88b8-b9afad062843,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce91caa59ea6503e1527fd32e24243b814fa4df79f64a414e8e9c144811f8668\"" Dec 16 12:30:19.847333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount647054421.mount: Deactivated successfully. 
Dec 16 12:30:20.378505 containerd[1906]: time="2025-12-16T12:30:20.378456079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:20.382689 containerd[1906]: time="2025-12-16T12:30:20.382655617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Dec 16 12:30:20.385911 containerd[1906]: time="2025-12-16T12:30:20.385881249Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:20.390291 containerd[1906]: time="2025-12-16T12:30:20.390242023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:20.390792 containerd[1906]: time="2025-12-16T12:30:20.390469398Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.753650262s" Dec 16 12:30:20.390792 containerd[1906]: time="2025-12-16T12:30:20.390499566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 16 12:30:20.392224 containerd[1906]: time="2025-12-16T12:30:20.391567419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 16 12:30:20.407505 containerd[1906]: time="2025-12-16T12:30:20.407477220Z" level=info msg="CreateContainer within sandbox \"8f111b50f64bcf90a6384bd24daa9806dac1e16418d573c40f6725fe2a653b3a\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 16 12:30:20.432949 containerd[1906]: time="2025-12-16T12:30:20.432084817Z" level=info msg="Container 0a9f5c64cde747c0f750553d0d77d38d846e6fb214c84b812e5a349e1cc32468: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:20.451354 containerd[1906]: time="2025-12-16T12:30:20.451307755Z" level=info msg="CreateContainer within sandbox \"8f111b50f64bcf90a6384bd24daa9806dac1e16418d573c40f6725fe2a653b3a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0a9f5c64cde747c0f750553d0d77d38d846e6fb214c84b812e5a349e1cc32468\"" Dec 16 12:30:20.451962 containerd[1906]: time="2025-12-16T12:30:20.451934220Z" level=info msg="StartContainer for \"0a9f5c64cde747c0f750553d0d77d38d846e6fb214c84b812e5a349e1cc32468\"" Dec 16 12:30:20.452796 containerd[1906]: time="2025-12-16T12:30:20.452766219Z" level=info msg="connecting to shim 0a9f5c64cde747c0f750553d0d77d38d846e6fb214c84b812e5a349e1cc32468" address="unix:///run/containerd/s/fa6c97b08a741e63473a801d117b2309825c76f07d5e213315f0c8c678af0c19" protocol=ttrpc version=3 Dec 16 12:30:20.474366 kubelet[3427]: E1216 12:30:20.474327 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gmt2x" podUID="5d133913-fb0a-455b-afab-c0825f0f11d8" Dec 16 12:30:20.476299 systemd[1]: Started cri-containerd-0a9f5c64cde747c0f750553d0d77d38d846e6fb214c84b812e5a349e1cc32468.scope - libcontainer container 0a9f5c64cde747c0f750553d0d77d38d846e6fb214c84b812e5a349e1cc32468. 
Dec 16 12:30:20.514583 containerd[1906]: time="2025-12-16T12:30:20.514536490Z" level=info msg="StartContainer for \"0a9f5c64cde747c0f750553d0d77d38d846e6fb214c84b812e5a349e1cc32468\" returns successfully" Dec 16 12:30:20.597168 kubelet[3427]: E1216 12:30:20.596996 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:20.597168 kubelet[3427]: W1216 12:30:20.597049 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:20.597168 kubelet[3427]: E1216 12:30:20.597069 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:20.597667 kubelet[3427]: E1216 12:30:20.597643 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:20.597909 kubelet[3427]: W1216 12:30:20.597812 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:20.598078 kubelet[3427]: E1216 12:30:20.598018 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Dec 16 12:30:20.598883 kubelet[3427]: E1216 12:30:20.598815 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:20.598883 kubelet[3427]: W1216 12:30:20.598828 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:20.598883 kubelet[3427]: E1216 12:30:20.598839 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three kubelet FlexVolume messages above repeat, timestamps advancing, through Dec 16 12:30:20.622202]
Dec 16 12:30:20.622457 kubelet[3427]: E1216 12:30:20.622217 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Dec 16 12:30:20.622656 kubelet[3427]: E1216 12:30:20.622646 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:20.622729 kubelet[3427]: W1216 12:30:20.622718 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:20.622777 kubelet[3427]: E1216 12:30:20.622767 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:21.571119 kubelet[3427]: I1216 12:30:21.571086 3427 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 16 12:30:21.584190 containerd[1906]: time="2025-12-16T12:30:21.584020237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:21.587591 containerd[1906]: time="2025-12-16T12:30:21.587542781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Dec 16 12:30:21.592532 containerd[1906]: time="2025-12-16T12:30:21.592490316Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:21.597243 containerd[1906]: time="2025-12-16T12:30:21.597191451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 16 12:30:21.597575 containerd[1906]: time="2025-12-16T12:30:21.597499100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.205906856s"
Dec 16 12:30:21.597575 containerd[1906]: time="2025-12-16T12:30:21.597531053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Dec 16 12:30:21.611916 kubelet[3427]: E1216 12:30:21.611890 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:21.617076 kubelet[3427]: W1216 12:30:21.612002 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:21.617076 kubelet[3427]: E1216 12:30:21.612026 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:21.617076 kubelet[3427]: E1216 12:30:21.612192 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:21.617076 kubelet[3427]: W1216 12:30:21.612201 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:21.617076 kubelet[3427]: E1216 12:30:21.612208 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
[the three kubelet FlexVolume messages above repeat, timestamps advancing, through Dec 16 12:30:21.613680]
Dec 16 12:30:21.617719 kubelet[3427]: E1216 12:30:21.613685 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Dec 16 12:30:21.617719 kubelet[3427]: E1216 12:30:21.613793 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:21.617719 kubelet[3427]: W1216 12:30:21.613799 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:21.617719 kubelet[3427]: E1216 12:30:21.613804 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 16 12:30:21.619622 containerd[1906]: time="2025-12-16T12:30:21.619576852Z" level=info msg="CreateContainer within sandbox \"ce91caa59ea6503e1527fd32e24243b814fa4df79f64a414e8e9c144811f8668\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 16 12:30:21.620809 kubelet[3427]: E1216 12:30:21.620790 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 16 12:30:21.620809 kubelet[3427]: W1216 12:30:21.620804 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 16 12:30:21.620809 kubelet[3427]: E1216 12:30:21.620816 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
[the three kubelet FlexVolume messages above repeat, timestamps advancing, through Dec 16 12:30:21.623626]
Dec 16 12:30:21.623784 kubelet[3427]: E1216 12:30:21.623637 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Dec 16 12:30:21.624198 kubelet[3427]: E1216 12:30:21.624181 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:21.624359 kubelet[3427]: W1216 12:30:21.624251 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:21.624359 kubelet[3427]: E1216 12:30:21.624267 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:21.624726 kubelet[3427]: E1216 12:30:21.624698 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:21.624893 kubelet[3427]: W1216 12:30:21.624711 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:21.624893 kubelet[3427]: E1216 12:30:21.624800 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:21.625062 kubelet[3427]: E1216 12:30:21.625035 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:21.625179 kubelet[3427]: W1216 12:30:21.625047 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:21.625179 kubelet[3427]: E1216 12:30:21.625124 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:21.625544 kubelet[3427]: E1216 12:30:21.625483 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:21.625544 kubelet[3427]: W1216 12:30:21.625496 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:21.625544 kubelet[3427]: E1216 12:30:21.625506 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:21.625776 kubelet[3427]: E1216 12:30:21.625705 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:21.625776 kubelet[3427]: W1216 12:30:21.625715 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:21.625776 kubelet[3427]: E1216 12:30:21.625725 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:21.625910 kubelet[3427]: E1216 12:30:21.625825 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:21.625910 kubelet[3427]: W1216 12:30:21.625831 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:21.625910 kubelet[3427]: E1216 12:30:21.625837 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:21.625988 kubelet[3427]: E1216 12:30:21.625955 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:21.625988 kubelet[3427]: W1216 12:30:21.625960 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:21.625988 kubelet[3427]: E1216 12:30:21.625966 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:21.626421 kubelet[3427]: E1216 12:30:21.626347 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:21.626421 kubelet[3427]: W1216 12:30:21.626363 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:21.626421 kubelet[3427]: E1216 12:30:21.626374 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:21.626733 kubelet[3427]: E1216 12:30:21.626706 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:21.626733 kubelet[3427]: W1216 12:30:21.626718 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:21.626938 kubelet[3427]: E1216 12:30:21.626811 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:21.627033 kubelet[3427]: E1216 12:30:21.627015 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:21.627033 kubelet[3427]: W1216 12:30:21.627027 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:21.627124 kubelet[3427]: E1216 12:30:21.627036 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 16 12:30:21.627200 kubelet[3427]: E1216 12:30:21.627167 3427 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 16 12:30:21.627200 kubelet[3427]: W1216 12:30:21.627173 3427 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 16 12:30:21.627200 kubelet[3427]: E1216 12:30:21.627180 3427 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 16 12:30:21.645855 containerd[1906]: time="2025-12-16T12:30:21.645760355Z" level=info msg="Container 8e0e191e33374ceefe0a6f14250499da4e22442f383c0eb28ae988aab30bd6b6: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:21.666617 containerd[1906]: time="2025-12-16T12:30:21.666577729Z" level=info msg="CreateContainer within sandbox \"ce91caa59ea6503e1527fd32e24243b814fa4df79f64a414e8e9c144811f8668\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8e0e191e33374ceefe0a6f14250499da4e22442f383c0eb28ae988aab30bd6b6\"" Dec 16 12:30:21.667969 containerd[1906]: time="2025-12-16T12:30:21.667938918Z" level=info msg="StartContainer for \"8e0e191e33374ceefe0a6f14250499da4e22442f383c0eb28ae988aab30bd6b6\"" Dec 16 12:30:21.669544 containerd[1906]: time="2025-12-16T12:30:21.669494153Z" level=info msg="connecting to shim 8e0e191e33374ceefe0a6f14250499da4e22442f383c0eb28ae988aab30bd6b6" address="unix:///run/containerd/s/4675ea8074c6982982c7fd413997809fefea749b9f5472142853bfafc40a937f" protocol=ttrpc version=3 Dec 16 12:30:21.690302 systemd[1]: Started cri-containerd-8e0e191e33374ceefe0a6f14250499da4e22442f383c0eb28ae988aab30bd6b6.scope - libcontainer container 8e0e191e33374ceefe0a6f14250499da4e22442f383c0eb28ae988aab30bd6b6. 
Dec 16 12:30:21.742532 containerd[1906]: time="2025-12-16T12:30:21.742460968Z" level=info msg="StartContainer for \"8e0e191e33374ceefe0a6f14250499da4e22442f383c0eb28ae988aab30bd6b6\" returns successfully" Dec 16 12:30:21.748762 systemd[1]: cri-containerd-8e0e191e33374ceefe0a6f14250499da4e22442f383c0eb28ae988aab30bd6b6.scope: Deactivated successfully. Dec 16 12:30:21.753161 containerd[1906]: time="2025-12-16T12:30:21.752429719Z" level=info msg="received container exit event container_id:\"8e0e191e33374ceefe0a6f14250499da4e22442f383c0eb28ae988aab30bd6b6\" id:\"8e0e191e33374ceefe0a6f14250499da4e22442f383c0eb28ae988aab30bd6b6\" pid:4134 exited_at:{seconds:1765888221 nanos:750738033}" Dec 16 12:30:21.776372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e0e191e33374ceefe0a6f14250499da4e22442f383c0eb28ae988aab30bd6b6-rootfs.mount: Deactivated successfully. Dec 16 12:30:22.474551 kubelet[3427]: E1216 12:30:22.474195 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gmt2x" podUID="5d133913-fb0a-455b-afab-c0825f0f11d8" Dec 16 12:30:22.595804 kubelet[3427]: I1216 12:30:22.595233 3427 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d488b7c8-ffg4b" podStartSLOduration=2.839796426 podStartE2EDuration="4.595215881s" podCreationTimestamp="2025-12-16 12:30:18 +0000 UTC" firstStartedPulling="2025-12-16 12:30:18.63604989 +0000 UTC m=+20.257307455" lastFinishedPulling="2025-12-16 12:30:20.391469345 +0000 UTC m=+22.012726910" observedRunningTime="2025-12-16 12:30:20.593859269 +0000 UTC m=+22.215116866" watchObservedRunningTime="2025-12-16 12:30:22.595215881 +0000 UTC m=+24.216473446" Dec 16 12:30:23.049450 kubelet[3427]: I1216 12:30:23.049345 3427 prober_manager.go:312] "Failed to trigger a manual 
run" probe="Readiness" Dec 16 12:30:23.579752 containerd[1906]: time="2025-12-16T12:30:23.579685662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 16 12:30:24.476292 kubelet[3427]: E1216 12:30:24.476251 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gmt2x" podUID="5d133913-fb0a-455b-afab-c0825f0f11d8" Dec 16 12:30:25.881008 containerd[1906]: time="2025-12-16T12:30:25.880492299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:25.883156 containerd[1906]: time="2025-12-16T12:30:25.883122954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Dec 16 12:30:25.886317 containerd[1906]: time="2025-12-16T12:30:25.886291743Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:25.890552 containerd[1906]: time="2025-12-16T12:30:25.890522992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:25.890875 containerd[1906]: time="2025-12-16T12:30:25.890847008Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.311124282s" Dec 16 12:30:25.890875 containerd[1906]: 
time="2025-12-16T12:30:25.890871409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 16 12:30:25.898194 containerd[1906]: time="2025-12-16T12:30:25.898145612Z" level=info msg="CreateContainer within sandbox \"ce91caa59ea6503e1527fd32e24243b814fa4df79f64a414e8e9c144811f8668\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 12:30:25.922461 containerd[1906]: time="2025-12-16T12:30:25.921614312Z" level=info msg="Container c1655f07b26eb128517f6c02b8527db33b9e40b62aba11e3a7ce69fb5a51923a: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:25.943896 containerd[1906]: time="2025-12-16T12:30:25.943770729Z" level=info msg="CreateContainer within sandbox \"ce91caa59ea6503e1527fd32e24243b814fa4df79f64a414e8e9c144811f8668\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c1655f07b26eb128517f6c02b8527db33b9e40b62aba11e3a7ce69fb5a51923a\"" Dec 16 12:30:25.946178 containerd[1906]: time="2025-12-16T12:30:25.945546360Z" level=info msg="StartContainer for \"c1655f07b26eb128517f6c02b8527db33b9e40b62aba11e3a7ce69fb5a51923a\"" Dec 16 12:30:25.948440 containerd[1906]: time="2025-12-16T12:30:25.948264889Z" level=info msg="connecting to shim c1655f07b26eb128517f6c02b8527db33b9e40b62aba11e3a7ce69fb5a51923a" address="unix:///run/containerd/s/4675ea8074c6982982c7fd413997809fefea749b9f5472142853bfafc40a937f" protocol=ttrpc version=3 Dec 16 12:30:25.973326 systemd[1]: Started cri-containerd-c1655f07b26eb128517f6c02b8527db33b9e40b62aba11e3a7ce69fb5a51923a.scope - libcontainer container c1655f07b26eb128517f6c02b8527db33b9e40b62aba11e3a7ce69fb5a51923a. 
Dec 16 12:30:26.029817 containerd[1906]: time="2025-12-16T12:30:26.029774927Z" level=info msg="StartContainer for \"c1655f07b26eb128517f6c02b8527db33b9e40b62aba11e3a7ce69fb5a51923a\" returns successfully" Dec 16 12:30:26.476323 kubelet[3427]: E1216 12:30:26.476287 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gmt2x" podUID="5d133913-fb0a-455b-afab-c0825f0f11d8" Dec 16 12:30:27.451103 containerd[1906]: time="2025-12-16T12:30:27.451036898Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 12:30:27.453126 systemd[1]: cri-containerd-c1655f07b26eb128517f6c02b8527db33b9e40b62aba11e3a7ce69fb5a51923a.scope: Deactivated successfully. Dec 16 12:30:27.453725 systemd[1]: cri-containerd-c1655f07b26eb128517f6c02b8527db33b9e40b62aba11e3a7ce69fb5a51923a.scope: Consumed 331ms CPU time, 186.8M memory peak, 165.9M written to disk. Dec 16 12:30:27.455807 containerd[1906]: time="2025-12-16T12:30:27.455770713Z" level=info msg="received container exit event container_id:\"c1655f07b26eb128517f6c02b8527db33b9e40b62aba11e3a7ce69fb5a51923a\" id:\"c1655f07b26eb128517f6c02b8527db33b9e40b62aba11e3a7ce69fb5a51923a\" pid:4193 exited_at:{seconds:1765888227 nanos:455546323}" Dec 16 12:30:27.475900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1655f07b26eb128517f6c02b8527db33b9e40b62aba11e3a7ce69fb5a51923a-rootfs.mount: Deactivated successfully. 
Dec 16 12:30:27.536415 kubelet[3427]: I1216 12:30:27.536390 3427 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 12:30:28.066677 systemd[1]: Created slice kubepods-burstable-pod5bd00760_7369_4a4e_8d38_12395d49edf0.slice - libcontainer container kubepods-burstable-pod5bd00760_7369_4a4e_8d38_12395d49edf0.slice. Dec 16 12:30:28.076467 systemd[1]: Created slice kubepods-burstable-pod22fc970d_0749_440b_91ca_bc8521b4622e.slice - libcontainer container kubepods-burstable-pod22fc970d_0749_440b_91ca_bc8521b4622e.slice. Dec 16 12:30:28.090797 systemd[1]: Created slice kubepods-besteffort-podd50bef5a_6902_49f4_92df_f935afcbb9ff.slice - libcontainer container kubepods-besteffort-podd50bef5a_6902_49f4_92df_f935afcbb9ff.slice. Dec 16 12:30:28.102138 systemd[1]: Created slice kubepods-besteffort-pod1831bbdd_6642_47ec_b6ce_03f07d23d2da.slice - libcontainer container kubepods-besteffort-pod1831bbdd_6642_47ec_b6ce_03f07d23d2da.slice. Dec 16 12:30:28.109488 systemd[1]: Created slice kubepods-besteffort-pod042d90c0_42d8_409c_add9_7c678aa9ba3e.slice - libcontainer container kubepods-besteffort-pod042d90c0_42d8_409c_add9_7c678aa9ba3e.slice. Dec 16 12:30:28.115989 systemd[1]: Created slice kubepods-besteffort-pod55a76304_4aea_4f15_bc1b_68bedd920d78.slice - libcontainer container kubepods-besteffort-pod55a76304_4aea_4f15_bc1b_68bedd920d78.slice. Dec 16 12:30:28.122401 systemd[1]: Created slice kubepods-besteffort-podb54c9e2a_c2a6_4080_b5bd_3a3aeebd49cb.slice - libcontainer container kubepods-besteffort-podb54c9e2a_c2a6_4080_b5bd_3a3aeebd49cb.slice. 
Dec 16 12:30:28.161601 kubelet[3427]: I1216 12:30:28.161519 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1831bbdd-6642-47ec-b6ce-03f07d23d2da-calico-apiserver-certs\") pod \"calico-apiserver-587446bcc5-b8k9n\" (UID: \"1831bbdd-6642-47ec-b6ce-03f07d23d2da\") " pod="calico-apiserver/calico-apiserver-587446bcc5-b8k9n" Dec 16 12:30:28.161601 kubelet[3427]: I1216 12:30:28.161558 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bd00760-7369-4a4e-8d38-12395d49edf0-config-volume\") pod \"coredns-674b8bbfcf-vbkjn\" (UID: \"5bd00760-7369-4a4e-8d38-12395d49edf0\") " pod="kube-system/coredns-674b8bbfcf-vbkjn" Dec 16 12:30:28.161601 kubelet[3427]: I1216 12:30:28.161571 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnksh\" (UniqueName: \"kubernetes.io/projected/22fc970d-0749-440b-91ca-bc8521b4622e-kube-api-access-xnksh\") pod \"coredns-674b8bbfcf-r9s4g\" (UID: \"22fc970d-0749-440b-91ca-bc8521b4622e\") " pod="kube-system/coredns-674b8bbfcf-r9s4g" Dec 16 12:30:28.161601 kubelet[3427]: I1216 12:30:28.161591 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrffh\" (UniqueName: \"kubernetes.io/projected/1831bbdd-6642-47ec-b6ce-03f07d23d2da-kube-api-access-zrffh\") pod \"calico-apiserver-587446bcc5-b8k9n\" (UID: \"1831bbdd-6642-47ec-b6ce-03f07d23d2da\") " pod="calico-apiserver/calico-apiserver-587446bcc5-b8k9n" Dec 16 12:30:28.161601 kubelet[3427]: I1216 12:30:28.161604 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/042d90c0-42d8-409c-add9-7c678aa9ba3e-calico-apiserver-certs\") pod 
\"calico-apiserver-587446bcc5-m2zzh\" (UID: \"042d90c0-42d8-409c-add9-7c678aa9ba3e\") " pod="calico-apiserver/calico-apiserver-587446bcc5-m2zzh" Dec 16 12:30:28.161817 kubelet[3427]: I1216 12:30:28.161615 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb-config\") pod \"goldmane-666569f655-dcszz\" (UID: \"b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb\") " pod="calico-system/goldmane-666569f655-dcszz" Dec 16 12:30:28.161817 kubelet[3427]: I1216 12:30:28.161624 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv688\" (UniqueName: \"kubernetes.io/projected/b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb-kube-api-access-wv688\") pod \"goldmane-666569f655-dcszz\" (UID: \"b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb\") " pod="calico-system/goldmane-666569f655-dcszz" Dec 16 12:30:28.161817 kubelet[3427]: I1216 12:30:28.161636 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2c5j\" (UniqueName: \"kubernetes.io/projected/d50bef5a-6902-49f4-92df-f935afcbb9ff-kube-api-access-m2c5j\") pod \"calico-kube-controllers-cd89997db-lnfvm\" (UID: \"d50bef5a-6902-49f4-92df-f935afcbb9ff\") " pod="calico-system/calico-kube-controllers-cd89997db-lnfvm" Dec 16 12:30:28.161817 kubelet[3427]: I1216 12:30:28.161646 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55a76304-4aea-4f15-bc1b-68bedd920d78-whisker-ca-bundle\") pod \"whisker-57f6cb77d6-ws6zk\" (UID: \"55a76304-4aea-4f15-bc1b-68bedd920d78\") " pod="calico-system/whisker-57f6cb77d6-ws6zk" Dec 16 12:30:28.161817 kubelet[3427]: I1216 12:30:28.161658 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/22fc970d-0749-440b-91ca-bc8521b4622e-config-volume\") pod \"coredns-674b8bbfcf-r9s4g\" (UID: \"22fc970d-0749-440b-91ca-bc8521b4622e\") " pod="kube-system/coredns-674b8bbfcf-r9s4g" Dec 16 12:30:28.161896 kubelet[3427]: I1216 12:30:28.161669 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqnqh\" (UniqueName: \"kubernetes.io/projected/042d90c0-42d8-409c-add9-7c678aa9ba3e-kube-api-access-xqnqh\") pod \"calico-apiserver-587446bcc5-m2zzh\" (UID: \"042d90c0-42d8-409c-add9-7c678aa9ba3e\") " pod="calico-apiserver/calico-apiserver-587446bcc5-m2zzh" Dec 16 12:30:28.161896 kubelet[3427]: I1216 12:30:28.161688 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb-goldmane-ca-bundle\") pod \"goldmane-666569f655-dcszz\" (UID: \"b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb\") " pod="calico-system/goldmane-666569f655-dcszz" Dec 16 12:30:28.161896 kubelet[3427]: I1216 12:30:28.161696 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb-goldmane-key-pair\") pod \"goldmane-666569f655-dcszz\" (UID: \"b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb\") " pod="calico-system/goldmane-666569f655-dcszz" Dec 16 12:30:28.161896 kubelet[3427]: I1216 12:30:28.161705 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d50bef5a-6902-49f4-92df-f935afcbb9ff-tigera-ca-bundle\") pod \"calico-kube-controllers-cd89997db-lnfvm\" (UID: \"d50bef5a-6902-49f4-92df-f935afcbb9ff\") " pod="calico-system/calico-kube-controllers-cd89997db-lnfvm" Dec 16 12:30:28.161896 kubelet[3427]: I1216 12:30:28.161715 3427 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/55a76304-4aea-4f15-bc1b-68bedd920d78-whisker-backend-key-pair\") pod \"whisker-57f6cb77d6-ws6zk\" (UID: \"55a76304-4aea-4f15-bc1b-68bedd920d78\") " pod="calico-system/whisker-57f6cb77d6-ws6zk" Dec 16 12:30:28.161975 kubelet[3427]: I1216 12:30:28.161735 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c6vw\" (UniqueName: \"kubernetes.io/projected/5bd00760-7369-4a4e-8d38-12395d49edf0-kube-api-access-7c6vw\") pod \"coredns-674b8bbfcf-vbkjn\" (UID: \"5bd00760-7369-4a4e-8d38-12395d49edf0\") " pod="kube-system/coredns-674b8bbfcf-vbkjn" Dec 16 12:30:28.161975 kubelet[3427]: I1216 12:30:28.161745 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsb2d\" (UniqueName: \"kubernetes.io/projected/55a76304-4aea-4f15-bc1b-68bedd920d78-kube-api-access-nsb2d\") pod \"whisker-57f6cb77d6-ws6zk\" (UID: \"55a76304-4aea-4f15-bc1b-68bedd920d78\") " pod="calico-system/whisker-57f6cb77d6-ws6zk" Dec 16 12:30:28.373111 containerd[1906]: time="2025-12-16T12:30:28.373063453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vbkjn,Uid:5bd00760-7369-4a4e-8d38-12395d49edf0,Namespace:kube-system,Attempt:0,}" Dec 16 12:30:28.381894 containerd[1906]: time="2025-12-16T12:30:28.381663739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r9s4g,Uid:22fc970d-0749-440b-91ca-bc8521b4622e,Namespace:kube-system,Attempt:0,}" Dec 16 12:30:28.400626 containerd[1906]: time="2025-12-16T12:30:28.400529580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd89997db-lnfvm,Uid:d50bef5a-6902-49f4-92df-f935afcbb9ff,Namespace:calico-system,Attempt:0,}" Dec 16 12:30:28.408072 containerd[1906]: time="2025-12-16T12:30:28.407931906Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-apiserver-587446bcc5-b8k9n,Uid:1831bbdd-6642-47ec-b6ce-03f07d23d2da,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:30:28.415105 containerd[1906]: time="2025-12-16T12:30:28.415038000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587446bcc5-m2zzh,Uid:042d90c0-42d8-409c-add9-7c678aa9ba3e,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:30:28.421298 containerd[1906]: time="2025-12-16T12:30:28.421205061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57f6cb77d6-ws6zk,Uid:55a76304-4aea-4f15-bc1b-68bedd920d78,Namespace:calico-system,Attempt:0,}" Dec 16 12:30:28.425662 containerd[1906]: time="2025-12-16T12:30:28.425631268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dcszz,Uid:b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb,Namespace:calico-system,Attempt:0,}" Dec 16 12:30:28.429891 containerd[1906]: time="2025-12-16T12:30:28.429849917Z" level=error msg="Failed to destroy network for sandbox \"9e8585b590d76d6fffd1a426b6189606ad2a4fb5b05072da403b9b7333e2c964\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.448550 containerd[1906]: time="2025-12-16T12:30:28.448437542Z" level=error msg="Failed to destroy network for sandbox \"945d8badddc86436d42de54dcf8bd0eb42c355520a4f01894f76ab388c8dca6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.459868 containerd[1906]: time="2025-12-16T12:30:28.459785830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vbkjn,Uid:5bd00760-7369-4a4e-8d38-12395d49edf0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9e8585b590d76d6fffd1a426b6189606ad2a4fb5b05072da403b9b7333e2c964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.460439 kubelet[3427]: E1216 12:30:28.460300 3427 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e8585b590d76d6fffd1a426b6189606ad2a4fb5b05072da403b9b7333e2c964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.460439 kubelet[3427]: E1216 12:30:28.460373 3427 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e8585b590d76d6fffd1a426b6189606ad2a4fb5b05072da403b9b7333e2c964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vbkjn" Dec 16 12:30:28.460439 kubelet[3427]: E1216 12:30:28.460391 3427 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e8585b590d76d6fffd1a426b6189606ad2a4fb5b05072da403b9b7333e2c964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vbkjn" Dec 16 12:30:28.460521 kubelet[3427]: E1216 12:30:28.460440 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vbkjn_kube-system(5bd00760-7369-4a4e-8d38-12395d49edf0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-vbkjn_kube-system(5bd00760-7369-4a4e-8d38-12395d49edf0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e8585b590d76d6fffd1a426b6189606ad2a4fb5b05072da403b9b7333e2c964\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vbkjn" podUID="5bd00760-7369-4a4e-8d38-12395d49edf0" Dec 16 12:30:28.469560 containerd[1906]: time="2025-12-16T12:30:28.469338806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r9s4g,Uid:22fc970d-0749-440b-91ca-bc8521b4622e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"945d8badddc86436d42de54dcf8bd0eb42c355520a4f01894f76ab388c8dca6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.470175 kubelet[3427]: E1216 12:30:28.469886 3427 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"945d8badddc86436d42de54dcf8bd0eb42c355520a4f01894f76ab388c8dca6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.470175 kubelet[3427]: E1216 12:30:28.469928 3427 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"945d8badddc86436d42de54dcf8bd0eb42c355520a4f01894f76ab388c8dca6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-r9s4g" Dec 16 12:30:28.470175 kubelet[3427]: E1216 12:30:28.469945 3427 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"945d8badddc86436d42de54dcf8bd0eb42c355520a4f01894f76ab388c8dca6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r9s4g" Dec 16 12:30:28.470297 kubelet[3427]: E1216 12:30:28.469988 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-r9s4g_kube-system(22fc970d-0749-440b-91ca-bc8521b4622e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-r9s4g_kube-system(22fc970d-0749-440b-91ca-bc8521b4622e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"945d8badddc86436d42de54dcf8bd0eb42c355520a4f01894f76ab388c8dca6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-r9s4g" podUID="22fc970d-0749-440b-91ca-bc8521b4622e" Dec 16 12:30:28.494794 systemd[1]: Created slice kubepods-besteffort-pod5d133913_fb0a_455b_afab_c0825f0f11d8.slice - libcontainer container kubepods-besteffort-pod5d133913_fb0a_455b_afab_c0825f0f11d8.slice. 
Dec 16 12:30:28.496939 containerd[1906]: time="2025-12-16T12:30:28.496852486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gmt2x,Uid:5d133913-fb0a-455b-afab-c0825f0f11d8,Namespace:calico-system,Attempt:0,}" Dec 16 12:30:28.522331 containerd[1906]: time="2025-12-16T12:30:28.522284254Z" level=error msg="Failed to destroy network for sandbox \"ff70263e4de88bba9396f79978fa096ef29a920a1ef8d0ac46f37245ab3669d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.525238 systemd[1]: run-netns-cni\x2d4a3974ef\x2dff95\x2dfb9e\x2d2435\x2d1087a9b0f933.mount: Deactivated successfully. Dec 16 12:30:28.534109 containerd[1906]: time="2025-12-16T12:30:28.534007456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd89997db-lnfvm,Uid:d50bef5a-6902-49f4-92df-f935afcbb9ff,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff70263e4de88bba9396f79978fa096ef29a920a1ef8d0ac46f37245ab3669d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.535033 kubelet[3427]: E1216 12:30:28.534899 3427 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff70263e4de88bba9396f79978fa096ef29a920a1ef8d0ac46f37245ab3669d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.535033 kubelet[3427]: E1216 12:30:28.534961 3427 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ff70263e4de88bba9396f79978fa096ef29a920a1ef8d0ac46f37245ab3669d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cd89997db-lnfvm" Dec 16 12:30:28.535033 kubelet[3427]: E1216 12:30:28.534985 3427 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff70263e4de88bba9396f79978fa096ef29a920a1ef8d0ac46f37245ab3669d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cd89997db-lnfvm" Dec 16 12:30:28.535167 kubelet[3427]: E1216 12:30:28.535029 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cd89997db-lnfvm_calico-system(d50bef5a-6902-49f4-92df-f935afcbb9ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cd89997db-lnfvm_calico-system(d50bef5a-6902-49f4-92df-f935afcbb9ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff70263e4de88bba9396f79978fa096ef29a920a1ef8d0ac46f37245ab3669d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cd89997db-lnfvm" podUID="d50bef5a-6902-49f4-92df-f935afcbb9ff" Dec 16 12:30:28.561372 containerd[1906]: time="2025-12-16T12:30:28.561247249Z" level=error msg="Failed to destroy network for sandbox \"3b507733c9a15797f111ba2b73157a3465abf9cc6fae53840617e6525177dcd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.564798 systemd[1]: run-netns-cni\x2d2b7b685d\x2ddfc1\x2de6d9\x2df606\x2d137f166dc989.mount: Deactivated successfully. Dec 16 12:30:28.569886 containerd[1906]: time="2025-12-16T12:30:28.569664890Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587446bcc5-b8k9n,Uid:1831bbdd-6642-47ec-b6ce-03f07d23d2da,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b507733c9a15797f111ba2b73157a3465abf9cc6fae53840617e6525177dcd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.570965 kubelet[3427]: E1216 12:30:28.570417 3427 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b507733c9a15797f111ba2b73157a3465abf9cc6fae53840617e6525177dcd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.570965 kubelet[3427]: E1216 12:30:28.570468 3427 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b507733c9a15797f111ba2b73157a3465abf9cc6fae53840617e6525177dcd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-587446bcc5-b8k9n" Dec 16 12:30:28.570965 kubelet[3427]: E1216 12:30:28.570483 3427 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b507733c9a15797f111ba2b73157a3465abf9cc6fae53840617e6525177dcd7\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-587446bcc5-b8k9n" Dec 16 12:30:28.571258 kubelet[3427]: E1216 12:30:28.570519 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-587446bcc5-b8k9n_calico-apiserver(1831bbdd-6642-47ec-b6ce-03f07d23d2da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-587446bcc5-b8k9n_calico-apiserver(1831bbdd-6642-47ec-b6ce-03f07d23d2da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b507733c9a15797f111ba2b73157a3465abf9cc6fae53840617e6525177dcd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-587446bcc5-b8k9n" podUID="1831bbdd-6642-47ec-b6ce-03f07d23d2da" Dec 16 12:30:28.593132 containerd[1906]: time="2025-12-16T12:30:28.592393355Z" level=error msg="Failed to destroy network for sandbox \"64a45e07c5e2ea0d29de2ad2e936bfa42d351b4f311e8999e6a58ce2af914c69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.596641 systemd[1]: run-netns-cni\x2d6cb2ea63\x2deee1\x2de6da\x2dacd6\x2d2ad193c7ca63.mount: Deactivated successfully. 
Dec 16 12:30:28.600435 containerd[1906]: time="2025-12-16T12:30:28.600394121Z" level=error msg="Failed to destroy network for sandbox \"bf19e549ae0ad3846a783c4b23d483bc6b96117b747ada5fb7dcd4f1c5af9917\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.605111 containerd[1906]: time="2025-12-16T12:30:28.604862432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 16 12:30:28.605821 containerd[1906]: time="2025-12-16T12:30:28.605685302Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587446bcc5-m2zzh,Uid:042d90c0-42d8-409c-add9-7c678aa9ba3e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64a45e07c5e2ea0d29de2ad2e936bfa42d351b4f311e8999e6a58ce2af914c69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.606174 kubelet[3427]: E1216 12:30:28.606132 3427 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64a45e07c5e2ea0d29de2ad2e936bfa42d351b4f311e8999e6a58ce2af914c69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.606363 kubelet[3427]: E1216 12:30:28.606259 3427 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64a45e07c5e2ea0d29de2ad2e936bfa42d351b4f311e8999e6a58ce2af914c69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-587446bcc5-m2zzh" Dec 16 12:30:28.606363 kubelet[3427]: E1216 12:30:28.606280 3427 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64a45e07c5e2ea0d29de2ad2e936bfa42d351b4f311e8999e6a58ce2af914c69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-587446bcc5-m2zzh" Dec 16 12:30:28.606363 kubelet[3427]: E1216 12:30:28.606326 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-587446bcc5-m2zzh_calico-apiserver(042d90c0-42d8-409c-add9-7c678aa9ba3e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-587446bcc5-m2zzh_calico-apiserver(042d90c0-42d8-409c-add9-7c678aa9ba3e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64a45e07c5e2ea0d29de2ad2e936bfa42d351b4f311e8999e6a58ce2af914c69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-587446bcc5-m2zzh" podUID="042d90c0-42d8-409c-add9-7c678aa9ba3e" Dec 16 12:30:28.607030 containerd[1906]: time="2025-12-16T12:30:28.606992745Z" level=error msg="Failed to destroy network for sandbox \"622954341f851352beb04f02c2cf519c1ade65199eb30f8db6e95da083273178\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.609490 containerd[1906]: time="2025-12-16T12:30:28.609443819Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-dcszz,Uid:b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf19e549ae0ad3846a783c4b23d483bc6b96117b747ada5fb7dcd4f1c5af9917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.609959 kubelet[3427]: E1216 12:30:28.609933 3427 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf19e549ae0ad3846a783c4b23d483bc6b96117b747ada5fb7dcd4f1c5af9917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.610188 kubelet[3427]: E1216 12:30:28.610080 3427 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf19e549ae0ad3846a783c4b23d483bc6b96117b747ada5fb7dcd4f1c5af9917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dcszz" Dec 16 12:30:28.610188 kubelet[3427]: E1216 12:30:28.610100 3427 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf19e549ae0ad3846a783c4b23d483bc6b96117b747ada5fb7dcd4f1c5af9917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dcszz" Dec 16 12:30:28.610368 kubelet[3427]: E1216 12:30:28.610274 3427 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-dcszz_calico-system(b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-dcszz_calico-system(b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf19e549ae0ad3846a783c4b23d483bc6b96117b747ada5fb7dcd4f1c5af9917\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-dcszz" podUID="b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb" Dec 16 12:30:28.618652 containerd[1906]: time="2025-12-16T12:30:28.618613872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57f6cb77d6-ws6zk,Uid:55a76304-4aea-4f15-bc1b-68bedd920d78,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"622954341f851352beb04f02c2cf519c1ade65199eb30f8db6e95da083273178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.619176 kubelet[3427]: E1216 12:30:28.619056 3427 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"622954341f851352beb04f02c2cf519c1ade65199eb30f8db6e95da083273178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.619176 kubelet[3427]: E1216 12:30:28.619110 3427 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"622954341f851352beb04f02c2cf519c1ade65199eb30f8db6e95da083273178\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57f6cb77d6-ws6zk" Dec 16 12:30:28.619176 kubelet[3427]: E1216 12:30:28.619128 3427 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"622954341f851352beb04f02c2cf519c1ade65199eb30f8db6e95da083273178\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57f6cb77d6-ws6zk" Dec 16 12:30:28.619510 kubelet[3427]: E1216 12:30:28.619473 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57f6cb77d6-ws6zk_calico-system(55a76304-4aea-4f15-bc1b-68bedd920d78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57f6cb77d6-ws6zk_calico-system(55a76304-4aea-4f15-bc1b-68bedd920d78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"622954341f851352beb04f02c2cf519c1ade65199eb30f8db6e95da083273178\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57f6cb77d6-ws6zk" podUID="55a76304-4aea-4f15-bc1b-68bedd920d78" Dec 16 12:30:28.624896 containerd[1906]: time="2025-12-16T12:30:28.624464245Z" level=error msg="Failed to destroy network for sandbox \"651afd0fbc22b956065c9c6510550aa37d251674046f42d059153542ab53cf19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.628838 containerd[1906]: time="2025-12-16T12:30:28.628793025Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gmt2x,Uid:5d133913-fb0a-455b-afab-c0825f0f11d8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"651afd0fbc22b956065c9c6510550aa37d251674046f42d059153542ab53cf19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.629770 kubelet[3427]: E1216 12:30:28.629741 3427 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"651afd0fbc22b956065c9c6510550aa37d251674046f42d059153542ab53cf19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 16 12:30:28.630008 kubelet[3427]: E1216 12:30:28.629872 3427 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"651afd0fbc22b956065c9c6510550aa37d251674046f42d059153542ab53cf19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gmt2x" Dec 16 12:30:28.630008 kubelet[3427]: E1216 12:30:28.629891 3427 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"651afd0fbc22b956065c9c6510550aa37d251674046f42d059153542ab53cf19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gmt2x" Dec 16 12:30:28.630237 kubelet[3427]: E1216 12:30:28.630205 3427 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gmt2x_calico-system(5d133913-fb0a-455b-afab-c0825f0f11d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gmt2x_calico-system(5d133913-fb0a-455b-afab-c0825f0f11d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"651afd0fbc22b956065c9c6510550aa37d251674046f42d059153542ab53cf19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gmt2x" podUID="5d133913-fb0a-455b-afab-c0825f0f11d8" Dec 16 12:30:29.477454 systemd[1]: run-netns-cni\x2dc5176141\x2d27c3\x2d0401\x2dfdd4\x2d60a0a7b58b76.mount: Deactivated successfully. Dec 16 12:30:29.477566 systemd[1]: run-netns-cni\x2df2bf21b7\x2de0f5\x2d41db\x2da4f3\x2d71cf00f074ce.mount: Deactivated successfully. Dec 16 12:30:29.477602 systemd[1]: run-netns-cni\x2d7b64c483\x2d0a3f\x2da415\x2d7a44\x2d9c5a01e3aec3.mount: Deactivated successfully. Dec 16 12:30:32.393583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2980783746.mount: Deactivated successfully. 
Dec 16 12:30:33.025196 containerd[1906]: time="2025-12-16T12:30:33.025116727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:33.029336 containerd[1906]: time="2025-12-16T12:30:33.029283271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Dec 16 12:30:33.032621 containerd[1906]: time="2025-12-16T12:30:33.032568751Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:33.036749 containerd[1906]: time="2025-12-16T12:30:33.036699254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 12:30:33.037261 containerd[1906]: time="2025-12-16T12:30:33.036958692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.432060003s" Dec 16 12:30:33.037261 containerd[1906]: time="2025-12-16T12:30:33.036990445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 16 12:30:33.059295 containerd[1906]: time="2025-12-16T12:30:33.059251361Z" level=info msg="CreateContainer within sandbox \"ce91caa59ea6503e1527fd32e24243b814fa4df79f64a414e8e9c144811f8668\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 16 12:30:33.088136 containerd[1906]: time="2025-12-16T12:30:33.086293477Z" level=info msg="Container 
bda32f6bd0dc80d083d1f06671e3ce19faab7c1beb6a5aedca5347fb85f4e6d6: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:33.090910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount311242026.mount: Deactivated successfully. Dec 16 12:30:33.110143 containerd[1906]: time="2025-12-16T12:30:33.110093106Z" level=info msg="CreateContainer within sandbox \"ce91caa59ea6503e1527fd32e24243b814fa4df79f64a414e8e9c144811f8668\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bda32f6bd0dc80d083d1f06671e3ce19faab7c1beb6a5aedca5347fb85f4e6d6\"" Dec 16 12:30:33.110796 containerd[1906]: time="2025-12-16T12:30:33.110750835Z" level=info msg="StartContainer for \"bda32f6bd0dc80d083d1f06671e3ce19faab7c1beb6a5aedca5347fb85f4e6d6\"" Dec 16 12:30:33.113214 containerd[1906]: time="2025-12-16T12:30:33.113076273Z" level=info msg="connecting to shim bda32f6bd0dc80d083d1f06671e3ce19faab7c1beb6a5aedca5347fb85f4e6d6" address="unix:///run/containerd/s/4675ea8074c6982982c7fd413997809fefea749b9f5472142853bfafc40a937f" protocol=ttrpc version=3 Dec 16 12:30:33.131329 systemd[1]: Started cri-containerd-bda32f6bd0dc80d083d1f06671e3ce19faab7c1beb6a5aedca5347fb85f4e6d6.scope - libcontainer container bda32f6bd0dc80d083d1f06671e3ce19faab7c1beb6a5aedca5347fb85f4e6d6. 
Dec 16 12:30:33.223140 containerd[1906]: time="2025-12-16T12:30:33.223031576Z" level=info msg="StartContainer for \"bda32f6bd0dc80d083d1f06671e3ce19faab7c1beb6a5aedca5347fb85f4e6d6\" returns successfully" Dec 16 12:30:33.640556 kubelet[3427]: I1216 12:30:33.640493 3427 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-82mnf" podStartSLOduration=1.435574323 podStartE2EDuration="15.640477278s" podCreationTimestamp="2025-12-16 12:30:18 +0000 UTC" firstStartedPulling="2025-12-16 12:30:18.832970626 +0000 UTC m=+20.454228191" lastFinishedPulling="2025-12-16 12:30:33.037873573 +0000 UTC m=+34.659131146" observedRunningTime="2025-12-16 12:30:33.639138971 +0000 UTC m=+35.260396536" watchObservedRunningTime="2025-12-16 12:30:33.640477278 +0000 UTC m=+35.261734843" Dec 16 12:30:33.936908 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 16 12:30:33.937048 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 16 12:30:34.101878 kubelet[3427]: I1216 12:30:34.101838 3427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55a76304-4aea-4f15-bc1b-68bedd920d78-whisker-ca-bundle\") pod \"55a76304-4aea-4f15-bc1b-68bedd920d78\" (UID: \"55a76304-4aea-4f15-bc1b-68bedd920d78\") " Dec 16 12:30:34.102031 kubelet[3427]: I1216 12:30:34.101915 3427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsb2d\" (UniqueName: \"kubernetes.io/projected/55a76304-4aea-4f15-bc1b-68bedd920d78-kube-api-access-nsb2d\") pod \"55a76304-4aea-4f15-bc1b-68bedd920d78\" (UID: \"55a76304-4aea-4f15-bc1b-68bedd920d78\") " Dec 16 12:30:34.102031 kubelet[3427]: I1216 12:30:34.101965 3427 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/55a76304-4aea-4f15-bc1b-68bedd920d78-whisker-backend-key-pair\") pod \"55a76304-4aea-4f15-bc1b-68bedd920d78\" (UID: \"55a76304-4aea-4f15-bc1b-68bedd920d78\") " Dec 16 12:30:34.102577 kubelet[3427]: I1216 12:30:34.102369 3427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55a76304-4aea-4f15-bc1b-68bedd920d78-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "55a76304-4aea-4f15-bc1b-68bedd920d78" (UID: "55a76304-4aea-4f15-bc1b-68bedd920d78"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 16 12:30:34.107052 systemd[1]: var-lib-kubelet-pods-55a76304\x2d4aea\x2d4f15\x2dbc1b\x2d68bedd920d78-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnsb2d.mount: Deactivated successfully. Dec 16 12:30:34.108325 systemd[1]: var-lib-kubelet-pods-55a76304\x2d4aea\x2d4f15\x2dbc1b\x2d68bedd920d78-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Dec 16 12:30:34.109976 kubelet[3427]: I1216 12:30:34.109917 3427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55a76304-4aea-4f15-bc1b-68bedd920d78-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "55a76304-4aea-4f15-bc1b-68bedd920d78" (UID: "55a76304-4aea-4f15-bc1b-68bedd920d78"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 16 12:30:34.111864 kubelet[3427]: I1216 12:30:34.111816 3427 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55a76304-4aea-4f15-bc1b-68bedd920d78-kube-api-access-nsb2d" (OuterVolumeSpecName: "kube-api-access-nsb2d") pod "55a76304-4aea-4f15-bc1b-68bedd920d78" (UID: "55a76304-4aea-4f15-bc1b-68bedd920d78"). InnerVolumeSpecName "kube-api-access-nsb2d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 16 12:30:34.202901 kubelet[3427]: I1216 12:30:34.202834 3427 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nsb2d\" (UniqueName: \"kubernetes.io/projected/55a76304-4aea-4f15-bc1b-68bedd920d78-kube-api-access-nsb2d\") on node \"ci-4459.2.2-a-e780e4b687\" DevicePath \"\"" Dec 16 12:30:34.202901 kubelet[3427]: I1216 12:30:34.202868 3427 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/55a76304-4aea-4f15-bc1b-68bedd920d78-whisker-backend-key-pair\") on node \"ci-4459.2.2-a-e780e4b687\" DevicePath \"\"" Dec 16 12:30:34.202901 kubelet[3427]: I1216 12:30:34.202877 3427 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55a76304-4aea-4f15-bc1b-68bedd920d78-whisker-ca-bundle\") on node \"ci-4459.2.2-a-e780e4b687\" DevicePath \"\"" Dec 16 12:30:34.484756 systemd[1]: Removed slice kubepods-besteffort-pod55a76304_4aea_4f15_bc1b_68bedd920d78.slice - libcontainer container 
kubepods-besteffort-pod55a76304_4aea_4f15_bc1b_68bedd920d78.slice. Dec 16 12:30:34.708157 systemd[1]: Created slice kubepods-besteffort-pod05c07d81_bd4b_4334_984c_fd96a5a648fa.slice - libcontainer container kubepods-besteffort-pod05c07d81_bd4b_4334_984c_fd96a5a648fa.slice. Dec 16 12:30:34.806144 kubelet[3427]: I1216 12:30:34.805992 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/05c07d81-bd4b-4334-984c-fd96a5a648fa-whisker-backend-key-pair\") pod \"whisker-65489c7c88-dxnd9\" (UID: \"05c07d81-bd4b-4334-984c-fd96a5a648fa\") " pod="calico-system/whisker-65489c7c88-dxnd9" Dec 16 12:30:34.806144 kubelet[3427]: I1216 12:30:34.806036 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05c07d81-bd4b-4334-984c-fd96a5a648fa-whisker-ca-bundle\") pod \"whisker-65489c7c88-dxnd9\" (UID: \"05c07d81-bd4b-4334-984c-fd96a5a648fa\") " pod="calico-system/whisker-65489c7c88-dxnd9" Dec 16 12:30:34.806144 kubelet[3427]: I1216 12:30:34.806197 3427 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4pzv\" (UniqueName: \"kubernetes.io/projected/05c07d81-bd4b-4334-984c-fd96a5a648fa-kube-api-access-t4pzv\") pod \"whisker-65489c7c88-dxnd9\" (UID: \"05c07d81-bd4b-4334-984c-fd96a5a648fa\") " pod="calico-system/whisker-65489c7c88-dxnd9" Dec 16 12:30:35.013659 containerd[1906]: time="2025-12-16T12:30:35.013623171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65489c7c88-dxnd9,Uid:05c07d81-bd4b-4334-984c-fd96a5a648fa,Namespace:calico-system,Attempt:0,}" Dec 16 12:30:35.129473 systemd-networkd[1469]: cali5b3d2866fe0: Link UP Dec 16 12:30:35.130626 systemd-networkd[1469]: cali5b3d2866fe0: Gained carrier Dec 16 12:30:35.148924 containerd[1906]: 2025-12-16 12:30:35.035 [INFO][4565] 
cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 16 12:30:35.148924 containerd[1906]: 2025-12-16 12:30:35.069 [INFO][4565] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--e780e4b687-k8s-whisker--65489c7c88--dxnd9-eth0 whisker-65489c7c88- calico-system 05c07d81-bd4b-4334-984c-fd96a5a648fa 899 0 2025-12-16 12:30:34 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:65489c7c88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.2-a-e780e4b687 whisker-65489c7c88-dxnd9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5b3d2866fe0 [] [] }} ContainerID="cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7" Namespace="calico-system" Pod="whisker-65489c7c88-dxnd9" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-whisker--65489c7c88--dxnd9-" Dec 16 12:30:35.148924 containerd[1906]: 2025-12-16 12:30:35.069 [INFO][4565] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7" Namespace="calico-system" Pod="whisker-65489c7c88-dxnd9" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-whisker--65489c7c88--dxnd9-eth0" Dec 16 12:30:35.148924 containerd[1906]: 2025-12-16 12:30:35.087 [INFO][4577] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7" HandleID="k8s-pod-network.cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7" Workload="ci--4459.2.2--a--e780e4b687-k8s-whisker--65489c7c88--dxnd9-eth0" Dec 16 12:30:35.149340 containerd[1906]: 2025-12-16 12:30:35.088 [INFO][4577] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7" 
HandleID="k8s-pod-network.cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7" Workload="ci--4459.2.2--a--e780e4b687-k8s-whisker--65489c7c88--dxnd9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b170), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-e780e4b687", "pod":"whisker-65489c7c88-dxnd9", "timestamp":"2025-12-16 12:30:35.087994375 +0000 UTC"}, Hostname:"ci-4459.2.2-a-e780e4b687", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:30:35.149340 containerd[1906]: 2025-12-16 12:30:35.088 [INFO][4577] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:30:35.149340 containerd[1906]: 2025-12-16 12:30:35.088 [INFO][4577] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:30:35.149340 containerd[1906]: 2025-12-16 12:30:35.088 [INFO][4577] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-e780e4b687' Dec 16 12:30:35.149340 containerd[1906]: 2025-12-16 12:30:35.095 [INFO][4577] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:35.149340 containerd[1906]: 2025-12-16 12:30:35.098 [INFO][4577] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:35.149340 containerd[1906]: 2025-12-16 12:30:35.101 [INFO][4577] ipam/ipam.go 511: Trying affinity for 192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:35.149340 containerd[1906]: 2025-12-16 12:30:35.103 [INFO][4577] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:35.149340 containerd[1906]: 2025-12-16 12:30:35.104 [INFO][4577] ipam/ipam.go 235: Affinity is confirmed and block has 
been loaded cidr=192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:35.149495 containerd[1906]: 2025-12-16 12:30:35.104 [INFO][4577] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.128/26 handle="k8s-pod-network.cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:35.149495 containerd[1906]: 2025-12-16 12:30:35.105 [INFO][4577] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7 Dec 16 12:30:35.149495 containerd[1906]: 2025-12-16 12:30:35.109 [INFO][4577] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.128/26 handle="k8s-pod-network.cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:35.149495 containerd[1906]: 2025-12-16 12:30:35.118 [INFO][4577] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.7.129/26] block=192.168.7.128/26 handle="k8s-pod-network.cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:35.149495 containerd[1906]: 2025-12-16 12:30:35.118 [INFO][4577] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.129/26] handle="k8s-pod-network.cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:35.149495 containerd[1906]: 2025-12-16 12:30:35.118 [INFO][4577] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:30:35.149495 containerd[1906]: 2025-12-16 12:30:35.118 [INFO][4577] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.129/26] IPv6=[] ContainerID="cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7" HandleID="k8s-pod-network.cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7" Workload="ci--4459.2.2--a--e780e4b687-k8s-whisker--65489c7c88--dxnd9-eth0" Dec 16 12:30:35.149589 containerd[1906]: 2025-12-16 12:30:35.121 [INFO][4565] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7" Namespace="calico-system" Pod="whisker-65489c7c88-dxnd9" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-whisker--65489c7c88--dxnd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--e780e4b687-k8s-whisker--65489c7c88--dxnd9-eth0", GenerateName:"whisker-65489c7c88-", Namespace:"calico-system", SelfLink:"", UID:"05c07d81-bd4b-4334-984c-fd96a5a648fa", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65489c7c88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-e780e4b687", ContainerID:"", Pod:"whisker-65489c7c88-dxnd9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.7.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali5b3d2866fe0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:35.149589 containerd[1906]: 2025-12-16 12:30:35.121 [INFO][4565] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.129/32] ContainerID="cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7" Namespace="calico-system" Pod="whisker-65489c7c88-dxnd9" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-whisker--65489c7c88--dxnd9-eth0" Dec 16 12:30:35.149635 containerd[1906]: 2025-12-16 12:30:35.121 [INFO][4565] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b3d2866fe0 ContainerID="cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7" Namespace="calico-system" Pod="whisker-65489c7c88-dxnd9" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-whisker--65489c7c88--dxnd9-eth0" Dec 16 12:30:35.149635 containerd[1906]: 2025-12-16 12:30:35.131 [INFO][4565] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7" Namespace="calico-system" Pod="whisker-65489c7c88-dxnd9" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-whisker--65489c7c88--dxnd9-eth0" Dec 16 12:30:35.149665 containerd[1906]: 2025-12-16 12:30:35.131 [INFO][4565] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7" Namespace="calico-system" Pod="whisker-65489c7c88-dxnd9" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-whisker--65489c7c88--dxnd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--e780e4b687-k8s-whisker--65489c7c88--dxnd9-eth0", GenerateName:"whisker-65489c7c88-", Namespace:"calico-system", SelfLink:"", 
UID:"05c07d81-bd4b-4334-984c-fd96a5a648fa", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65489c7c88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-e780e4b687", ContainerID:"cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7", Pod:"whisker-65489c7c88-dxnd9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.7.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5b3d2866fe0", MAC:"82:67:9f:cf:03:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:35.149697 containerd[1906]: 2025-12-16 12:30:35.147 [INFO][4565] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7" Namespace="calico-system" Pod="whisker-65489c7c88-dxnd9" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-whisker--65489c7c88--dxnd9-eth0" Dec 16 12:30:35.188066 containerd[1906]: time="2025-12-16T12:30:35.187992180Z" level=info msg="connecting to shim cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7" address="unix:///run/containerd/s/cd8efa419c8ecfddc92f8e2530f3665a57f64ef4cca4a40ca40380dfabdb1ce7" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:35.212385 systemd[1]: Started 
cri-containerd-cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7.scope - libcontainer container cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7. Dec 16 12:30:35.307047 containerd[1906]: time="2025-12-16T12:30:35.306924238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65489c7c88-dxnd9,Uid:05c07d81-bd4b-4334-984c-fd96a5a648fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc893b0e59ccd517f9ee63abae69ba0595513be8897092d724ee8afd4ddc43f7\"" Dec 16 12:30:35.309398 containerd[1906]: time="2025-12-16T12:30:35.309362663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:30:35.579168 containerd[1906]: time="2025-12-16T12:30:35.579113609Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:35.584399 containerd[1906]: time="2025-12-16T12:30:35.584280082Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:30:35.584399 containerd[1906]: time="2025-12-16T12:30:35.584336524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:30:35.586471 kubelet[3427]: E1216 12:30:35.586423 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:30:35.586535 kubelet[3427]: E1216 12:30:35.586491 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:30:35.594089 kubelet[3427]: E1216 12:30:35.594015 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d4d5681624114e43967285ec59e0b032,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t4pzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65489c7c88-dxnd9_calico-system(05c07d81-bd4b-4334-984c-fd96a5a648fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:35.596143 containerd[1906]: time="2025-12-16T12:30:35.596113420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:30:35.729637 systemd-networkd[1469]: vxlan.calico: Link UP Dec 16 12:30:35.729642 systemd-networkd[1469]: vxlan.calico: Gained carrier Dec 16 12:30:35.921868 containerd[1906]: time="2025-12-16T12:30:35.921739505Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:35.925810 containerd[1906]: time="2025-12-16T12:30:35.925759667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:30:35.925973 containerd[1906]: time="2025-12-16T12:30:35.925780556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:30:35.926011 kubelet[3427]: E1216 12:30:35.925976 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:30:35.926289 kubelet[3427]: E1216 12:30:35.926022 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:30:35.926312 kubelet[3427]: E1216 12:30:35.926132 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4pzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevice
s:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65489c7c88-dxnd9_calico-system(05c07d81-bd4b-4334-984c-fd96a5a648fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:35.927433 kubelet[3427]: E1216 12:30:35.927366 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65489c7c88-dxnd9" podUID="05c07d81-bd4b-4334-984c-fd96a5a648fa" Dec 16 12:30:36.477128 kubelet[3427]: I1216 12:30:36.476970 3427 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55a76304-4aea-4f15-bc1b-68bedd920d78" path="/var/lib/kubelet/pods/55a76304-4aea-4f15-bc1b-68bedd920d78/volumes" Dec 16 12:30:36.626861 kubelet[3427]: E1216 12:30:36.626696 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65489c7c88-dxnd9" podUID="05c07d81-bd4b-4334-984c-fd96a5a648fa" Dec 16 12:30:36.775286 systemd-networkd[1469]: cali5b3d2866fe0: Gained IPv6LL Dec 16 12:30:37.415425 systemd-networkd[1469]: vxlan.calico: Gained IPv6LL Dec 16 12:30:39.475860 containerd[1906]: time="2025-12-16T12:30:39.475558713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587446bcc5-b8k9n,Uid:1831bbdd-6642-47ec-b6ce-03f07d23d2da,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:30:39.476255 containerd[1906]: time="2025-12-16T12:30:39.476089143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dcszz,Uid:b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb,Namespace:calico-system,Attempt:0,}" Dec 16 12:30:39.476287 containerd[1906]: time="2025-12-16T12:30:39.476275516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd89997db-lnfvm,Uid:d50bef5a-6902-49f4-92df-f935afcbb9ff,Namespace:calico-system,Attempt:0,}" Dec 16 12:30:39.646389 systemd-networkd[1469]: cali74083f93f4b: Link UP Dec 16 12:30:39.648655 systemd-networkd[1469]: cali74083f93f4b: Gained carrier Dec 16 12:30:39.670221 containerd[1906]: 2025-12-16 12:30:39.536 [INFO][4834] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--b8k9n-eth0 calico-apiserver-587446bcc5- 
calico-apiserver 1831bbdd-6642-47ec-b6ce-03f07d23d2da 823 0 2025-12-16 12:30:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:587446bcc5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-a-e780e4b687 calico-apiserver-587446bcc5-b8k9n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali74083f93f4b [] [] }} ContainerID="4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd" Namespace="calico-apiserver" Pod="calico-apiserver-587446bcc5-b8k9n" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--b8k9n-" Dec 16 12:30:39.670221 containerd[1906]: 2025-12-16 12:30:39.536 [INFO][4834] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd" Namespace="calico-apiserver" Pod="calico-apiserver-587446bcc5-b8k9n" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--b8k9n-eth0" Dec 16 12:30:39.670221 containerd[1906]: 2025-12-16 12:30:39.569 [INFO][4870] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd" HandleID="k8s-pod-network.4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd" Workload="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--b8k9n-eth0" Dec 16 12:30:39.670400 containerd[1906]: 2025-12-16 12:30:39.570 [INFO][4870] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd" HandleID="k8s-pod-network.4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd" Workload="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--b8k9n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x40002d3910), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-a-e780e4b687", "pod":"calico-apiserver-587446bcc5-b8k9n", "timestamp":"2025-12-16 12:30:39.569380569 +0000 UTC"}, Hostname:"ci-4459.2.2-a-e780e4b687", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:30:39.670400 containerd[1906]: 2025-12-16 12:30:39.570 [INFO][4870] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:30:39.670400 containerd[1906]: 2025-12-16 12:30:39.571 [INFO][4870] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:30:39.670400 containerd[1906]: 2025-12-16 12:30:39.571 [INFO][4870] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-e780e4b687' Dec 16 12:30:39.670400 containerd[1906]: 2025-12-16 12:30:39.584 [INFO][4870] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.670400 containerd[1906]: 2025-12-16 12:30:39.594 [INFO][4870] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.670400 containerd[1906]: 2025-12-16 12:30:39.602 [INFO][4870] ipam/ipam.go 511: Trying affinity for 192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.670400 containerd[1906]: 2025-12-16 12:30:39.605 [INFO][4870] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.670400 containerd[1906]: 2025-12-16 12:30:39.611 [INFO][4870] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.670910 containerd[1906]: 2025-12-16 12:30:39.612 [INFO][4870] ipam/ipam.go 1219: Attempting to assign 1 addresses from 
block block=192.168.7.128/26 handle="k8s-pod-network.4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.670910 containerd[1906]: 2025-12-16 12:30:39.613 [INFO][4870] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd Dec 16 12:30:39.670910 containerd[1906]: 2025-12-16 12:30:39.621 [INFO][4870] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.128/26 handle="k8s-pod-network.4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.670910 containerd[1906]: 2025-12-16 12:30:39.631 [INFO][4870] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.7.130/26] block=192.168.7.128/26 handle="k8s-pod-network.4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.670910 containerd[1906]: 2025-12-16 12:30:39.631 [INFO][4870] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.130/26] handle="k8s-pod-network.4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.670910 containerd[1906]: 2025-12-16 12:30:39.631 [INFO][4870] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:30:39.670910 containerd[1906]: 2025-12-16 12:30:39.631 [INFO][4870] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.130/26] IPv6=[] ContainerID="4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd" HandleID="k8s-pod-network.4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd" Workload="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--b8k9n-eth0" Dec 16 12:30:39.671237 containerd[1906]: 2025-12-16 12:30:39.634 [INFO][4834] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd" Namespace="calico-apiserver" Pod="calico-apiserver-587446bcc5-b8k9n" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--b8k9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--b8k9n-eth0", GenerateName:"calico-apiserver-587446bcc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"1831bbdd-6642-47ec-b6ce-03f07d23d2da", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"587446bcc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-e780e4b687", ContainerID:"", Pod:"calico-apiserver-587446bcc5-b8k9n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.7.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74083f93f4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:39.671287 containerd[1906]: 2025-12-16 12:30:39.634 [INFO][4834] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.130/32] ContainerID="4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd" Namespace="calico-apiserver" Pod="calico-apiserver-587446bcc5-b8k9n" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--b8k9n-eth0" Dec 16 12:30:39.671287 containerd[1906]: 2025-12-16 12:30:39.634 [INFO][4834] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74083f93f4b ContainerID="4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd" Namespace="calico-apiserver" Pod="calico-apiserver-587446bcc5-b8k9n" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--b8k9n-eth0" Dec 16 12:30:39.671287 containerd[1906]: 2025-12-16 12:30:39.650 [INFO][4834] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd" Namespace="calico-apiserver" Pod="calico-apiserver-587446bcc5-b8k9n" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--b8k9n-eth0" Dec 16 12:30:39.671328 containerd[1906]: 2025-12-16 12:30:39.650 [INFO][4834] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd" Namespace="calico-apiserver" Pod="calico-apiserver-587446bcc5-b8k9n" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--b8k9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--b8k9n-eth0", GenerateName:"calico-apiserver-587446bcc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"1831bbdd-6642-47ec-b6ce-03f07d23d2da", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"587446bcc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-e780e4b687", ContainerID:"4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd", Pod:"calico-apiserver-587446bcc5-b8k9n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali74083f93f4b", MAC:"e6:d9:ab:ab:59:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:39.671363 containerd[1906]: 2025-12-16 12:30:39.666 [INFO][4834] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd" Namespace="calico-apiserver" Pod="calico-apiserver-587446bcc5-b8k9n" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--b8k9n-eth0" Dec 16 12:30:39.731629 systemd-networkd[1469]: cali286450e826b: Link UP Dec 16 
12:30:39.732806 systemd-networkd[1469]: cali286450e826b: Gained carrier Dec 16 12:30:39.742233 containerd[1906]: time="2025-12-16T12:30:39.742137080Z" level=info msg="connecting to shim 4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd" address="unix:///run/containerd/s/63124a1899c7fe500a08266755e77f9365229fae5cb08daa0e0e894455f35f17" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:39.757219 containerd[1906]: 2025-12-16 12:30:39.569 [INFO][4854] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--e780e4b687-k8s-calico--kube--controllers--cd89997db--lnfvm-eth0 calico-kube-controllers-cd89997db- calico-system d50bef5a-6902-49f4-92df-f935afcbb9ff 822 0 2025-12-16 12:30:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cd89997db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.2-a-e780e4b687 calico-kube-controllers-cd89997db-lnfvm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali286450e826b [] [] }} ContainerID="b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2" Namespace="calico-system" Pod="calico-kube-controllers-cd89997db-lnfvm" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--kube--controllers--cd89997db--lnfvm-" Dec 16 12:30:39.757219 containerd[1906]: 2025-12-16 12:30:39.575 [INFO][4854] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2" Namespace="calico-system" Pod="calico-kube-controllers-cd89997db-lnfvm" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--kube--controllers--cd89997db--lnfvm-eth0" Dec 16 12:30:39.757219 containerd[1906]: 2025-12-16 12:30:39.608 [INFO][4884] ipam/ipam_plugin.go 227: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2" HandleID="k8s-pod-network.b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2" Workload="ci--4459.2.2--a--e780e4b687-k8s-calico--kube--controllers--cd89997db--lnfvm-eth0" Dec 16 12:30:39.757387 containerd[1906]: 2025-12-16 12:30:39.609 [INFO][4884] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2" HandleID="k8s-pod-network.b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2" Workload="ci--4459.2.2--a--e780e4b687-k8s-calico--kube--controllers--cd89997db--lnfvm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb7c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-e780e4b687", "pod":"calico-kube-controllers-cd89997db-lnfvm", "timestamp":"2025-12-16 12:30:39.608887457 +0000 UTC"}, Hostname:"ci-4459.2.2-a-e780e4b687", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:30:39.757387 containerd[1906]: 2025-12-16 12:30:39.609 [INFO][4884] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:30:39.757387 containerd[1906]: 2025-12-16 12:30:39.631 [INFO][4884] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:30:39.757387 containerd[1906]: 2025-12-16 12:30:39.631 [INFO][4884] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-e780e4b687' Dec 16 12:30:39.757387 containerd[1906]: 2025-12-16 12:30:39.684 [INFO][4884] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.757387 containerd[1906]: 2025-12-16 12:30:39.696 [INFO][4884] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.757387 containerd[1906]: 2025-12-16 12:30:39.700 [INFO][4884] ipam/ipam.go 511: Trying affinity for 192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.757387 containerd[1906]: 2025-12-16 12:30:39.702 [INFO][4884] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.757387 containerd[1906]: 2025-12-16 12:30:39.703 [INFO][4884] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.757528 containerd[1906]: 2025-12-16 12:30:39.703 [INFO][4884] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.128/26 handle="k8s-pod-network.b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.757528 containerd[1906]: 2025-12-16 12:30:39.705 [INFO][4884] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2 Dec 16 12:30:39.757528 containerd[1906]: 2025-12-16 12:30:39.712 [INFO][4884] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.128/26 handle="k8s-pod-network.b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.757528 containerd[1906]: 2025-12-16 12:30:39.719 [INFO][4884] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.7.131/26] block=192.168.7.128/26 handle="k8s-pod-network.b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.757528 containerd[1906]: 2025-12-16 12:30:39.719 [INFO][4884] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.131/26] handle="k8s-pod-network.b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.757528 containerd[1906]: 2025-12-16 12:30:39.719 [INFO][4884] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:30:39.757528 containerd[1906]: 2025-12-16 12:30:39.719 [INFO][4884] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.131/26] IPv6=[] ContainerID="b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2" HandleID="k8s-pod-network.b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2" Workload="ci--4459.2.2--a--e780e4b687-k8s-calico--kube--controllers--cd89997db--lnfvm-eth0" Dec 16 12:30:39.757667 containerd[1906]: 2025-12-16 12:30:39.725 [INFO][4854] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2" Namespace="calico-system" Pod="calico-kube-controllers-cd89997db-lnfvm" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--kube--controllers--cd89997db--lnfvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--e780e4b687-k8s-calico--kube--controllers--cd89997db--lnfvm-eth0", GenerateName:"calico-kube-controllers-cd89997db-", Namespace:"calico-system", SelfLink:"", UID:"d50bef5a-6902-49f4-92df-f935afcbb9ff", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd89997db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-e780e4b687", ContainerID:"", Pod:"calico-kube-controllers-cd89997db-lnfvm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.7.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali286450e826b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:39.757703 containerd[1906]: 2025-12-16 12:30:39.727 [INFO][4854] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.131/32] ContainerID="b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2" Namespace="calico-system" Pod="calico-kube-controllers-cd89997db-lnfvm" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--kube--controllers--cd89997db--lnfvm-eth0" Dec 16 12:30:39.757703 containerd[1906]: 2025-12-16 12:30:39.727 [INFO][4854] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali286450e826b ContainerID="b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2" Namespace="calico-system" Pod="calico-kube-controllers-cd89997db-lnfvm" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--kube--controllers--cd89997db--lnfvm-eth0" Dec 16 12:30:39.757703 containerd[1906]: 2025-12-16 12:30:39.734 [INFO][4854] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2" Namespace="calico-system" Pod="calico-kube-controllers-cd89997db-lnfvm" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--kube--controllers--cd89997db--lnfvm-eth0" Dec 16 12:30:39.757744 containerd[1906]: 2025-12-16 12:30:39.735 [INFO][4854] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2" Namespace="calico-system" Pod="calico-kube-controllers-cd89997db-lnfvm" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--kube--controllers--cd89997db--lnfvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--e780e4b687-k8s-calico--kube--controllers--cd89997db--lnfvm-eth0", GenerateName:"calico-kube-controllers-cd89997db-", Namespace:"calico-system", SelfLink:"", UID:"d50bef5a-6902-49f4-92df-f935afcbb9ff", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cd89997db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-e780e4b687", ContainerID:"b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2", Pod:"calico-kube-controllers-cd89997db-lnfvm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.7.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali286450e826b", MAC:"92:2d:7b:a6:41:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:39.758418 containerd[1906]: 2025-12-16 12:30:39.751 [INFO][4854] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2" Namespace="calico-system" Pod="calico-kube-controllers-cd89997db-lnfvm" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--kube--controllers--cd89997db--lnfvm-eth0" Dec 16 12:30:39.777334 systemd[1]: Started cri-containerd-4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd.scope - libcontainer container 4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd. Dec 16 12:30:39.804108 containerd[1906]: time="2025-12-16T12:30:39.804043945Z" level=info msg="connecting to shim b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2" address="unix:///run/containerd/s/3cf44b964ffef8ef29e6a234629cd9e75a8483d86b5718425745446822d35802" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:39.835660 containerd[1906]: time="2025-12-16T12:30:39.835611663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587446bcc5-b8k9n,Uid:1831bbdd-6642-47ec-b6ce-03f07d23d2da,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4c2280c985e400c46aca3d5d38baec111ef82fb218422b81d9961effe589f2bd\"" Dec 16 12:30:39.842203 containerd[1906]: time="2025-12-16T12:30:39.842063418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:30:39.858518 systemd[1]: Started cri-containerd-b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2.scope - libcontainer container b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2. 
Dec 16 12:30:39.858700 systemd-networkd[1469]: cali63e712fb198: Link UP Dec 16 12:30:39.860842 systemd-networkd[1469]: cali63e712fb198: Gained carrier Dec 16 12:30:39.880823 containerd[1906]: 2025-12-16 12:30:39.560 [INFO][4844] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--e780e4b687-k8s-goldmane--666569f655--dcszz-eth0 goldmane-666569f655- calico-system b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb 825 0 2025-12-16 12:30:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.2-a-e780e4b687 goldmane-666569f655-dcszz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali63e712fb198 [] [] }} ContainerID="3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971" Namespace="calico-system" Pod="goldmane-666569f655-dcszz" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-goldmane--666569f655--dcszz-" Dec 16 12:30:39.880823 containerd[1906]: 2025-12-16 12:30:39.560 [INFO][4844] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971" Namespace="calico-system" Pod="goldmane-666569f655-dcszz" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-goldmane--666569f655--dcszz-eth0" Dec 16 12:30:39.880823 containerd[1906]: 2025-12-16 12:30:39.618 [INFO][4877] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971" HandleID="k8s-pod-network.3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971" Workload="ci--4459.2.2--a--e780e4b687-k8s-goldmane--666569f655--dcszz-eth0" Dec 16 12:30:39.881050 containerd[1906]: 2025-12-16 12:30:39.619 [INFO][4877] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971" HandleID="k8s-pod-network.3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971" Workload="ci--4459.2.2--a--e780e4b687-k8s-goldmane--666569f655--dcszz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3890), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-e780e4b687", "pod":"goldmane-666569f655-dcszz", "timestamp":"2025-12-16 12:30:39.618541785 +0000 UTC"}, Hostname:"ci-4459.2.2-a-e780e4b687", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:30:39.881050 containerd[1906]: 2025-12-16 12:30:39.620 [INFO][4877] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:30:39.881050 containerd[1906]: 2025-12-16 12:30:39.719 [INFO][4877] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:30:39.881050 containerd[1906]: 2025-12-16 12:30:39.720 [INFO][4877] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-e780e4b687' Dec 16 12:30:39.881050 containerd[1906]: 2025-12-16 12:30:39.785 [INFO][4877] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.881050 containerd[1906]: 2025-12-16 12:30:39.801 [INFO][4877] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.881050 containerd[1906]: 2025-12-16 12:30:39.809 [INFO][4877] ipam/ipam.go 511: Trying affinity for 192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.881050 containerd[1906]: 2025-12-16 12:30:39.811 [INFO][4877] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.881050 containerd[1906]: 2025-12-16 12:30:39.815 [INFO][4877] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.881384 containerd[1906]: 2025-12-16 12:30:39.815 [INFO][4877] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.128/26 handle="k8s-pod-network.3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.881384 containerd[1906]: 2025-12-16 12:30:39.817 [INFO][4877] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971 Dec 16 12:30:39.881384 containerd[1906]: 2025-12-16 12:30:39.823 [INFO][4877] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.128/26 handle="k8s-pod-network.3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.881384 containerd[1906]: 2025-12-16 12:30:39.834 [INFO][4877] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.7.132/26] block=192.168.7.128/26 handle="k8s-pod-network.3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.881384 containerd[1906]: 2025-12-16 12:30:39.835 [INFO][4877] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.132/26] handle="k8s-pod-network.3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:39.881384 containerd[1906]: 2025-12-16 12:30:39.835 [INFO][4877] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:30:39.881384 containerd[1906]: 2025-12-16 12:30:39.835 [INFO][4877] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.132/26] IPv6=[] ContainerID="3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971" HandleID="k8s-pod-network.3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971" Workload="ci--4459.2.2--a--e780e4b687-k8s-goldmane--666569f655--dcszz-eth0" Dec 16 12:30:39.882255 containerd[1906]: 2025-12-16 12:30:39.844 [INFO][4844] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971" Namespace="calico-system" Pod="goldmane-666569f655-dcszz" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-goldmane--666569f655--dcszz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--e780e4b687-k8s-goldmane--666569f655--dcszz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-e780e4b687", ContainerID:"", Pod:"goldmane-666569f655-dcszz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.7.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali63e712fb198", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:39.882307 containerd[1906]: 2025-12-16 12:30:39.845 [INFO][4844] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.132/32] ContainerID="3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971" Namespace="calico-system" Pod="goldmane-666569f655-dcszz" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-goldmane--666569f655--dcszz-eth0" Dec 16 12:30:39.882307 containerd[1906]: 2025-12-16 12:30:39.845 [INFO][4844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63e712fb198 ContainerID="3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971" Namespace="calico-system" Pod="goldmane-666569f655-dcszz" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-goldmane--666569f655--dcszz-eth0" Dec 16 12:30:39.882307 containerd[1906]: 2025-12-16 12:30:39.860 [INFO][4844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971" Namespace="calico-system" Pod="goldmane-666569f655-dcszz" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-goldmane--666569f655--dcszz-eth0" Dec 16 12:30:39.882423 containerd[1906]: 2025-12-16 12:30:39.861 [INFO][4844] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971" Namespace="calico-system" Pod="goldmane-666569f655-dcszz" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-goldmane--666569f655--dcszz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--e780e4b687-k8s-goldmane--666569f655--dcszz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-e780e4b687", ContainerID:"3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971", Pod:"goldmane-666569f655-dcszz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.7.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali63e712fb198", MAC:"e2:4c:87:05:d5:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:39.882475 containerd[1906]: 2025-12-16 12:30:39.876 [INFO][4844] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971" Namespace="calico-system" Pod="goldmane-666569f655-dcszz" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-goldmane--666569f655--dcszz-eth0" Dec 16 12:30:40.021009 containerd[1906]: time="2025-12-16T12:30:40.020889713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cd89997db-lnfvm,Uid:d50bef5a-6902-49f4-92df-f935afcbb9ff,Namespace:calico-system,Attempt:0,} returns sandbox id \"b86175fd7636bc21d599e62dfcdc0974cb8aa0e58a61aa492bf7f76a4a30ddd2\"" Dec 16 12:30:40.057862 containerd[1906]: time="2025-12-16T12:30:40.057772979Z" level=info msg="connecting to shim 3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971" address="unix:///run/containerd/s/41de7ff6dda79f03c1a38a7db2bd81fc3fae98019ced3528fcb85e45af216f03" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:40.086452 systemd[1]: Started cri-containerd-3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971.scope - libcontainer container 3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971. 
Dec 16 12:30:40.142829 containerd[1906]: time="2025-12-16T12:30:40.142723400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dcszz,Uid:b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d81a53d39325fc4afd2d12d890f80867ee4cf670690142ad5b24196ceb86971\"" Dec 16 12:30:40.175344 containerd[1906]: time="2025-12-16T12:30:40.175297632Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:40.179082 containerd[1906]: time="2025-12-16T12:30:40.179012331Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:30:40.179082 containerd[1906]: time="2025-12-16T12:30:40.179053620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:30:40.179379 kubelet[3427]: E1216 12:30:40.179286 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:40.179379 kubelet[3427]: E1216 12:30:40.179338 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:40.181624 containerd[1906]: time="2025-12-16T12:30:40.181293855Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:30:40.181709 kubelet[3427]: E1216 12:30:40.179998 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrffh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-587446bcc5-b8k9n_calico-apiserver(1831bbdd-6642-47ec-b6ce-03f07d23d2da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:40.182944 kubelet[3427]: E1216 12:30:40.182908 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-b8k9n" podUID="1831bbdd-6642-47ec-b6ce-03f07d23d2da" Dec 16 12:30:40.464355 containerd[1906]: time="2025-12-16T12:30:40.464304897Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:40.469744 containerd[1906]: 
time="2025-12-16T12:30:40.469698384Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:30:40.469822 containerd[1906]: time="2025-12-16T12:30:40.469798579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:30:40.470030 kubelet[3427]: E1216 12:30:40.469980 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:30:40.470092 kubelet[3427]: E1216 12:30:40.470037 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:30:40.470373 containerd[1906]: time="2025-12-16T12:30:40.470343378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:30:40.471279 kubelet[3427]: E1216 12:30:40.471222 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2c5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cd89997db-lnfvm_calico-system(d50bef5a-6902-49f4-92df-f935afcbb9ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:40.472564 kubelet[3427]: E1216 12:30:40.472517 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd89997db-lnfvm" podUID="d50bef5a-6902-49f4-92df-f935afcbb9ff" Dec 16 12:30:40.476504 containerd[1906]: time="2025-12-16T12:30:40.475866084Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-587446bcc5-m2zzh,Uid:042d90c0-42d8-409c-add9-7c678aa9ba3e,Namespace:calico-apiserver,Attempt:0,}" Dec 16 12:30:40.476899 containerd[1906]: time="2025-12-16T12:30:40.476878351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gmt2x,Uid:5d133913-fb0a-455b-afab-c0825f0f11d8,Namespace:calico-system,Attempt:0,}" Dec 16 12:30:40.605485 systemd-networkd[1469]: cali0331bb9fbe8: Link UP Dec 16 12:30:40.605656 systemd-networkd[1469]: cali0331bb9fbe8: Gained carrier Dec 16 12:30:40.642112 kubelet[3427]: E1216 12:30:40.642079 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd89997db-lnfvm" podUID="d50bef5a-6902-49f4-92df-f935afcbb9ff" Dec 16 12:30:40.643815 containerd[1906]: 2025-12-16 12:30:40.534 [INFO][5059] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--m2zzh-eth0 calico-apiserver-587446bcc5- calico-apiserver 042d90c0-42d8-409c-add9-7c678aa9ba3e 826 0 2025-12-16 12:30:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:587446bcc5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.2-a-e780e4b687 calico-apiserver-587446bcc5-m2zzh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] 
cali0331bb9fbe8 [] [] }} ContainerID="b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0" Namespace="calico-apiserver" Pod="calico-apiserver-587446bcc5-m2zzh" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--m2zzh-" Dec 16 12:30:40.643815 containerd[1906]: 2025-12-16 12:30:40.534 [INFO][5059] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0" Namespace="calico-apiserver" Pod="calico-apiserver-587446bcc5-m2zzh" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--m2zzh-eth0" Dec 16 12:30:40.643815 containerd[1906]: 2025-12-16 12:30:40.560 [INFO][5083] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0" HandleID="k8s-pod-network.b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0" Workload="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--m2zzh-eth0" Dec 16 12:30:40.644103 containerd[1906]: 2025-12-16 12:30:40.561 [INFO][5083] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0" HandleID="k8s-pod-network.b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0" Workload="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--m2zzh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.2-a-e780e4b687", "pod":"calico-apiserver-587446bcc5-m2zzh", "timestamp":"2025-12-16 12:30:40.560859546 +0000 UTC"}, Hostname:"ci-4459.2.2-a-e780e4b687", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:30:40.644103 containerd[1906]: 2025-12-16 12:30:40.561 
[INFO][5083] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:30:40.644103 containerd[1906]: 2025-12-16 12:30:40.561 [INFO][5083] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:30:40.644103 containerd[1906]: 2025-12-16 12:30:40.561 [INFO][5083] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-e780e4b687' Dec 16 12:30:40.644103 containerd[1906]: 2025-12-16 12:30:40.566 [INFO][5083] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:40.644103 containerd[1906]: 2025-12-16 12:30:40.569 [INFO][5083] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:40.644103 containerd[1906]: 2025-12-16 12:30:40.572 [INFO][5083] ipam/ipam.go 511: Trying affinity for 192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:40.644103 containerd[1906]: 2025-12-16 12:30:40.574 [INFO][5083] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:40.644103 containerd[1906]: 2025-12-16 12:30:40.575 [INFO][5083] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:40.645456 containerd[1906]: 2025-12-16 12:30:40.575 [INFO][5083] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.128/26 handle="k8s-pod-network.b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:40.645456 containerd[1906]: 2025-12-16 12:30:40.576 [INFO][5083] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0 Dec 16 12:30:40.645456 containerd[1906]: 2025-12-16 12:30:40.585 [INFO][5083] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.128/26 
handle="k8s-pod-network.b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:40.645456 containerd[1906]: 2025-12-16 12:30:40.595 [INFO][5083] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.7.133/26] block=192.168.7.128/26 handle="k8s-pod-network.b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:40.645456 containerd[1906]: 2025-12-16 12:30:40.595 [INFO][5083] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.133/26] handle="k8s-pod-network.b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:40.645456 containerd[1906]: 2025-12-16 12:30:40.595 [INFO][5083] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:30:40.645456 containerd[1906]: 2025-12-16 12:30:40.595 [INFO][5083] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.133/26] IPv6=[] ContainerID="b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0" HandleID="k8s-pod-network.b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0" Workload="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--m2zzh-eth0" Dec 16 12:30:40.645559 containerd[1906]: 2025-12-16 12:30:40.600 [INFO][5059] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0" Namespace="calico-apiserver" Pod="calico-apiserver-587446bcc5-m2zzh" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--m2zzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--m2zzh-eth0", GenerateName:"calico-apiserver-587446bcc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"042d90c0-42d8-409c-add9-7c678aa9ba3e", ResourceVersion:"826", Generation:0, 
CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"587446bcc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-e780e4b687", ContainerID:"", Pod:"calico-apiserver-587446bcc5-m2zzh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.7.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0331bb9fbe8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:40.645599 containerd[1906]: 2025-12-16 12:30:40.600 [INFO][5059] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.133/32] ContainerID="b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0" Namespace="calico-apiserver" Pod="calico-apiserver-587446bcc5-m2zzh" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--m2zzh-eth0" Dec 16 12:30:40.645599 containerd[1906]: 2025-12-16 12:30:40.600 [INFO][5059] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0331bb9fbe8 ContainerID="b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0" Namespace="calico-apiserver" Pod="calico-apiserver-587446bcc5-m2zzh" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--m2zzh-eth0" Dec 16 12:30:40.645599 containerd[1906]: 2025-12-16 12:30:40.605 
[INFO][5059] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0" Namespace="calico-apiserver" Pod="calico-apiserver-587446bcc5-m2zzh" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--m2zzh-eth0" Dec 16 12:30:40.645642 containerd[1906]: 2025-12-16 12:30:40.606 [INFO][5059] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0" Namespace="calico-apiserver" Pod="calico-apiserver-587446bcc5-m2zzh" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--m2zzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--m2zzh-eth0", GenerateName:"calico-apiserver-587446bcc5-", Namespace:"calico-apiserver", SelfLink:"", UID:"042d90c0-42d8-409c-add9-7c678aa9ba3e", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"587446bcc5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-e780e4b687", ContainerID:"b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0", Pod:"calico-apiserver-587446bcc5-m2zzh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.7.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0331bb9fbe8", MAC:"3e:dc:fd:ec:9f:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:40.645677 containerd[1906]: 2025-12-16 12:30:40.636 [INFO][5059] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0" Namespace="calico-apiserver" Pod="calico-apiserver-587446bcc5-m2zzh" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-calico--apiserver--587446bcc5--m2zzh-eth0" Dec 16 12:30:40.645958 kubelet[3427]: E1216 12:30:40.645822 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-b8k9n" podUID="1831bbdd-6642-47ec-b6ce-03f07d23d2da" Dec 16 12:30:40.712093 containerd[1906]: time="2025-12-16T12:30:40.712018091Z" level=info msg="connecting to shim b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0" address="unix:///run/containerd/s/5375009a34fd82f0437a7c7bf1892ed300292bdbc79551b0dcb8c334fcdcecac" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:40.730852 systemd-networkd[1469]: cali1f8cf098104: Link UP Dec 16 12:30:40.733319 systemd-networkd[1469]: cali1f8cf098104: Gained carrier Dec 16 12:30:40.736312 containerd[1906]: time="2025-12-16T12:30:40.736280279Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 
16 12:30:40.740281 containerd[1906]: time="2025-12-16T12:30:40.740253696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:30:40.740725 containerd[1906]: time="2025-12-16T12:30:40.740515423Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:30:40.744678 kubelet[3427]: E1216 12:30:40.744323 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:30:40.744678 kubelet[3427]: E1216 12:30:40.744361 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:30:40.744678 kubelet[3427]: E1216 12:30:40.744460 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wv688,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dcszz_calico-system(b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:40.745769 kubelet[3427]: E1216 12:30:40.745729 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dcszz" podUID="b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb" Dec 16 12:30:40.752308 systemd[1]: Started cri-containerd-b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0.scope - libcontainer container b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0. 
Dec 16 12:30:40.762042 containerd[1906]: 2025-12-16 12:30:40.537 [INFO][5063] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--e780e4b687-k8s-csi--node--driver--gmt2x-eth0 csi-node-driver- calico-system 5d133913-fb0a-455b-afab-c0825f0f11d8 713 0 2025-12-16 12:30:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.2-a-e780e4b687 csi-node-driver-gmt2x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1f8cf098104 [] [] }} ContainerID="4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4" Namespace="calico-system" Pod="csi-node-driver-gmt2x" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-csi--node--driver--gmt2x-" Dec 16 12:30:40.762042 containerd[1906]: 2025-12-16 12:30:40.537 [INFO][5063] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4" Namespace="calico-system" Pod="csi-node-driver-gmt2x" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-csi--node--driver--gmt2x-eth0" Dec 16 12:30:40.762042 containerd[1906]: 2025-12-16 12:30:40.562 [INFO][5086] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4" HandleID="k8s-pod-network.4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4" Workload="ci--4459.2.2--a--e780e4b687-k8s-csi--node--driver--gmt2x-eth0" Dec 16 12:30:40.762235 containerd[1906]: 2025-12-16 12:30:40.562 [INFO][5086] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4" 
HandleID="k8s-pod-network.4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4" Workload="ci--4459.2.2--a--e780e4b687-k8s-csi--node--driver--gmt2x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.2-a-e780e4b687", "pod":"csi-node-driver-gmt2x", "timestamp":"2025-12-16 12:30:40.562601025 +0000 UTC"}, Hostname:"ci-4459.2.2-a-e780e4b687", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:30:40.762235 containerd[1906]: 2025-12-16 12:30:40.562 [INFO][5086] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:30:40.762235 containerd[1906]: 2025-12-16 12:30:40.595 [INFO][5086] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:30:40.762235 containerd[1906]: 2025-12-16 12:30:40.595 [INFO][5086] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-e780e4b687' Dec 16 12:30:40.762235 containerd[1906]: 2025-12-16 12:30:40.673 [INFO][5086] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:40.762235 containerd[1906]: 2025-12-16 12:30:40.681 [INFO][5086] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:40.762235 containerd[1906]: 2025-12-16 12:30:40.688 [INFO][5086] ipam/ipam.go 511: Trying affinity for 192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:40.762235 containerd[1906]: 2025-12-16 12:30:40.690 [INFO][5086] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:40.762235 containerd[1906]: 2025-12-16 12:30:40.693 [INFO][5086] ipam/ipam.go 235: Affinity is confirmed and block has been 
loaded cidr=192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:40.763012 containerd[1906]: 2025-12-16 12:30:40.693 [INFO][5086] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.128/26 handle="k8s-pod-network.4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:40.763012 containerd[1906]: 2025-12-16 12:30:40.695 [INFO][5086] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4 Dec 16 12:30:40.763012 containerd[1906]: 2025-12-16 12:30:40.708 [INFO][5086] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.128/26 handle="k8s-pod-network.4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:40.763012 containerd[1906]: 2025-12-16 12:30:40.720 [INFO][5086] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.7.134/26] block=192.168.7.128/26 handle="k8s-pod-network.4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:40.763012 containerd[1906]: 2025-12-16 12:30:40.720 [INFO][5086] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.134/26] handle="k8s-pod-network.4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:40.763012 containerd[1906]: 2025-12-16 12:30:40.720 [INFO][5086] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 16 12:30:40.763012 containerd[1906]: 2025-12-16 12:30:40.720 [INFO][5086] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.134/26] IPv6=[] ContainerID="4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4" HandleID="k8s-pod-network.4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4" Workload="ci--4459.2.2--a--e780e4b687-k8s-csi--node--driver--gmt2x-eth0" Dec 16 12:30:40.763118 containerd[1906]: 2025-12-16 12:30:40.724 [INFO][5063] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4" Namespace="calico-system" Pod="csi-node-driver-gmt2x" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-csi--node--driver--gmt2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--e780e4b687-k8s-csi--node--driver--gmt2x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5d133913-fb0a-455b-afab-c0825f0f11d8", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-e780e4b687", ContainerID:"", Pod:"csi-node-driver-gmt2x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.7.134/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1f8cf098104", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:40.763186 containerd[1906]: 2025-12-16 12:30:40.725 [INFO][5063] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.134/32] ContainerID="4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4" Namespace="calico-system" Pod="csi-node-driver-gmt2x" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-csi--node--driver--gmt2x-eth0" Dec 16 12:30:40.763186 containerd[1906]: 2025-12-16 12:30:40.725 [INFO][5063] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f8cf098104 ContainerID="4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4" Namespace="calico-system" Pod="csi-node-driver-gmt2x" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-csi--node--driver--gmt2x-eth0" Dec 16 12:30:40.763186 containerd[1906]: 2025-12-16 12:30:40.730 [INFO][5063] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4" Namespace="calico-system" Pod="csi-node-driver-gmt2x" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-csi--node--driver--gmt2x-eth0" Dec 16 12:30:40.763229 containerd[1906]: 2025-12-16 12:30:40.732 [INFO][5063] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4" Namespace="calico-system" Pod="csi-node-driver-gmt2x" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-csi--node--driver--gmt2x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--e780e4b687-k8s-csi--node--driver--gmt2x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"5d133913-fb0a-455b-afab-c0825f0f11d8", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 30, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-e780e4b687", ContainerID:"4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4", Pod:"csi-node-driver-gmt2x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.7.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1f8cf098104", MAC:"be:27:cb:30:58:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:40.763266 containerd[1906]: 2025-12-16 12:30:40.756 [INFO][5063] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4" Namespace="calico-system" Pod="csi-node-driver-gmt2x" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-csi--node--driver--gmt2x-eth0" Dec 16 12:30:40.807318 containerd[1906]: time="2025-12-16T12:30:40.807088101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-587446bcc5-m2zzh,Uid:042d90c0-42d8-409c-add9-7c678aa9ba3e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id 
\"b2a11a4da7de2e26477f94a269f609a591c7802184869eadbc47637969d775d0\"" Dec 16 12:30:40.809466 containerd[1906]: time="2025-12-16T12:30:40.809211133Z" level=info msg="connecting to shim 4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4" address="unix:///run/containerd/s/2f56586daf1f60d6e5418a1fcb21d0948cfee5236e6d7fbfd437c83de18e2661" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:40.810495 containerd[1906]: time="2025-12-16T12:30:40.810472007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:30:40.831318 systemd[1]: Started cri-containerd-4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4.scope - libcontainer container 4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4. Dec 16 12:30:40.857051 containerd[1906]: time="2025-12-16T12:30:40.857013001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gmt2x,Uid:5d133913-fb0a-455b-afab-c0825f0f11d8,Namespace:calico-system,Attempt:0,} returns sandbox id \"4eda6658af9c2dadf0efbeebf76590052d74c290fb8e0cce90ea74fbb9b13da4\"" Dec 16 12:30:40.935319 systemd-networkd[1469]: cali286450e826b: Gained IPv6LL Dec 16 12:30:41.063343 systemd-networkd[1469]: cali74083f93f4b: Gained IPv6LL Dec 16 12:30:41.086002 containerd[1906]: time="2025-12-16T12:30:41.085905256Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:41.089967 containerd[1906]: time="2025-12-16T12:30:41.089851393Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:30:41.089967 containerd[1906]: time="2025-12-16T12:30:41.089938379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 
12:30:41.090084 kubelet[3427]: E1216 12:30:41.090043 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:41.090116 kubelet[3427]: E1216 12:30:41.090091 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:41.091212 kubelet[3427]: E1216 12:30:41.090373 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqnqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-587446bcc5-m2zzh_calico-apiserver(042d90c0-42d8-409c-add9-7c678aa9ba3e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:41.091355 containerd[1906]: time="2025-12-16T12:30:41.090421344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:30:41.091552 kubelet[3427]: E1216 12:30:41.091499 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-m2zzh" podUID="042d90c0-42d8-409c-add9-7c678aa9ba3e" Dec 16 12:30:41.350489 containerd[1906]: time="2025-12-16T12:30:41.350440113Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:41.362322 containerd[1906]: time="2025-12-16T12:30:41.362273770Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:30:41.362431 containerd[1906]: time="2025-12-16T12:30:41.362409966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:30:41.362801 kubelet[3427]: E1216 12:30:41.362592 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:30:41.362801 kubelet[3427]: E1216 12:30:41.362643 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:30:41.362801 kubelet[3427]: E1216 12:30:41.362759 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6lgb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivileg
eEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gmt2x_calico-system(5d133913-fb0a-455b-afab-c0825f0f11d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:41.364807 containerd[1906]: time="2025-12-16T12:30:41.364776917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:30:41.475877 containerd[1906]: time="2025-12-16T12:30:41.475769443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vbkjn,Uid:5bd00760-7369-4a4e-8d38-12395d49edf0,Namespace:kube-system,Attempt:0,}" Dec 16 12:30:41.572998 systemd-networkd[1469]: cali2ed9fbe70b3: Link UP Dec 16 12:30:41.574108 systemd-networkd[1469]: cali2ed9fbe70b3: Gained carrier Dec 16 12:30:41.594062 containerd[1906]: 2025-12-16 12:30:41.516 [INFO][5212] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--vbkjn-eth0 coredns-674b8bbfcf- kube-system 5bd00760-7369-4a4e-8d38-12395d49edf0 820 0 2025-12-16 12:29:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-a-e780e4b687 coredns-674b8bbfcf-vbkjn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2ed9fbe70b3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] 
}} ContainerID="de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59" Namespace="kube-system" Pod="coredns-674b8bbfcf-vbkjn" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--vbkjn-" Dec 16 12:30:41.594062 containerd[1906]: 2025-12-16 12:30:41.516 [INFO][5212] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59" Namespace="kube-system" Pod="coredns-674b8bbfcf-vbkjn" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--vbkjn-eth0" Dec 16 12:30:41.594062 containerd[1906]: 2025-12-16 12:30:41.534 [INFO][5223] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59" HandleID="k8s-pod-network.de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59" Workload="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--vbkjn-eth0" Dec 16 12:30:41.595435 containerd[1906]: 2025-12-16 12:30:41.535 [INFO][5223] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59" HandleID="k8s-pod-network.de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59" Workload="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--vbkjn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-a-e780e4b687", "pod":"coredns-674b8bbfcf-vbkjn", "timestamp":"2025-12-16 12:30:41.534793315 +0000 UTC"}, Hostname:"ci-4459.2.2-a-e780e4b687", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:30:41.595435 containerd[1906]: 2025-12-16 12:30:41.535 [INFO][5223] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 16 12:30:41.595435 containerd[1906]: 2025-12-16 12:30:41.535 [INFO][5223] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 16 12:30:41.595435 containerd[1906]: 2025-12-16 12:30:41.535 [INFO][5223] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-e780e4b687' Dec 16 12:30:41.595435 containerd[1906]: 2025-12-16 12:30:41.541 [INFO][5223] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:41.595435 containerd[1906]: 2025-12-16 12:30:41.544 [INFO][5223] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:41.595435 containerd[1906]: 2025-12-16 12:30:41.547 [INFO][5223] ipam/ipam.go 511: Trying affinity for 192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:41.595435 containerd[1906]: 2025-12-16 12:30:41.549 [INFO][5223] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:41.595435 containerd[1906]: 2025-12-16 12:30:41.551 [INFO][5223] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:41.596068 containerd[1906]: 2025-12-16 12:30:41.551 [INFO][5223] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.128/26 handle="k8s-pod-network.de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:41.596068 containerd[1906]: 2025-12-16 12:30:41.552 [INFO][5223] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59 Dec 16 12:30:41.596068 containerd[1906]: 2025-12-16 12:30:41.558 [INFO][5223] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.128/26 handle="k8s-pod-network.de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59" 
host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:41.596068 containerd[1906]: 2025-12-16 12:30:41.567 [INFO][5223] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.7.135/26] block=192.168.7.128/26 handle="k8s-pod-network.de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:41.596068 containerd[1906]: 2025-12-16 12:30:41.567 [INFO][5223] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.135/26] handle="k8s-pod-network.de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:41.596068 containerd[1906]: 2025-12-16 12:30:41.567 [INFO][5223] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:30:41.596068 containerd[1906]: 2025-12-16 12:30:41.567 [INFO][5223] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.135/26] IPv6=[] ContainerID="de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59" HandleID="k8s-pod-network.de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59" Workload="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--vbkjn-eth0" Dec 16 12:30:41.596195 containerd[1906]: 2025-12-16 12:30:41.569 [INFO][5212] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59" Namespace="kube-system" Pod="coredns-674b8bbfcf-vbkjn" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--vbkjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--vbkjn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5bd00760-7369-4a4e-8d38-12395d49edf0", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-e780e4b687", ContainerID:"", Pod:"coredns-674b8bbfcf-vbkjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ed9fbe70b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:41.596195 containerd[1906]: 2025-12-16 12:30:41.569 [INFO][5212] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.135/32] ContainerID="de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59" Namespace="kube-system" Pod="coredns-674b8bbfcf-vbkjn" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--vbkjn-eth0" Dec 16 12:30:41.596195 containerd[1906]: 2025-12-16 12:30:41.570 [INFO][5212] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ed9fbe70b3 ContainerID="de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59" Namespace="kube-system" Pod="coredns-674b8bbfcf-vbkjn" 
WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--vbkjn-eth0" Dec 16 12:30:41.596195 containerd[1906]: 2025-12-16 12:30:41.574 [INFO][5212] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59" Namespace="kube-system" Pod="coredns-674b8bbfcf-vbkjn" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--vbkjn-eth0" Dec 16 12:30:41.596195 containerd[1906]: 2025-12-16 12:30:41.575 [INFO][5212] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59" Namespace="kube-system" Pod="coredns-674b8bbfcf-vbkjn" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--vbkjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--vbkjn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5bd00760-7369-4a4e-8d38-12395d49edf0", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-e780e4b687", ContainerID:"de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59", Pod:"coredns-674b8bbfcf-vbkjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.135/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ed9fbe70b3", MAC:"62:3e:1f:11:21:60", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:41.596195 containerd[1906]: 2025-12-16 12:30:41.590 [INFO][5212] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59" Namespace="kube-system" Pod="coredns-674b8bbfcf-vbkjn" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--vbkjn-eth0" Dec 16 12:30:41.640343 containerd[1906]: time="2025-12-16T12:30:41.639712703Z" level=info msg="connecting to shim de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59" address="unix:///run/containerd/s/c22324914d8951c13cd7e5c4b9c9ea12acae759a8f1e9739f0622276fca640c2" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:41.653959 kubelet[3427]: E1216 12:30:41.653286 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-m2zzh" 
podUID="042d90c0-42d8-409c-add9-7c678aa9ba3e" Dec 16 12:30:41.653959 kubelet[3427]: E1216 12:30:41.653604 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd89997db-lnfvm" podUID="d50bef5a-6902-49f4-92df-f935afcbb9ff" Dec 16 12:30:41.653959 kubelet[3427]: E1216 12:30:41.653744 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dcszz" podUID="b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb" Dec 16 12:30:41.657374 kubelet[3427]: E1216 12:30:41.657342 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-b8k9n" podUID="1831bbdd-6642-47ec-b6ce-03f07d23d2da" Dec 16 
12:30:41.675238 containerd[1906]: time="2025-12-16T12:30:41.675189125Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:41.680671 containerd[1906]: time="2025-12-16T12:30:41.680570689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:30:41.680671 containerd[1906]: time="2025-12-16T12:30:41.680643827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:30:41.680978 kubelet[3427]: E1216 12:30:41.680934 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:30:41.681110 kubelet[3427]: E1216 12:30:41.680982 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:30:41.681603 kubelet[3427]: E1216 12:30:41.681540 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6lgb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gmt2x_calico-system(5d133913-fb0a-455b-afab-c0825f0f11d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:41.682392 systemd[1]: Started cri-containerd-de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59.scope - libcontainer container de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59. Dec 16 12:30:41.682755 kubelet[3427]: E1216 12:30:41.682703 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gmt2x" podUID="5d133913-fb0a-455b-afab-c0825f0f11d8" Dec 16 12:30:41.739749 containerd[1906]: time="2025-12-16T12:30:41.739302889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vbkjn,Uid:5bd00760-7369-4a4e-8d38-12395d49edf0,Namespace:kube-system,Attempt:0,} returns sandbox id \"de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59\"" Dec 16 12:30:41.748994 containerd[1906]: time="2025-12-16T12:30:41.748948755Z" level=info msg="CreateContainer within sandbox \"de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:30:41.767358 systemd-networkd[1469]: cali1f8cf098104: Gained IPv6LL Dec 16 12:30:41.772005 containerd[1906]: time="2025-12-16T12:30:41.771364367Z" level=info 
msg="Container 664670142f9f34449156735adc449ea500ce3fb09fbe6943df3a1727168d779c: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:41.784620 containerd[1906]: time="2025-12-16T12:30:41.784579861Z" level=info msg="CreateContainer within sandbox \"de25d684eb01854f3c547a5690e5f81264fab7daf66c41d06962b236b3303e59\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"664670142f9f34449156735adc449ea500ce3fb09fbe6943df3a1727168d779c\"" Dec 16 12:30:41.785477 containerd[1906]: time="2025-12-16T12:30:41.785454356Z" level=info msg="StartContainer for \"664670142f9f34449156735adc449ea500ce3fb09fbe6943df3a1727168d779c\"" Dec 16 12:30:41.786998 containerd[1906]: time="2025-12-16T12:30:41.786874937Z" level=info msg="connecting to shim 664670142f9f34449156735adc449ea500ce3fb09fbe6943df3a1727168d779c" address="unix:///run/containerd/s/c22324914d8951c13cd7e5c4b9c9ea12acae759a8f1e9739f0622276fca640c2" protocol=ttrpc version=3 Dec 16 12:30:41.803373 systemd[1]: Started cri-containerd-664670142f9f34449156735adc449ea500ce3fb09fbe6943df3a1727168d779c.scope - libcontainer container 664670142f9f34449156735adc449ea500ce3fb09fbe6943df3a1727168d779c. Dec 16 12:30:41.834668 containerd[1906]: time="2025-12-16T12:30:41.834628797Z" level=info msg="StartContainer for \"664670142f9f34449156735adc449ea500ce3fb09fbe6943df3a1727168d779c\" returns successfully" Dec 16 12:30:41.895332 systemd-networkd[1469]: cali63e712fb198: Gained IPv6LL Dec 16 12:30:42.484447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3768305139.mount: Deactivated successfully. 
Dec 16 12:30:42.535327 systemd-networkd[1469]: cali0331bb9fbe8: Gained IPv6LL Dec 16 12:30:42.654858 kubelet[3427]: E1216 12:30:42.654750 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gmt2x" podUID="5d133913-fb0a-455b-afab-c0825f0f11d8" Dec 16 12:30:42.655875 kubelet[3427]: E1216 12:30:42.655840 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-m2zzh" podUID="042d90c0-42d8-409c-add9-7c678aa9ba3e" Dec 16 12:30:42.690497 kubelet[3427]: I1216 12:30:42.690364 3427 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vbkjn" podStartSLOduration=43.690347879 
podStartE2EDuration="43.690347879s" podCreationTimestamp="2025-12-16 12:29:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:30:42.689809601 +0000 UTC m=+44.311067198" watchObservedRunningTime="2025-12-16 12:30:42.690347879 +0000 UTC m=+44.311605444" Dec 16 12:30:42.919382 systemd-networkd[1469]: cali2ed9fbe70b3: Gained IPv6LL Dec 16 12:30:43.475995 containerd[1906]: time="2025-12-16T12:30:43.475671642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r9s4g,Uid:22fc970d-0749-440b-91ca-bc8521b4622e,Namespace:kube-system,Attempt:0,}" Dec 16 12:30:43.573414 systemd-networkd[1469]: cali1ccee84ae67: Link UP Dec 16 12:30:43.573597 systemd-networkd[1469]: cali1ccee84ae67: Gained carrier Dec 16 12:30:43.588328 containerd[1906]: 2025-12-16 12:30:43.511 [INFO][5325] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--r9s4g-eth0 coredns-674b8bbfcf- kube-system 22fc970d-0749-440b-91ca-bc8521b4622e 821 0 2025-12-16 12:29:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.2-a-e780e4b687 coredns-674b8bbfcf-r9s4g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1ccee84ae67 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9s4g" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--r9s4g-" Dec 16 12:30:43.588328 containerd[1906]: 2025-12-16 12:30:43.511 [INFO][5325] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-r9s4g" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--r9s4g-eth0" Dec 16 12:30:43.588328 containerd[1906]: 2025-12-16 12:30:43.530 [INFO][5337] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44" HandleID="k8s-pod-network.57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44" Workload="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--r9s4g-eth0" Dec 16 12:30:43.588328 containerd[1906]: 2025-12-16 12:30:43.530 [INFO][5337] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44" HandleID="k8s-pod-network.57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44" Workload="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--r9s4g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.2-a-e780e4b687", "pod":"coredns-674b8bbfcf-r9s4g", "timestamp":"2025-12-16 12:30:43.530287848 +0000 UTC"}, Hostname:"ci-4459.2.2-a-e780e4b687", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 16 12:30:43.588328 containerd[1906]: 2025-12-16 12:30:43.530 [INFO][5337] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 16 12:30:43.588328 containerd[1906]: 2025-12-16 12:30:43.530 [INFO][5337] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 16 12:30:43.588328 containerd[1906]: 2025-12-16 12:30:43.530 [INFO][5337] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.2-a-e780e4b687' Dec 16 12:30:43.588328 containerd[1906]: 2025-12-16 12:30:43.535 [INFO][5337] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:43.588328 containerd[1906]: 2025-12-16 12:30:43.539 [INFO][5337] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:43.588328 containerd[1906]: 2025-12-16 12:30:43.542 [INFO][5337] ipam/ipam.go 511: Trying affinity for 192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:43.588328 containerd[1906]: 2025-12-16 12:30:43.543 [INFO][5337] ipam/ipam.go 158: Attempting to load block cidr=192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:43.588328 containerd[1906]: 2025-12-16 12:30:43.545 [INFO][5337] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.7.128/26 host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:43.588328 containerd[1906]: 2025-12-16 12:30:43.545 [INFO][5337] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.7.128/26 handle="k8s-pod-network.57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:43.588328 containerd[1906]: 2025-12-16 12:30:43.546 [INFO][5337] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44 Dec 16 12:30:43.588328 containerd[1906]: 2025-12-16 12:30:43.554 [INFO][5337] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.7.128/26 handle="k8s-pod-network.57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:43.588328 containerd[1906]: 2025-12-16 12:30:43.561 [INFO][5337] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.7.136/26] block=192.168.7.128/26 handle="k8s-pod-network.57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:43.588328 containerd[1906]: 2025-12-16 12:30:43.562 [INFO][5337] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.7.136/26] handle="k8s-pod-network.57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44" host="ci-4459.2.2-a-e780e4b687" Dec 16 12:30:43.588328 containerd[1906]: 2025-12-16 12:30:43.562 [INFO][5337] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 16 12:30:43.588328 containerd[1906]: 2025-12-16 12:30:43.562 [INFO][5337] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.7.136/26] IPv6=[] ContainerID="57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44" HandleID="k8s-pod-network.57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44" Workload="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--r9s4g-eth0" Dec 16 12:30:43.589143 containerd[1906]: 2025-12-16 12:30:43.565 [INFO][5325] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9s4g" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--r9s4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--r9s4g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"22fc970d-0749-440b-91ca-bc8521b4622e", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-e780e4b687", ContainerID:"", Pod:"coredns-674b8bbfcf-r9s4g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ccee84ae67", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:43.589143 containerd[1906]: 2025-12-16 12:30:43.565 [INFO][5325] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.7.136/32] ContainerID="57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9s4g" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--r9s4g-eth0" Dec 16 12:30:43.589143 containerd[1906]: 2025-12-16 12:30:43.565 [INFO][5325] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ccee84ae67 ContainerID="57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9s4g" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--r9s4g-eth0" Dec 16 12:30:43.589143 containerd[1906]: 2025-12-16 12:30:43.573 [INFO][5325] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9s4g" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--r9s4g-eth0" Dec 16 12:30:43.589143 containerd[1906]: 2025-12-16 12:30:43.574 [INFO][5325] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9s4g" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--r9s4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--r9s4g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"22fc970d-0749-440b-91ca-bc8521b4622e", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.December, 16, 12, 29, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.2-a-e780e4b687", ContainerID:"57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44", Pod:"coredns-674b8bbfcf-r9s4g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.7.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1ccee84ae67", 
MAC:"9e:49:8c:92:9a:dc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 16 12:30:43.589143 containerd[1906]: 2025-12-16 12:30:43.584 [INFO][5325] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44" Namespace="kube-system" Pod="coredns-674b8bbfcf-r9s4g" WorkloadEndpoint="ci--4459.2.2--a--e780e4b687-k8s-coredns--674b8bbfcf--r9s4g-eth0" Dec 16 12:30:43.647956 containerd[1906]: time="2025-12-16T12:30:43.647589005Z" level=info msg="connecting to shim 57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44" address="unix:///run/containerd/s/a3267937977feff848e991bfb98a5ab1f5e2c776d77e0e5410841ea15f0c1b16" namespace=k8s.io protocol=ttrpc version=3 Dec 16 12:30:43.667318 systemd[1]: Started cri-containerd-57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44.scope - libcontainer container 57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44. 
Dec 16 12:30:43.701994 containerd[1906]: time="2025-12-16T12:30:43.701944612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r9s4g,Uid:22fc970d-0749-440b-91ca-bc8521b4622e,Namespace:kube-system,Attempt:0,} returns sandbox id \"57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44\"" Dec 16 12:30:43.711205 containerd[1906]: time="2025-12-16T12:30:43.711125802Z" level=info msg="CreateContainer within sandbox \"57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 12:30:43.738240 containerd[1906]: time="2025-12-16T12:30:43.736663767Z" level=info msg="Container dd02d6de66f1fb6b312aed8e9c8f838da42893532a7298a93ef2834cb451a35f: CDI devices from CRI Config.CDIDevices: []" Dec 16 12:30:43.741923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount867426299.mount: Deactivated successfully. Dec 16 12:30:43.755764 containerd[1906]: time="2025-12-16T12:30:43.755647571Z" level=info msg="CreateContainer within sandbox \"57d01d9efa2e435c20bb018e79c8325f23d90305d7866487370ca1f4bdb58f44\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dd02d6de66f1fb6b312aed8e9c8f838da42893532a7298a93ef2834cb451a35f\"" Dec 16 12:30:43.756458 containerd[1906]: time="2025-12-16T12:30:43.756429007Z" level=info msg="StartContainer for \"dd02d6de66f1fb6b312aed8e9c8f838da42893532a7298a93ef2834cb451a35f\"" Dec 16 12:30:43.757719 containerd[1906]: time="2025-12-16T12:30:43.757696888Z" level=info msg="connecting to shim dd02d6de66f1fb6b312aed8e9c8f838da42893532a7298a93ef2834cb451a35f" address="unix:///run/containerd/s/a3267937977feff848e991bfb98a5ab1f5e2c776d77e0e5410841ea15f0c1b16" protocol=ttrpc version=3 Dec 16 12:30:43.780415 systemd[1]: Started cri-containerd-dd02d6de66f1fb6b312aed8e9c8f838da42893532a7298a93ef2834cb451a35f.scope - libcontainer container dd02d6de66f1fb6b312aed8e9c8f838da42893532a7298a93ef2834cb451a35f. 
Dec 16 12:30:43.819749 containerd[1906]: time="2025-12-16T12:30:43.819660492Z" level=info msg="StartContainer for \"dd02d6de66f1fb6b312aed8e9c8f838da42893532a7298a93ef2834cb451a35f\" returns successfully" Dec 16 12:30:44.674642 kubelet[3427]: I1216 12:30:44.674568 3427 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-r9s4g" podStartSLOduration=45.674554968 podStartE2EDuration="45.674554968s" podCreationTimestamp="2025-12-16 12:29:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 12:30:44.673421523 +0000 UTC m=+46.294679096" watchObservedRunningTime="2025-12-16 12:30:44.674554968 +0000 UTC m=+46.295812533" Dec 16 12:30:45.095357 systemd-networkd[1469]: cali1ccee84ae67: Gained IPv6LL Dec 16 12:30:50.477187 containerd[1906]: time="2025-12-16T12:30:50.477062058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:30:50.743449 containerd[1906]: time="2025-12-16T12:30:50.743108984Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:50.746748 containerd[1906]: time="2025-12-16T12:30:50.746696722Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:30:50.746877 containerd[1906]: time="2025-12-16T12:30:50.746789788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:30:50.746988 kubelet[3427]: E1216 12:30:50.746951 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:30:50.747313 kubelet[3427]: E1216 12:30:50.747003 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:30:50.747313 kubelet[3427]: E1216 12:30:50.747108 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d4d5681624114e43967285ec59e0b032,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t4pzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65489c7c88-dxnd9_calico-system(05c07d81-bd4b-4334-984c-fd96a5a648fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:50.749778 containerd[1906]: time="2025-12-16T12:30:50.749707844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:30:51.030876 containerd[1906]: time="2025-12-16T12:30:51.030743250Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:51.034336 containerd[1906]: time="2025-12-16T12:30:51.034292571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:30:51.034426 containerd[1906]: time="2025-12-16T12:30:51.034384525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:30:51.034608 kubelet[3427]: E1216 12:30:51.034563 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:30:51.034662 kubelet[3427]: E1216 12:30:51.034629 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:30:51.035621 kubelet[3427]: E1216 12:30:51.034744 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4pzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorP
rofile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65489c7c88-dxnd9_calico-system(05c07d81-bd4b-4334-984c-fd96a5a648fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:51.035881 kubelet[3427]: E1216 12:30:51.035849 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65489c7c88-dxnd9" podUID="05c07d81-bd4b-4334-984c-fd96a5a648fa" Dec 16 12:30:54.477550 containerd[1906]: time="2025-12-16T12:30:54.477480345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:30:54.725797 containerd[1906]: time="2025-12-16T12:30:54.725744697Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:54.729839 containerd[1906]: time="2025-12-16T12:30:54.729739358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:30:54.729839 containerd[1906]: time="2025-12-16T12:30:54.729818192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:30:54.730243 kubelet[3427]: E1216 12:30:54.730135 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:30:54.730243 kubelet[3427]: E1216 12:30:54.730228 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:30:54.731045 kubelet[3427]: E1216 12:30:54.730730 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6lgb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gmt2x_calico-system(5d133913-fb0a-455b-afab-c0825f0f11d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:54.731521 containerd[1906]: time="2025-12-16T12:30:54.731442916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:30:54.993996 containerd[1906]: time="2025-12-16T12:30:54.993866878Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:54.998572 containerd[1906]: time="2025-12-16T12:30:54.998533070Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:30:54.998635 containerd[1906]: time="2025-12-16T12:30:54.998622896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:30:54.998850 kubelet[3427]: E1216 12:30:54.998809 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:54.998892 kubelet[3427]: E1216 12:30:54.998862 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:54.999119 kubelet[3427]: E1216 12:30:54.999076 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrffh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-587446bcc5-b8k9n_calico-apiserver(1831bbdd-6642-47ec-b6ce-03f07d23d2da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:54.999464 containerd[1906]: time="2025-12-16T12:30:54.999227081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:30:55.000710 kubelet[3427]: E1216 12:30:55.000674 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-b8k9n" podUID="1831bbdd-6642-47ec-b6ce-03f07d23d2da" Dec 16 12:30:55.248654 containerd[1906]: 
time="2025-12-16T12:30:55.248495475Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:55.252805 containerd[1906]: time="2025-12-16T12:30:55.252767168Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:30:55.252894 containerd[1906]: time="2025-12-16T12:30:55.252849226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:30:55.253031 kubelet[3427]: E1216 12:30:55.252997 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:30:55.253089 kubelet[3427]: E1216 12:30:55.253045 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:30:55.253219 kubelet[3427]: E1216 12:30:55.253147 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6lgb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gmt2x_calico-system(5d133913-fb0a-455b-afab-c0825f0f11d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:55.254383 kubelet[3427]: E1216 12:30:55.254327 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gmt2x" podUID="5d133913-fb0a-455b-afab-c0825f0f11d8" Dec 16 12:30:55.476932 containerd[1906]: time="2025-12-16T12:30:55.476074933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:30:55.725589 containerd[1906]: time="2025-12-16T12:30:55.725543709Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:55.729220 containerd[1906]: time="2025-12-16T12:30:55.729178913Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:30:55.729309 containerd[1906]: time="2025-12-16T12:30:55.729255611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:30:55.729453 
kubelet[3427]: E1216 12:30:55.729396 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:30:55.729567 kubelet[3427]: E1216 12:30:55.729462 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:30:55.729637 kubelet[3427]: E1216 12:30:55.729594 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bu
ndle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2c5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cd89997db-lnfvm_calico-system(d50bef5a-6902-49f4-92df-f935afcbb9ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:55.730720 kubelet[3427]: E1216 12:30:55.730691 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd89997db-lnfvm" podUID="d50bef5a-6902-49f4-92df-f935afcbb9ff" Dec 16 12:30:57.475950 containerd[1906]: time="2025-12-16T12:30:57.475835572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:30:57.735186 containerd[1906]: time="2025-12-16T12:30:57.735043736Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:57.738524 containerd[1906]: time="2025-12-16T12:30:57.738476598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:30:57.738601 containerd[1906]: time="2025-12-16T12:30:57.738555864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:30:57.738756 kubelet[3427]: E1216 12:30:57.738712 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:30:57.739142 kubelet[3427]: E1216 12:30:57.738765 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:30:57.739433 kubelet[3427]: E1216 12:30:57.739273 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wv688,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dcszz_calico-system(b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:57.739554 containerd[1906]: time="2025-12-16T12:30:57.739529562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:30:57.740562 kubelet[3427]: E1216 12:30:57.740525 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dcszz" podUID="b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb" Dec 16 12:30:57.956693 containerd[1906]: time="2025-12-16T12:30:57.956638508Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:30:57.959972 containerd[1906]: time="2025-12-16T12:30:57.959938662Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:30:57.960034 containerd[1906]: time="2025-12-16T12:30:57.960014912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:30:57.960247 kubelet[3427]: E1216 12:30:57.960203 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:57.960332 kubelet[3427]: E1216 12:30:57.960256 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:30:57.960721 kubelet[3427]: E1216 12:30:57.960385 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqnqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-587446bcc5-m2zzh_calico-apiserver(042d90c0-42d8-409c-add9-7c678aa9ba3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:30:57.961975 kubelet[3427]: E1216 12:30:57.961946 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-m2zzh" podUID="042d90c0-42d8-409c-add9-7c678aa9ba3e" Dec 16 12:31:05.477454 kubelet[3427]: E1216 12:31:05.477401 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65489c7c88-dxnd9" podUID="05c07d81-bd4b-4334-984c-fd96a5a648fa" Dec 16 12:31:06.478325 
kubelet[3427]: E1216 12:31:06.478243 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-b8k9n" podUID="1831bbdd-6642-47ec-b6ce-03f07d23d2da" Dec 16 12:31:06.480887 kubelet[3427]: E1216 12:31:06.480849 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gmt2x" podUID="5d133913-fb0a-455b-afab-c0825f0f11d8" Dec 16 12:31:08.477371 kubelet[3427]: E1216 12:31:08.477191 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd89997db-lnfvm" podUID="d50bef5a-6902-49f4-92df-f935afcbb9ff"
Dec 16 12:31:09.475686 kubelet[3427]: E1216 12:31:09.475540 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-m2zzh" podUID="042d90c0-42d8-409c-add9-7c678aa9ba3e"
Dec 16 12:31:11.475902 kubelet[3427]: E1216 12:31:11.475858 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dcszz" podUID="b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb"
Dec 16 12:31:17.475776 containerd[1906]: time="2025-12-16T12:31:17.475692018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 16 12:31:17.746725 containerd[1906]: time="2025-12-16T12:31:17.746582570Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:31:17.751053 containerd[1906]: time="2025-12-16T12:31:17.751008344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 16 12:31:17.751203 containerd[1906]: time="2025-12-16T12:31:17.751091490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 16 12:31:17.751310 kubelet[3427]: E1216 12:31:17.751268 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 12:31:17.751589 kubelet[3427]: E1216 12:31:17.751312 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 16 12:31:17.751589 kubelet[3427]: E1216 12:31:17.751430 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6lgb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gmt2x_calico-system(5d133913-fb0a-455b-afab-c0825f0f11d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:31:17.754536 containerd[1906]: time="2025-12-16T12:31:17.754508854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 16 12:31:18.028264 containerd[1906]: time="2025-12-16T12:31:18.027422747Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:31:18.031759 containerd[1906]: time="2025-12-16T12:31:18.031530369Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 16 12:31:18.031759 containerd[1906]: time="2025-12-16T12:31:18.031563994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 16 12:31:18.033194 kubelet[3427]: E1216 12:31:18.032333 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 12:31:18.033194 kubelet[3427]: E1216 12:31:18.032385 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 16 12:31:18.033194 kubelet[3427]: E1216 12:31:18.032501 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6lgb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gmt2x_calico-system(5d133913-fb0a-455b-afab-c0825f0f11d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:31:18.034556 kubelet[3427]: E1216 12:31:18.034478 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gmt2x" podUID="5d133913-fb0a-455b-afab-c0825f0f11d8"
Dec 16 12:31:18.479096 containerd[1906]: time="2025-12-16T12:31:18.478902644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Dec 16 12:31:18.736687 containerd[1906]: time="2025-12-16T12:31:18.736564778Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:31:18.741300 containerd[1906]: time="2025-12-16T12:31:18.741252319Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Dec 16 12:31:18.742097 containerd[1906]: time="2025-12-16T12:31:18.741330281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Dec 16 12:31:18.742136 kubelet[3427]: E1216 12:31:18.741454 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 12:31:18.742136 kubelet[3427]: E1216 12:31:18.741501 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Dec 16 12:31:18.742136 kubelet[3427]: E1216 12:31:18.741593 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d4d5681624114e43967285ec59e0b032,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t4pzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65489c7c88-dxnd9_calico-system(05c07d81-bd4b-4334-984c-fd96a5a648fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:31:18.744064 containerd[1906]: time="2025-12-16T12:31:18.744038394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Dec 16 12:31:19.026390 containerd[1906]: time="2025-12-16T12:31:19.026264256Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:31:19.031768 containerd[1906]: time="2025-12-16T12:31:19.031724826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Dec 16 12:31:19.031865 containerd[1906]: time="2025-12-16T12:31:19.031816269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Dec 16 12:31:19.032012 kubelet[3427]: E1216 12:31:19.031969 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 12:31:19.032321 kubelet[3427]: E1216 12:31:19.032023 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Dec 16 12:31:19.032321 kubelet[3427]: E1216 12:31:19.032142 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4pzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65489c7c88-dxnd9_calico-system(05c07d81-bd4b-4334-984c-fd96a5a648fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:31:19.033322 kubelet[3427]: E1216 12:31:19.033271 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65489c7c88-dxnd9" podUID="05c07d81-bd4b-4334-984c-fd96a5a648fa"
Dec 16 12:31:20.478377 containerd[1906]: time="2025-12-16T12:31:20.477609666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 12:31:20.733879 containerd[1906]: time="2025-12-16T12:31:20.733745375Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:31:20.738220 containerd[1906]: time="2025-12-16T12:31:20.738121980Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 12:31:20.738220 containerd[1906]: time="2025-12-16T12:31:20.738184326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 12:31:20.738471 kubelet[3427]: E1216 12:31:20.738437 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:31:20.738970 kubelet[3427]: E1216 12:31:20.738770 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:31:20.738970 kubelet[3427]: E1216 12:31:20.738933 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrffh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-587446bcc5-b8k9n_calico-apiserver(1831bbdd-6642-47ec-b6ce-03f07d23d2da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:31:20.740361 kubelet[3427]: E1216 12:31:20.740273 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-b8k9n" podUID="1831bbdd-6642-47ec-b6ce-03f07d23d2da"
Dec 16 12:31:22.479070 containerd[1906]: time="2025-12-16T12:31:22.478871961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 16 12:31:22.716730 containerd[1906]: time="2025-12-16T12:31:22.716683388Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:31:22.720487 containerd[1906]: time="2025-12-16T12:31:22.720413327Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 16 12:31:22.720487 containerd[1906]: time="2025-12-16T12:31:22.720455944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 16 12:31:22.722182 kubelet[3427]: E1216 12:31:22.721691 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:31:22.722182 kubelet[3427]: E1216 12:31:22.721758 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 16 12:31:22.722182 kubelet[3427]: E1216 12:31:22.721878 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqnqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-587446bcc5-m2zzh_calico-apiserver(042d90c0-42d8-409c-add9-7c678aa9ba3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:31:22.723380 kubelet[3427]: E1216 12:31:22.723341 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-m2zzh" podUID="042d90c0-42d8-409c-add9-7c678aa9ba3e"
Dec 16 12:31:23.476304 containerd[1906]: time="2025-12-16T12:31:23.475811178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Dec 16 12:31:23.706044 containerd[1906]: time="2025-12-16T12:31:23.705858896Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:31:23.709607 containerd[1906]: time="2025-12-16T12:31:23.709492600Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Dec 16 12:31:23.709607 containerd[1906]: time="2025-12-16T12:31:23.709559610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Dec 16 12:31:23.709783 kubelet[3427]: E1216 12:31:23.709717 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 12:31:23.709783 kubelet[3427]: E1216 12:31:23.709780 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Dec 16 12:31:23.710004 kubelet[3427]: E1216 12:31:23.709925 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2c5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cd89997db-lnfvm_calico-system(d50bef5a-6902-49f4-92df-f935afcbb9ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:31:23.711291 kubelet[3427]: E1216 12:31:23.711129 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd89997db-lnfvm" podUID="d50bef5a-6902-49f4-92df-f935afcbb9ff"
Dec 16 12:31:26.475809 containerd[1906]: time="2025-12-16T12:31:26.475674423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 16 12:31:26.731365 containerd[1906]: time="2025-12-16T12:31:26.731235817Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 16 12:31:26.734788 containerd[1906]: time="2025-12-16T12:31:26.734741982Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 16 12:31:26.734875 containerd[1906]: time="2025-12-16T12:31:26.734839641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 16 12:31:26.735068 kubelet[3427]: E1216 12:31:26.735026 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 12:31:26.735623 kubelet[3427]: E1216 12:31:26.735404 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 16 12:31:26.735623 kubelet[3427]: E1216 12:31:26.735572 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wv688,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dcszz_calico-system(b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 16 12:31:26.736951 kubelet[3427]: E1216 12:31:26.736911 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dcszz" podUID="b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb"
Dec 16 12:31:29.479476 kubelet[3427]: E1216 12:31:29.479425 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull:
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gmt2x" podUID="5d133913-fb0a-455b-afab-c0825f0f11d8" Dec 16 12:31:29.479894 kubelet[3427]: E1216 12:31:29.479513 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65489c7c88-dxnd9" podUID="05c07d81-bd4b-4334-984c-fd96a5a648fa" Dec 16 12:31:36.479030 kubelet[3427]: E1216 12:31:36.478751 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-b8k9n" podUID="1831bbdd-6642-47ec-b6ce-03f07d23d2da" Dec 16 12:31:37.477220 kubelet[3427]: E1216 12:31:37.476693 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd89997db-lnfvm" podUID="d50bef5a-6902-49f4-92df-f935afcbb9ff" Dec 16 12:31:37.477581 kubelet[3427]: E1216 12:31:37.477556 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dcszz" podUID="b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb" Dec 16 12:31:37.477642 kubelet[3427]: E1216 12:31:37.477623 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-m2zzh" podUID="042d90c0-42d8-409c-add9-7c678aa9ba3e" Dec 16 12:31:41.477166 kubelet[3427]: E1216 12:31:41.477083 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65489c7c88-dxnd9" podUID="05c07d81-bd4b-4334-984c-fd96a5a648fa" Dec 16 12:31:42.124363 systemd[1]: Started sshd@7-10.200.20.38:22-10.200.16.10:57162.service - OpenSSH per-connection server daemon (10.200.16.10:57162). Dec 16 12:31:42.614906 sshd[5535]: Accepted publickey for core from 10.200.16.10 port 57162 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:42.617395 sshd-session[5535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:42.623783 systemd-logind[1872]: New session 10 of user core. 
Dec 16 12:31:42.628294 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 16 12:31:43.019100 sshd[5538]: Connection closed by 10.200.16.10 port 57162 Dec 16 12:31:43.019975 sshd-session[5535]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:43.023424 systemd[1]: sshd@7-10.200.20.38:22-10.200.16.10:57162.service: Deactivated successfully. Dec 16 12:31:43.025529 systemd[1]: session-10.scope: Deactivated successfully. Dec 16 12:31:43.026508 systemd-logind[1872]: Session 10 logged out. Waiting for processes to exit. Dec 16 12:31:43.027881 systemd-logind[1872]: Removed session 10. Dec 16 12:31:44.479333 kubelet[3427]: E1216 12:31:44.479010 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gmt2x" podUID="5d133913-fb0a-455b-afab-c0825f0f11d8" Dec 16 12:31:48.094416 systemd[1]: Started sshd@8-10.200.20.38:22-10.200.16.10:57172.service - OpenSSH per-connection server daemon (10.200.16.10:57172). 
Dec 16 12:31:48.542140 sshd[5550]: Accepted publickey for core from 10.200.16.10 port 57172 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:48.543777 sshd-session[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:48.550559 systemd-logind[1872]: New session 11 of user core. Dec 16 12:31:48.557394 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 16 12:31:48.937782 sshd[5553]: Connection closed by 10.200.16.10 port 57172 Dec 16 12:31:48.939361 sshd-session[5550]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:48.942873 systemd[1]: sshd@8-10.200.20.38:22-10.200.16.10:57172.service: Deactivated successfully. Dec 16 12:31:48.947820 systemd[1]: session-11.scope: Deactivated successfully. Dec 16 12:31:48.949163 systemd-logind[1872]: Session 11 logged out. Waiting for processes to exit. Dec 16 12:31:48.951660 systemd-logind[1872]: Removed session 11. Dec 16 12:31:49.475608 kubelet[3427]: E1216 12:31:49.475546 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dcszz" podUID="b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb" Dec 16 12:31:51.475221 kubelet[3427]: E1216 12:31:51.475176 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-m2zzh" podUID="042d90c0-42d8-409c-add9-7c678aa9ba3e" Dec 16 12:31:51.475642 kubelet[3427]: E1216 12:31:51.475469 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-b8k9n" podUID="1831bbdd-6642-47ec-b6ce-03f07d23d2da" Dec 16 12:31:52.478256 kubelet[3427]: E1216 12:31:52.478105 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65489c7c88-dxnd9" podUID="05c07d81-bd4b-4334-984c-fd96a5a648fa" Dec 16 12:31:52.478256 kubelet[3427]: E1216 12:31:52.478199 3427 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd89997db-lnfvm" podUID="d50bef5a-6902-49f4-92df-f935afcbb9ff" Dec 16 12:31:54.028378 systemd[1]: Started sshd@9-10.200.20.38:22-10.200.16.10:38580.service - OpenSSH per-connection server daemon (10.200.16.10:38580). Dec 16 12:31:54.483414 sshd[5566]: Accepted publickey for core from 10.200.16.10 port 38580 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:54.484506 sshd-session[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:54.488285 systemd-logind[1872]: New session 12 of user core. Dec 16 12:31:54.496281 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 16 12:31:54.876942 sshd[5569]: Connection closed by 10.200.16.10 port 38580 Dec 16 12:31:54.877704 sshd-session[5566]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:54.881727 systemd[1]: sshd@9-10.200.20.38:22-10.200.16.10:38580.service: Deactivated successfully. Dec 16 12:31:54.885070 systemd[1]: session-12.scope: Deactivated successfully. Dec 16 12:31:54.886488 systemd-logind[1872]: Session 12 logged out. Waiting for processes to exit. Dec 16 12:31:54.887892 systemd-logind[1872]: Removed session 12. Dec 16 12:31:54.967374 systemd[1]: Started sshd@10-10.200.20.38:22-10.200.16.10:38586.service - OpenSSH per-connection server daemon (10.200.16.10:38586). 
Dec 16 12:31:55.461701 sshd[5582]: Accepted publickey for core from 10.200.16.10 port 38586 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:55.463682 sshd-session[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:55.470171 systemd-logind[1872]: New session 13 of user core. Dec 16 12:31:55.475317 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 16 12:31:55.878526 sshd[5585]: Connection closed by 10.200.16.10 port 38586 Dec 16 12:31:55.878872 sshd-session[5582]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:55.883037 systemd[1]: sshd@10-10.200.20.38:22-10.200.16.10:38586.service: Deactivated successfully. Dec 16 12:31:55.886696 systemd[1]: session-13.scope: Deactivated successfully. Dec 16 12:31:55.887765 systemd-logind[1872]: Session 13 logged out. Waiting for processes to exit. Dec 16 12:31:55.890175 systemd-logind[1872]: Removed session 13. Dec 16 12:31:55.968519 systemd[1]: Started sshd@11-10.200.20.38:22-10.200.16.10:38588.service - OpenSSH per-connection server daemon (10.200.16.10:38588). Dec 16 12:31:56.458301 sshd[5601]: Accepted publickey for core from 10.200.16.10 port 38588 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:31:56.459404 sshd-session[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:31:56.463049 systemd-logind[1872]: New session 14 of user core. Dec 16 12:31:56.472298 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 16 12:31:56.477858 kubelet[3427]: E1216 12:31:56.477817 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gmt2x" podUID="5d133913-fb0a-455b-afab-c0825f0f11d8" Dec 16 12:31:56.864232 sshd[5604]: Connection closed by 10.200.16.10 port 38588 Dec 16 12:31:56.865772 sshd-session[5601]: pam_unix(sshd:session): session closed for user core Dec 16 12:31:56.868788 systemd[1]: sshd@11-10.200.20.38:22-10.200.16.10:38588.service: Deactivated successfully. Dec 16 12:31:56.870791 systemd[1]: session-14.scope: Deactivated successfully. Dec 16 12:31:56.873557 systemd-logind[1872]: Session 14 logged out. Waiting for processes to exit. Dec 16 12:31:56.875082 systemd-logind[1872]: Removed session 14. Dec 16 12:32:01.948526 systemd[1]: Started sshd@12-10.200.20.38:22-10.200.16.10:36158.service - OpenSSH per-connection server daemon (10.200.16.10:36158). 
Dec 16 12:32:02.418251 sshd[5621]: Accepted publickey for core from 10.200.16.10 port 36158 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:02.419383 sshd-session[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:02.423232 systemd-logind[1872]: New session 15 of user core. Dec 16 12:32:02.429282 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 16 12:32:02.477873 kubelet[3427]: E1216 12:32:02.477001 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dcszz" podUID="b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb" Dec 16 12:32:02.477873 kubelet[3427]: E1216 12:32:02.477053 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-m2zzh" podUID="042d90c0-42d8-409c-add9-7c678aa9ba3e" Dec 16 12:32:02.479865 containerd[1906]: time="2025-12-16T12:32:02.478785597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:32:02.795284 sshd[5624]: Connection closed by 10.200.16.10 port 36158 Dec 16 12:32:02.795804 sshd-session[5621]: pam_unix(sshd:session): session 
closed for user core Dec 16 12:32:02.799164 systemd[1]: sshd@12-10.200.20.38:22-10.200.16.10:36158.service: Deactivated successfully. Dec 16 12:32:02.800944 systemd[1]: session-15.scope: Deactivated successfully. Dec 16 12:32:02.802565 systemd-logind[1872]: Session 15 logged out. Waiting for processes to exit. Dec 16 12:32:02.803702 containerd[1906]: time="2025-12-16T12:32:02.803530330Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:32:02.803898 systemd-logind[1872]: Removed session 15. Dec 16 12:32:02.807172 containerd[1906]: time="2025-12-16T12:32:02.807069480Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:32:02.807172 containerd[1906]: time="2025-12-16T12:32:02.807121857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:32:02.807321 kubelet[3427]: E1216 12:32:02.807272 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:32:02.807365 kubelet[3427]: E1216 12:32:02.807327 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:32:02.807473 kubelet[3427]: E1216 12:32:02.807434 3427 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrffh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-587446bcc5-b8k9n_calico-apiserver(1831bbdd-6642-47ec-b6ce-03f07d23d2da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:32:02.808866 kubelet[3427]: E1216 12:32:02.808828 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-b8k9n" podUID="1831bbdd-6642-47ec-b6ce-03f07d23d2da" Dec 16 12:32:03.476135 kubelet[3427]: E1216 12:32:03.476073 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd89997db-lnfvm" podUID="d50bef5a-6902-49f4-92df-f935afcbb9ff" Dec 16 12:32:05.475669 containerd[1906]: time="2025-12-16T12:32:05.475388960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 16 12:32:05.721997 containerd[1906]: time="2025-12-16T12:32:05.721942300Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:32:05.730990 containerd[1906]: time="2025-12-16T12:32:05.730884401Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 16 12:32:05.731060 containerd[1906]: time="2025-12-16T12:32:05.730996812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 16 12:32:05.731431 kubelet[3427]: E1216 12:32:05.731362 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:32:05.731945 kubelet[3427]: E1216 12:32:05.731422 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 16 12:32:05.731945 kubelet[3427]: E1216 12:32:05.731903 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d4d5681624114e43967285ec59e0b032,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t4pzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65489c7c88-dxnd9_calico-system(05c07d81-bd4b-4334-984c-fd96a5a648fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 16 12:32:05.733939 containerd[1906]: time="2025-12-16T12:32:05.733892233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 16 12:32:06.117427 containerd[1906]: time="2025-12-16T12:32:06.117234048Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:32:06.120963 containerd[1906]: time="2025-12-16T12:32:06.120850560Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 16 12:32:06.120963 containerd[1906]: time="2025-12-16T12:32:06.120937586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 16 12:32:06.121200 kubelet[3427]: E1216 12:32:06.121115 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:32:06.121279 kubelet[3427]: E1216 12:32:06.121213 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 16 12:32:06.121414 kubelet[3427]: E1216 12:32:06.121327 3427 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4pzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-65489c7c88-dxnd9_calico-system(05c07d81-bd4b-4334-984c-fd96a5a648fa): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 16 12:32:06.122819 kubelet[3427]: E1216 12:32:06.122783 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65489c7c88-dxnd9" podUID="05c07d81-bd4b-4334-984c-fd96a5a648fa" Dec 16 12:32:07.883329 systemd[1]: Started sshd@13-10.200.20.38:22-10.200.16.10:36174.service - OpenSSH per-connection server daemon (10.200.16.10:36174). Dec 16 12:32:08.344022 sshd[5688]: Accepted publickey for core from 10.200.16.10 port 36174 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:08.345658 sshd-session[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:08.350438 systemd-logind[1872]: New session 16 of user core. Dec 16 12:32:08.355286 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 16 12:32:08.479508 containerd[1906]: time="2025-12-16T12:32:08.479452496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 16 12:32:08.721269 sshd[5691]: Connection closed by 10.200.16.10 port 36174 Dec 16 12:32:08.721802 sshd-session[5688]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:08.724882 systemd-logind[1872]: Session 16 logged out. Waiting for processes to exit. Dec 16 12:32:08.725146 systemd[1]: sshd@13-10.200.20.38:22-10.200.16.10:36174.service: Deactivated successfully. Dec 16 12:32:08.726847 systemd[1]: session-16.scope: Deactivated successfully. Dec 16 12:32:08.729522 systemd-logind[1872]: Removed session 16. Dec 16 12:32:08.732216 containerd[1906]: time="2025-12-16T12:32:08.732041772Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:32:08.744519 containerd[1906]: time="2025-12-16T12:32:08.744394315Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 16 12:32:08.744519 containerd[1906]: time="2025-12-16T12:32:08.744490686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 16 12:32:08.744663 kubelet[3427]: E1216 12:32:08.744620 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:32:08.744950 kubelet[3427]: E1216 12:32:08.744669 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 16 12:32:08.744950 kubelet[3427]: E1216 12:32:08.744776 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6lgb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevice
s:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gmt2x_calico-system(5d133913-fb0a-455b-afab-c0825f0f11d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 16 12:32:08.746979 containerd[1906]: time="2025-12-16T12:32:08.746875045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 16 12:32:09.025486 containerd[1906]: time="2025-12-16T12:32:09.025237997Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:32:09.037098 containerd[1906]: time="2025-12-16T12:32:09.036958972Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 16 12:32:09.037098 containerd[1906]: time="2025-12-16T12:32:09.037061278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 16 12:32:09.037307 kubelet[3427]: E1216 12:32:09.037235 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:32:09.037307 kubelet[3427]: E1216 12:32:09.037296 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 16 12:32:09.037655 kubelet[3427]: E1216 12:32:09.037599 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6lgb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,Window
sOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-gmt2x_calico-system(5d133913-fb0a-455b-afab-c0825f0f11d8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 16 12:32:09.038816 kubelet[3427]: E1216 12:32:09.038768 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gmt2x" podUID="5d133913-fb0a-455b-afab-c0825f0f11d8" Dec 16 12:32:13.809300 systemd[1]: Started sshd@14-10.200.20.38:22-10.200.16.10:51856.service - OpenSSH per-connection server daemon (10.200.16.10:51856). 
Dec 16 12:32:14.301180 sshd[5704]: Accepted publickey for core from 10.200.16.10 port 51856 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:14.301985 sshd-session[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:14.307233 systemd-logind[1872]: New session 17 of user core. Dec 16 12:32:14.313506 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 16 12:32:14.476953 containerd[1906]: time="2025-12-16T12:32:14.476289657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 16 12:32:14.726607 sshd[5707]: Connection closed by 10.200.16.10 port 51856 Dec 16 12:32:14.726988 sshd-session[5704]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:14.732547 systemd[1]: sshd@14-10.200.20.38:22-10.200.16.10:51856.service: Deactivated successfully. Dec 16 12:32:14.732677 systemd-logind[1872]: Session 17 logged out. Waiting for processes to exit. Dec 16 12:32:14.736575 systemd[1]: session-17.scope: Deactivated successfully. Dec 16 12:32:14.738966 systemd-logind[1872]: Removed session 17. 
Dec 16 12:32:14.796449 containerd[1906]: time="2025-12-16T12:32:14.796259195Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:32:14.800173 containerd[1906]: time="2025-12-16T12:32:14.800014921Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 16 12:32:14.800173 containerd[1906]: time="2025-12-16T12:32:14.800025841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 16 12:32:14.800547 kubelet[3427]: E1216 12:32:14.800481 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:32:14.800547 kubelet[3427]: E1216 12:32:14.800540 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 16 12:32:14.801109 kubelet[3427]: E1216 12:32:14.800661 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xqnqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-587446bcc5-m2zzh_calico-apiserver(042d90c0-42d8-409c-add9-7c678aa9ba3e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 16 12:32:14.802256 kubelet[3427]: E1216 12:32:14.802208 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-m2zzh" podUID="042d90c0-42d8-409c-add9-7c678aa9ba3e" Dec 16 12:32:14.812359 systemd[1]: Started sshd@15-10.200.20.38:22-10.200.16.10:51872.service - OpenSSH per-connection server daemon (10.200.16.10:51872). Dec 16 12:32:15.269259 sshd[5719]: Accepted publickey for core from 10.200.16.10 port 51872 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:15.270424 sshd-session[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:15.274470 systemd-logind[1872]: New session 18 of user core. Dec 16 12:32:15.283314 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 16 12:32:15.738897 sshd[5722]: Connection closed by 10.200.16.10 port 51872 Dec 16 12:32:15.739281 sshd-session[5719]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:15.743696 systemd[1]: sshd@15-10.200.20.38:22-10.200.16.10:51872.service: Deactivated successfully. Dec 16 12:32:15.745338 systemd[1]: session-18.scope: Deactivated successfully. Dec 16 12:32:15.745982 systemd-logind[1872]: Session 18 logged out. Waiting for processes to exit. Dec 16 12:32:15.747712 systemd-logind[1872]: Removed session 18. 
Dec 16 12:32:15.823553 systemd[1]: Started sshd@16-10.200.20.38:22-10.200.16.10:51886.service - OpenSSH per-connection server daemon (10.200.16.10:51886). Dec 16 12:32:16.292901 sshd[5732]: Accepted publickey for core from 10.200.16.10 port 51886 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:16.294806 sshd-session[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:16.301590 systemd-logind[1872]: New session 19 of user core. Dec 16 12:32:16.308542 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 16 12:32:16.477259 containerd[1906]: time="2025-12-16T12:32:16.476931604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 16 12:32:16.736678 containerd[1906]: time="2025-12-16T12:32:16.736607033Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:32:16.740607 containerd[1906]: time="2025-12-16T12:32:16.740562636Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 16 12:32:16.740776 containerd[1906]: time="2025-12-16T12:32:16.740575485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 16 12:32:16.740826 kubelet[3427]: E1216 12:32:16.740784 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:32:16.741129 kubelet[3427]: 
E1216 12:32:16.740839 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 16 12:32:16.741129 kubelet[3427]: E1216 12:32:16.741027 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2c5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cd89997db-lnfvm_calico-system(d50bef5a-6902-49f4-92df-f935afcbb9ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 16 12:32:16.741678 containerd[1906]: time="2025-12-16T12:32:16.741624497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 16 12:32:16.742694 kubelet[3427]: E1216 12:32:16.742658 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd89997db-lnfvm" podUID="d50bef5a-6902-49f4-92df-f935afcbb9ff" Dec 16 12:32:16.999263 containerd[1906]: time="2025-12-16T12:32:16.998845276Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 16 12:32:17.004734 containerd[1906]: time="2025-12-16T12:32:17.004609960Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 16 12:32:17.004734 containerd[1906]: time="2025-12-16T12:32:17.004707747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 16 12:32:17.005101 kubelet[3427]: E1216 12:32:17.004995 3427 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:32:17.005303 kubelet[3427]: E1216 12:32:17.005270 3427 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 16 12:32:17.006296 kubelet[3427]: E1216 12:32:17.006203 3427 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wv688,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dcszz_calico-system(b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 16 12:32:17.007379 kubelet[3427]: E1216 12:32:17.007347 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dcszz" podUID="b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb" Dec 16 12:32:17.108948 sshd[5735]: Connection closed by 10.200.16.10 port 51886 Dec 16 12:32:17.110386 sshd-session[5732]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:17.113814 systemd[1]: sshd@16-10.200.20.38:22-10.200.16.10:51886.service: Deactivated 
successfully. Dec 16 12:32:17.117093 systemd[1]: session-19.scope: Deactivated successfully. Dec 16 12:32:17.118128 systemd-logind[1872]: Session 19 logged out. Waiting for processes to exit. Dec 16 12:32:17.120810 systemd-logind[1872]: Removed session 19. Dec 16 12:32:17.197473 systemd[1]: Started sshd@17-10.200.20.38:22-10.200.16.10:51890.service - OpenSSH per-connection server daemon (10.200.16.10:51890). Dec 16 12:32:17.698664 sshd[5756]: Accepted publickey for core from 10.200.16.10 port 51890 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag Dec 16 12:32:17.701704 sshd-session[5756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 12:32:17.709211 systemd-logind[1872]: New session 20 of user core. Dec 16 12:32:17.714999 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 16 12:32:18.189233 sshd[5759]: Connection closed by 10.200.16.10 port 51890 Dec 16 12:32:18.189846 sshd-session[5756]: pam_unix(sshd:session): session closed for user core Dec 16 12:32:18.193571 systemd[1]: sshd@17-10.200.20.38:22-10.200.16.10:51890.service: Deactivated successfully. Dec 16 12:32:18.195814 systemd[1]: session-20.scope: Deactivated successfully. Dec 16 12:32:18.196874 systemd-logind[1872]: Session 20 logged out. Waiting for processes to exit. Dec 16 12:32:18.199557 systemd-logind[1872]: Removed session 20. Dec 16 12:32:18.279680 systemd[1]: Started sshd@18-10.200.20.38:22-10.200.16.10:51906.service - OpenSSH per-connection server daemon (10.200.16.10:51906). 
Dec 16 12:32:18.477372 kubelet[3427]: E1216 12:32:18.476679 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-b8k9n" podUID="1831bbdd-6642-47ec-b6ce-03f07d23d2da"
Dec 16 12:32:18.773565 sshd[5769]: Accepted publickey for core from 10.200.16.10 port 51906 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:32:18.774670 sshd-session[5769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:32:18.779219 systemd-logind[1872]: New session 21 of user core.
Dec 16 12:32:18.784291 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 16 12:32:19.214293 sshd[5772]: Connection closed by 10.200.16.10 port 51906
Dec 16 12:32:19.228482 sshd-session[5769]: pam_unix(sshd:session): session closed for user core
Dec 16 12:32:19.233240 systemd[1]: sshd@18-10.200.20.38:22-10.200.16.10:51906.service: Deactivated successfully.
Dec 16 12:32:19.237735 systemd[1]: session-21.scope: Deactivated successfully.
Dec 16 12:32:19.238984 systemd-logind[1872]: Session 21 logged out. Waiting for processes to exit.
Dec 16 12:32:19.240392 systemd-logind[1872]: Removed session 21.
Dec 16 12:32:20.479334 kubelet[3427]: E1216 12:32:20.478906 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65489c7c88-dxnd9" podUID="05c07d81-bd4b-4334-984c-fd96a5a648fa"
Dec 16 12:32:21.477328 kubelet[3427]: E1216 12:32:21.477277 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gmt2x" podUID="5d133913-fb0a-455b-afab-c0825f0f11d8"
Dec 16 12:32:24.304198 systemd[1]: Started sshd@19-10.200.20.38:22-10.200.16.10:50646.service - OpenSSH per-connection server daemon (10.200.16.10:50646).
Dec 16 12:32:24.798442 sshd[5788]: Accepted publickey for core from 10.200.16.10 port 50646 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:32:24.799697 sshd-session[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:32:24.803961 systemd-logind[1872]: New session 22 of user core.
Dec 16 12:32:24.812521 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 16 12:32:25.231433 sshd[5791]: Connection closed by 10.200.16.10 port 50646
Dec 16 12:32:25.232422 sshd-session[5788]: pam_unix(sshd:session): session closed for user core
Dec 16 12:32:25.236370 systemd-logind[1872]: Session 22 logged out. Waiting for processes to exit.
Dec 16 12:32:25.236522 systemd[1]: sshd@19-10.200.20.38:22-10.200.16.10:50646.service: Deactivated successfully.
Dec 16 12:32:25.238116 systemd[1]: session-22.scope: Deactivated successfully.
Dec 16 12:32:25.239740 systemd-logind[1872]: Removed session 22.
Dec 16 12:32:27.475726 kubelet[3427]: E1216 12:32:27.475656 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-m2zzh" podUID="042d90c0-42d8-409c-add9-7c678aa9ba3e"
Dec 16 12:32:28.479250 kubelet[3427]: E1216 12:32:28.479198 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd89997db-lnfvm" podUID="d50bef5a-6902-49f4-92df-f935afcbb9ff"
Dec 16 12:32:30.321354 systemd[1]: Started sshd@20-10.200.20.38:22-10.200.16.10:50572.service - OpenSSH per-connection server daemon (10.200.16.10:50572).
Dec 16 12:32:30.815536 sshd[5803]: Accepted publickey for core from 10.200.16.10 port 50572 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:32:30.817914 sshd-session[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:32:30.824987 systemd-logind[1872]: New session 23 of user core.
Dec 16 12:32:30.830291 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 16 12:32:31.209194 sshd[5806]: Connection closed by 10.200.16.10 port 50572
Dec 16 12:32:31.209764 sshd-session[5803]: pam_unix(sshd:session): session closed for user core
Dec 16 12:32:31.214474 systemd-logind[1872]: Session 23 logged out. Waiting for processes to exit.
Dec 16 12:32:31.214638 systemd[1]: sshd@20-10.200.20.38:22-10.200.16.10:50572.service: Deactivated successfully.
Dec 16 12:32:31.217942 systemd[1]: session-23.scope: Deactivated successfully.
Dec 16 12:32:31.219875 systemd-logind[1872]: Removed session 23.
Dec 16 12:32:31.477010 kubelet[3427]: E1216 12:32:31.476612 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-b8k9n" podUID="1831bbdd-6642-47ec-b6ce-03f07d23d2da"
Dec 16 12:32:32.476748 kubelet[3427]: E1216 12:32:32.476432 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dcszz" podUID="b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb"
Dec 16 12:32:33.476744 kubelet[3427]: E1216 12:32:33.476697 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65489c7c88-dxnd9" podUID="05c07d81-bd4b-4334-984c-fd96a5a648fa"
Dec 16 12:32:35.476819 kubelet[3427]: E1216 12:32:35.476701 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gmt2x" podUID="5d133913-fb0a-455b-afab-c0825f0f11d8"
Dec 16 12:32:36.298197 systemd[1]: Started sshd@21-10.200.20.38:22-10.200.16.10:50586.service - OpenSSH per-connection server daemon (10.200.16.10:50586).
Dec 16 12:32:36.788523 sshd[5842]: Accepted publickey for core from 10.200.16.10 port 50586 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:32:36.790672 sshd-session[5842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:32:36.796238 systemd-logind[1872]: New session 24 of user core.
Dec 16 12:32:36.802377 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 16 12:32:37.201166 sshd[5845]: Connection closed by 10.200.16.10 port 50586
Dec 16 12:32:37.202700 sshd-session[5842]: pam_unix(sshd:session): session closed for user core
Dec 16 12:32:37.206587 systemd[1]: sshd@21-10.200.20.38:22-10.200.16.10:50586.service: Deactivated successfully.
Dec 16 12:32:37.210365 systemd[1]: session-24.scope: Deactivated successfully.
Dec 16 12:32:37.211550 systemd-logind[1872]: Session 24 logged out. Waiting for processes to exit.
Dec 16 12:32:37.213492 systemd-logind[1872]: Removed session 24.
Dec 16 12:32:40.477306 kubelet[3427]: E1216 12:32:40.477252 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cd89997db-lnfvm" podUID="d50bef5a-6902-49f4-92df-f935afcbb9ff"
Dec 16 12:32:41.476679 kubelet[3427]: E1216 12:32:41.476375 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-m2zzh" podUID="042d90c0-42d8-409c-add9-7c678aa9ba3e"
Dec 16 12:32:42.295758 systemd[1]: Started sshd@22-10.200.20.38:22-10.200.16.10:37324.service - OpenSSH per-connection server daemon (10.200.16.10:37324).
Dec 16 12:32:42.796490 sshd[5857]: Accepted publickey for core from 10.200.16.10 port 37324 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:32:42.797657 sshd-session[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:32:42.801520 systemd-logind[1872]: New session 25 of user core.
Dec 16 12:32:42.806297 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 16 12:32:43.213809 sshd[5860]: Connection closed by 10.200.16.10 port 37324
Dec 16 12:32:43.215366 sshd-session[5857]: pam_unix(sshd:session): session closed for user core
Dec 16 12:32:43.218983 systemd[1]: sshd@22-10.200.20.38:22-10.200.16.10:37324.service: Deactivated successfully.
Dec 16 12:32:43.222819 systemd[1]: session-25.scope: Deactivated successfully.
Dec 16 12:32:43.224228 systemd-logind[1872]: Session 25 logged out. Waiting for processes to exit.
Dec 16 12:32:43.225537 systemd-logind[1872]: Removed session 25.
Dec 16 12:32:44.477106 kubelet[3427]: E1216 12:32:44.475840 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-587446bcc5-b8k9n" podUID="1831bbdd-6642-47ec-b6ce-03f07d23d2da"
Dec 16 12:32:45.475918 kubelet[3427]: E1216 12:32:45.475785 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dcszz" podUID="b54c9e2a-c2a6-4080-b5bd-3a3aeebd49cb"
Dec 16 12:32:47.479192 kubelet[3427]: E1216 12:32:47.477205 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gmt2x" podUID="5d133913-fb0a-455b-afab-c0825f0f11d8"
Dec 16 12:32:48.307359 systemd[1]: Started sshd@23-10.200.20.38:22-10.200.16.10:37334.service - OpenSSH per-connection server daemon (10.200.16.10:37334).
Dec 16 12:32:48.477287 kubelet[3427]: E1216 12:32:48.477077 3427 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65489c7c88-dxnd9" podUID="05c07d81-bd4b-4334-984c-fd96a5a648fa"
Dec 16 12:32:48.797523 sshd[5872]: Accepted publickey for core from 10.200.16.10 port 37334 ssh2: RSA SHA256:0sW83PWlkN2oSGFUMV36+zNC2S3SSsFxfZRU5Tfj1Ag
Dec 16 12:32:48.800447 sshd-session[5872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 12:32:48.805981 systemd-logind[1872]: New session 26 of user core.
Dec 16 12:32:48.809274 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 16 12:32:49.188263 sshd[5877]: Connection closed by 10.200.16.10 port 37334
Dec 16 12:32:49.188848 sshd-session[5872]: pam_unix(sshd:session): session closed for user core
Dec 16 12:32:49.192944 systemd-logind[1872]: Session 26 logged out. Waiting for processes to exit.
Dec 16 12:32:49.193076 systemd[1]: sshd@23-10.200.20.38:22-10.200.16.10:37334.service: Deactivated successfully.
Dec 16 12:32:49.196516 systemd[1]: session-26.scope: Deactivated successfully.
Dec 16 12:32:49.198364 systemd-logind[1872]: Removed session 26.