Nov 23 23:21:51.083012 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Nov 23 23:21:51.083031 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Nov 23 20:53:53 -00 2025
Nov 23 23:21:51.083038 kernel: KASLR enabled
Nov 23 23:21:51.083042 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Nov 23 23:21:51.083045 kernel: printk: legacy bootconsole [pl11] enabled
Nov 23 23:21:51.083050 kernel: efi: EFI v2.7 by EDK II
Nov 23 23:21:51.083055 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db7d598
Nov 23 23:21:51.083059 kernel: random: crng init done
Nov 23 23:21:51.083063 kernel: secureboot: Secure boot disabled
Nov 23 23:21:51.083067 kernel: ACPI: Early table checksum verification disabled
Nov 23 23:21:51.083071 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Nov 23 23:21:51.083075 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 23 23:21:51.083078 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 23 23:21:51.083082 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Nov 23 23:21:51.083088 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 23 23:21:51.083092 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 23 23:21:51.083097 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 23 23:21:51.083101 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 23 23:21:51.083105 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 23 23:21:51.083110 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 23 23:21:51.083114 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Nov 23 23:21:51.083119 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 23 23:21:51.083123 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Nov 23 23:21:51.083127 kernel: ACPI: Use ACPI SPCR as default console: No
Nov 23 23:21:51.083131 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Nov 23 23:21:51.083135 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Nov 23 23:21:51.083139 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Nov 23 23:21:51.083144 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Nov 23 23:21:51.083148 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Nov 23 23:21:51.083152 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Nov 23 23:21:51.083157 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Nov 23 23:21:51.083161 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Nov 23 23:21:51.083166 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Nov 23 23:21:51.083170 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Nov 23 23:21:51.083174 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Nov 23 23:21:51.083178 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Nov 23 23:21:51.083183 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Nov 23 23:21:51.083187 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Nov 23 23:21:51.083191 kernel: Zone ranges:
Nov 23 23:21:51.083195 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Nov 23 23:21:51.083202 kernel: DMA32 empty
Nov 23 23:21:51.083206 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Nov 23 23:21:51.083211 kernel: Device empty
Nov 23 23:21:51.083215 kernel: Movable zone start for each node
Nov 23 23:21:51.083220 kernel: Early memory node ranges
Nov 23 23:21:51.083224 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Nov 23 23:21:51.083229 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Nov 23 23:21:51.083233 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Nov 23 23:21:51.083238 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Nov 23 23:21:51.083242 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Nov 23 23:21:51.083246 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Nov 23 23:21:51.083251 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Nov 23 23:21:51.083255 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Nov 23 23:21:51.083259 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Nov 23 23:21:51.083264 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Nov 23 23:21:51.083268 kernel: psci: probing for conduit method from ACPI.
Nov 23 23:21:51.083272 kernel: psci: PSCIv1.3 detected in firmware.
Nov 23 23:21:51.083277 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 23 23:21:51.083282 kernel: psci: MIGRATE_INFO_TYPE not supported.
Nov 23 23:21:51.083286 kernel: psci: SMC Calling Convention v1.4
Nov 23 23:21:51.083291 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Nov 23 23:21:51.083295 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Nov 23 23:21:51.083299 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Nov 23 23:21:51.083304 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Nov 23 23:21:51.083308 kernel: pcpu-alloc: [0] 0 [0] 1
Nov 23 23:21:51.083313 kernel: Detected PIPT I-cache on CPU0
Nov 23 23:21:51.083317 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Nov 23 23:21:51.083322 kernel: CPU features: detected: GIC system register CPU interface
Nov 23 23:21:51.083326 kernel: CPU features: detected: Spectre-v4
Nov 23 23:21:51.083330 kernel: CPU features: detected: Spectre-BHB
Nov 23 23:21:51.083335 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 23 23:21:51.083340 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 23 23:21:51.083344 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Nov 23 23:21:51.083349 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 23 23:21:51.083353 kernel: alternatives: applying boot alternatives
Nov 23 23:21:51.083358 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=4db094b704dd398addf25219e01d6d8f197b31dbf6377199102cc61dad0e4bb2
Nov 23 23:21:51.083363 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 23 23:21:51.083368 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 23 23:21:51.083372 kernel: Fallback order for Node 0: 0
Nov 23 23:21:51.083376 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Nov 23 23:21:51.083381 kernel: Policy zone: Normal
Nov 23 23:21:51.083385 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 23 23:21:51.083390 kernel: software IO TLB: area num 2.
Nov 23 23:21:51.083394 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB)
Nov 23 23:21:51.083399 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 23 23:21:51.083403 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 23 23:21:51.083408 kernel: rcu: RCU event tracing is enabled.
Nov 23 23:21:51.083412 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 23 23:21:51.083417 kernel: Trampoline variant of Tasks RCU enabled.
Nov 23 23:21:51.083421 kernel: Tracing variant of Tasks RCU enabled.
Nov 23 23:21:51.083426 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 23 23:21:51.083430 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 23 23:21:51.083436 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 23 23:21:51.083440 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 23 23:21:51.083444 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 23 23:21:51.083449 kernel: GICv3: 960 SPIs implemented
Nov 23 23:21:51.083453 kernel: GICv3: 0 Extended SPIs implemented
Nov 23 23:21:51.083457 kernel: Root IRQ handler: gic_handle_irq
Nov 23 23:21:51.083462 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Nov 23 23:21:51.083466 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Nov 23 23:21:51.083470 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Nov 23 23:21:51.083475 kernel: ITS: No ITS available, not enabling LPIs
Nov 23 23:21:51.083479 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 23 23:21:51.083484 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Nov 23 23:21:51.083489 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 23 23:21:51.083493 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Nov 23 23:21:51.083498 kernel: Console: colour dummy device 80x25
Nov 23 23:21:51.083503 kernel: printk: legacy console [tty1] enabled
Nov 23 23:21:51.083507 kernel: ACPI: Core revision 20240827
Nov 23 23:21:51.083512 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Nov 23 23:21:51.083516 kernel: pid_max: default: 32768 minimum: 301
Nov 23 23:21:51.083521 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 23 23:21:51.083525 kernel: landlock: Up and running.
Nov 23 23:21:51.083531 kernel: SELinux: Initializing.
Nov 23 23:21:51.083535 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 23 23:21:51.083540 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 23 23:21:51.083544 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Nov 23 23:21:51.083549 kernel: Hyper-V: Host Build 10.0.26102.1141-1-0
Nov 23 23:21:51.083556 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Nov 23 23:21:51.083562 kernel: rcu: Hierarchical SRCU implementation.
Nov 23 23:21:51.083566 kernel: rcu: Max phase no-delay instances is 400.
Nov 23 23:21:51.083571 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 23 23:21:51.083576 kernel: Remapping and enabling EFI services.
Nov 23 23:21:51.083581 kernel: smp: Bringing up secondary CPUs ...
Nov 23 23:21:51.083585 kernel: Detected PIPT I-cache on CPU1
Nov 23 23:21:51.083591 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Nov 23 23:21:51.083596 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Nov 23 23:21:51.083600 kernel: smp: Brought up 1 node, 2 CPUs
Nov 23 23:21:51.083605 kernel: SMP: Total of 2 processors activated.
Nov 23 23:21:51.083609 kernel: CPU: All CPU(s) started at EL1
Nov 23 23:21:51.083615 kernel: CPU features: detected: 32-bit EL0 Support
Nov 23 23:21:51.083620 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Nov 23 23:21:51.083625 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 23 23:21:51.083630 kernel: CPU features: detected: Common not Private translations
Nov 23 23:21:51.083634 kernel: CPU features: detected: CRC32 instructions
Nov 23 23:21:51.083639 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Nov 23 23:21:51.083644 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 23 23:21:51.083649 kernel: CPU features: detected: LSE atomic instructions
Nov 23 23:21:51.083653 kernel: CPU features: detected: Privileged Access Never
Nov 23 23:21:51.083659 kernel: CPU features: detected: Speculation barrier (SB)
Nov 23 23:21:51.083663 kernel: CPU features: detected: TLB range maintenance instructions
Nov 23 23:21:51.083668 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 23 23:21:51.083673 kernel: CPU features: detected: Scalable Vector Extension
Nov 23 23:21:51.083677 kernel: alternatives: applying system-wide alternatives
Nov 23 23:21:51.083682 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Nov 23 23:21:51.083687 kernel: SVE: maximum available vector length 16 bytes per vector
Nov 23 23:21:51.083692 kernel: SVE: default vector length 16 bytes per vector
Nov 23 23:21:51.083697 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved)
Nov 23 23:21:51.083702 kernel: devtmpfs: initialized
Nov 23 23:21:51.083707 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 23 23:21:51.083712 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 23 23:21:51.083716 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 23 23:21:51.083721 kernel: 0 pages in range for non-PLT usage
Nov 23 23:21:51.083726 kernel: 508400 pages in range for PLT usage
Nov 23 23:21:51.083731 kernel: pinctrl core: initialized pinctrl subsystem
Nov 23 23:21:51.083735 kernel: SMBIOS 3.1.0 present.
Nov 23 23:21:51.083741 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Nov 23 23:21:51.083746 kernel: DMI: Memory slots populated: 2/2
Nov 23 23:21:51.088898 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 23 23:21:51.088919 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 23 23:21:51.088925 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 23 23:21:51.088931 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 23 23:21:51.088936 kernel: audit: initializing netlink subsys (disabled)
Nov 23 23:21:51.088941 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Nov 23 23:21:51.088945 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 23 23:21:51.088956 kernel: cpuidle: using governor menu
Nov 23 23:21:51.088961 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 23 23:21:51.088965 kernel: ASID allocator initialised with 32768 entries
Nov 23 23:21:51.088970 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 23 23:21:51.088975 kernel: Serial: AMBA PL011 UART driver
Nov 23 23:21:51.088980 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 23 23:21:51.088984 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 23 23:21:51.088989 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 23 23:21:51.088994 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 23 23:21:51.089000 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 23 23:21:51.089005 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 23 23:21:51.089010 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 23 23:21:51.089014 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 23 23:21:51.089019 kernel: ACPI: Added _OSI(Module Device)
Nov 23 23:21:51.089024 kernel: ACPI: Added _OSI(Processor Device)
Nov 23 23:21:51.089028 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 23 23:21:51.089033 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 23 23:21:51.089038 kernel: ACPI: Interpreter enabled
Nov 23 23:21:51.089044 kernel: ACPI: Using GIC for interrupt routing
Nov 23 23:21:51.089048 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Nov 23 23:21:51.089053 kernel: printk: legacy console [ttyAMA0] enabled
Nov 23 23:21:51.089058 kernel: printk: legacy bootconsole [pl11] disabled
Nov 23 23:21:51.089063 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Nov 23 23:21:51.089068 kernel: ACPI: CPU0 has been hot-added
Nov 23 23:21:51.089072 kernel: ACPI: CPU1 has been hot-added
Nov 23 23:21:51.089077 kernel: iommu: Default domain type: Translated
Nov 23 23:21:51.089082 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 23 23:21:51.089087 kernel: efivars: Registered efivars operations
Nov 23 23:21:51.089092 kernel: vgaarb: loaded
Nov 23 23:21:51.089097 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 23 23:21:51.089102 kernel: VFS: Disk quotas dquot_6.6.0
Nov 23 23:21:51.089106 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 23 23:21:51.089111 kernel: pnp: PnP ACPI init
Nov 23 23:21:51.089116 kernel: pnp: PnP ACPI: found 0 devices
Nov 23 23:21:51.089120 kernel: NET: Registered PF_INET protocol family
Nov 23 23:21:51.089125 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 23 23:21:51.089130 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 23 23:21:51.089136 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 23 23:21:51.089141 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 23 23:21:51.089146 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 23 23:21:51.089150 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 23 23:21:51.089155 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 23 23:21:51.089160 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 23 23:21:51.089165 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 23 23:21:51.089170 kernel: PCI: CLS 0 bytes, default 64
Nov 23 23:21:51.089174 kernel: kvm [1]: HYP mode not available
Nov 23 23:21:51.089180 kernel: Initialise system trusted keyrings
Nov 23 23:21:51.089184 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 23 23:21:51.089189 kernel: Key type asymmetric registered
Nov 23 23:21:51.089194 kernel: Asymmetric key parser 'x509' registered
Nov 23 23:21:51.089199 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 23 23:21:51.089203 kernel: io scheduler mq-deadline registered
Nov 23 23:21:51.089208 kernel: io scheduler kyber registered
Nov 23 23:21:51.089213 kernel: io scheduler bfq registered
Nov 23 23:21:51.089218 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 23 23:21:51.089223 kernel: thunder_xcv, ver 1.0
Nov 23 23:21:51.089228 kernel: thunder_bgx, ver 1.0
Nov 23 23:21:51.089232 kernel: nicpf, ver 1.0
Nov 23 23:21:51.089237 kernel: nicvf, ver 1.0
Nov 23 23:21:51.089351 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 23 23:21:51.089402 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-23T23:21:50 UTC (1763940110)
Nov 23 23:21:51.089409 kernel: efifb: probing for efifb
Nov 23 23:21:51.089415 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Nov 23 23:21:51.089420 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Nov 23 23:21:51.089424 kernel: efifb: scrolling: redraw
Nov 23 23:21:51.089429 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 23 23:21:51.089434 kernel: Console: switching to colour frame buffer device 128x48
Nov 23 23:21:51.089439 kernel: fb0: EFI VGA frame buffer device
Nov 23 23:21:51.089444 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Nov 23 23:21:51.089448 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 23 23:21:51.089453 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Nov 23 23:21:51.089459 kernel: watchdog: NMI not fully supported
Nov 23 23:21:51.089464 kernel: watchdog: Hard watchdog permanently disabled
Nov 23 23:21:51.089468 kernel: NET: Registered PF_INET6 protocol family
Nov 23 23:21:51.089473 kernel: Segment Routing with IPv6
Nov 23 23:21:51.089478 kernel: In-situ OAM (IOAM) with IPv6
Nov 23 23:21:51.089483 kernel: NET: Registered PF_PACKET protocol family
Nov 23 23:21:51.089488 kernel: Key type dns_resolver registered
Nov 23 23:21:51.089492 kernel: registered taskstats version 1
Nov 23 23:21:51.089497 kernel: Loading compiled-in X.509 certificates
Nov 23 23:21:51.089502 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 00c36da29593053a7da9cd3c5945ae69451ce339'
Nov 23 23:21:51.089507 kernel: Demotion targets for Node 0: null
Nov 23 23:21:51.089512 kernel: Key type .fscrypt registered
Nov 23 23:21:51.089517 kernel: Key type fscrypt-provisioning registered
Nov 23 23:21:51.089522 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 23 23:21:51.089526 kernel: ima: Allocated hash algorithm: sha1
Nov 23 23:21:51.089531 kernel: ima: No architecture policies found
Nov 23 23:21:51.089536 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 23 23:21:51.089540 kernel: clk: Disabling unused clocks
Nov 23 23:21:51.089545 kernel: PM: genpd: Disabling unused power domains
Nov 23 23:21:51.089551 kernel: Warning: unable to open an initial console.
Nov 23 23:21:51.089556 kernel: Freeing unused kernel memory: 39552K
Nov 23 23:21:51.089560 kernel: Run /init as init process
Nov 23 23:21:51.089565 kernel: with arguments:
Nov 23 23:21:51.089570 kernel: /init
Nov 23 23:21:51.089574 kernel: with environment:
Nov 23 23:21:51.089579 kernel: HOME=/
Nov 23 23:21:51.089583 kernel: TERM=linux
Nov 23 23:21:51.089589 systemd[1]: Successfully made /usr/ read-only.
Nov 23 23:21:51.089597 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 23 23:21:51.089602 systemd[1]: Detected virtualization microsoft.
Nov 23 23:21:51.089607 systemd[1]: Detected architecture arm64.
Nov 23 23:21:51.089612 systemd[1]: Running in initrd.
Nov 23 23:21:51.089617 systemd[1]: No hostname configured, using default hostname.
Nov 23 23:21:51.089622 systemd[1]: Hostname set to .
Nov 23 23:21:51.089628 systemd[1]: Initializing machine ID from random generator.
Nov 23 23:21:51.089633 systemd[1]: Queued start job for default target initrd.target.
Nov 23 23:21:51.089639 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 23 23:21:51.089644 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 23 23:21:51.089650 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 23 23:21:51.089655 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 23 23:21:51.089660 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 23 23:21:51.089666 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 23 23:21:51.089673 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 23 23:21:51.089678 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 23 23:21:51.089683 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 23 23:21:51.089688 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 23 23:21:51.089694 systemd[1]: Reached target paths.target - Path Units.
Nov 23 23:21:51.089699 systemd[1]: Reached target slices.target - Slice Units.
Nov 23 23:21:51.089704 systemd[1]: Reached target swap.target - Swaps.
Nov 23 23:21:51.089709 systemd[1]: Reached target timers.target - Timer Units.
Nov 23 23:21:51.089715 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 23 23:21:51.089720 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 23 23:21:51.089725 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 23 23:21:51.089731 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 23 23:21:51.089736 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 23 23:21:51.089741 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 23 23:21:51.089746 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 23 23:21:51.089769 systemd[1]: Reached target sockets.target - Socket Units.
Nov 23 23:21:51.089775 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 23 23:21:51.089782 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 23 23:21:51.089787 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 23 23:21:51.089793 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 23 23:21:51.089798 systemd[1]: Starting systemd-fsck-usr.service...
Nov 23 23:21:51.089803 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 23 23:21:51.089809 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 23 23:21:51.089828 systemd-journald[225]: Collecting audit messages is disabled.
Nov 23 23:21:51.089842 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:21:51.089848 systemd-journald[225]: Journal started
Nov 23 23:21:51.089863 systemd-journald[225]: Runtime Journal (/run/log/journal/763008a6c0aa4d3f882bbe7aef0294c8) is 8M, max 78.3M, 70.3M free.
Nov 23 23:21:51.090134 systemd-modules-load[227]: Inserted module 'overlay'
Nov 23 23:21:51.102972 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 23 23:21:51.115809 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 23 23:21:51.127568 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 23 23:21:51.127583 kernel: Bridge firewalling registered
Nov 23 23:21:51.120899 systemd-modules-load[227]: Inserted module 'br_netfilter'
Nov 23 23:21:51.123891 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 23 23:21:51.132599 systemd[1]: Finished systemd-fsck-usr.service.
Nov 23 23:21:51.139217 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 23 23:21:51.148287 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:21:51.154881 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 23 23:21:51.177867 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 23 23:21:51.184857 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 23 23:21:51.202571 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 23 23:21:51.208301 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 23 23:21:51.219730 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 23 23:21:51.231677 systemd-tmpfiles[258]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 23 23:21:51.231767 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 23 23:21:51.247166 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 23 23:21:51.259140 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 23 23:21:51.278853 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 23 23:21:51.284212 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 23 23:21:51.299960 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=4db094b704dd398addf25219e01d6d8f197b31dbf6377199102cc61dad0e4bb2
Nov 23 23:21:51.330780 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 23 23:21:51.354826 systemd-resolved[264]: Positive Trust Anchors:
Nov 23 23:21:51.357948 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 23 23:21:51.371850 kernel: SCSI subsystem initialized
Nov 23 23:21:51.371869 kernel: Loading iSCSI transport class v2.0-870.
Nov 23 23:21:51.357970 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 23 23:21:51.359642 systemd-resolved[264]: Defaulting to hostname 'linux'.
Nov 23 23:21:51.360256 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 23 23:21:51.418269 kernel: iscsi: registered transport (tcp)
Nov 23 23:21:51.401692 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 23 23:21:51.429612 kernel: iscsi: registered transport (qla4xxx)
Nov 23 23:21:51.429623 kernel: QLogic iSCSI HBA Driver
Nov 23 23:21:51.442079 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 23 23:21:51.456717 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 23 23:21:51.468312 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 23 23:21:51.508561 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 23 23:21:51.513829 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 23 23:21:51.573770 kernel: raid6: neonx8 gen() 18533 MB/s
Nov 23 23:21:51.592761 kernel: raid6: neonx4 gen() 18563 MB/s
Nov 23 23:21:51.612761 kernel: raid6: neonx2 gen() 17075 MB/s
Nov 23 23:21:51.631761 kernel: raid6: neonx1 gen() 15062 MB/s
Nov 23 23:21:51.650761 kernel: raid6: int64x8 gen() 10549 MB/s
Nov 23 23:21:51.670857 kernel: raid6: int64x4 gen() 10612 MB/s
Nov 23 23:21:51.689762 kernel: raid6: int64x2 gen() 9000 MB/s
Nov 23 23:21:51.711021 kernel: raid6: int64x1 gen() 7031 MB/s
Nov 23 23:21:51.711031 kernel: raid6: using algorithm neonx4 gen() 18563 MB/s
Nov 23 23:21:51.733865 kernel: raid6: .... xor() 15120 MB/s, rmw enabled
Nov 23 23:21:51.733874 kernel: raid6: using neon recovery algorithm
Nov 23 23:21:51.741919 kernel: xor: measuring software checksum speed
Nov 23 23:21:51.741925 kernel: 8regs : 28635 MB/sec
Nov 23 23:21:51.744470 kernel: 32regs : 28818 MB/sec
Nov 23 23:21:51.747552 kernel: arm64_neon : 37635 MB/sec
Nov 23 23:21:51.750809 kernel: xor: using function: arm64_neon (37635 MB/sec)
Nov 23 23:21:51.788789 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 23 23:21:51.794147 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 23 23:21:51.804292 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 23 23:21:51.834122 systemd-udevd[475]: Using default interface naming scheme 'v255'.
Nov 23 23:21:51.837990 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 23 23:21:51.849364 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 23 23:21:51.884356 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation
Nov 23 23:21:51.911700 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 23 23:21:51.918049 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 23 23:21:51.960523 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 23 23:21:51.970328 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 23 23:21:52.030776 kernel: hv_vmbus: Vmbus version:5.3
Nov 23 23:21:52.039055 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 23:21:52.085631 kernel: hv_vmbus: registering driver hyperv_keyboard
Nov 23 23:21:52.085646 kernel: hv_vmbus: registering driver hid_hyperv
Nov 23 23:21:52.085653 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Nov 23 23:21:52.085660 kernel: hv_vmbus: registering driver hv_netvsc
Nov 23 23:21:52.085672 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 23 23:21:52.085679 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Nov 23 23:21:52.085687 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 23 23:21:52.085693 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Nov 23 23:21:52.039172 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:21:52.055084 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:21:52.105894 kernel: PTP clock support registered
Nov 23 23:21:52.065488 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:21:52.090189 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 23:21:52.138476 kernel: hv_vmbus: registering driver hv_storvsc
Nov 23 23:21:52.138494 kernel: scsi host0: storvsc_host_t
Nov 23 23:21:52.138609 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Nov 23 23:21:52.138685 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Nov 23 23:21:52.138772 kernel: scsi host1: storvsc_host_t
Nov 23 23:21:52.093356 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:21:52.102211 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 23 23:21:52.104834 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:21:52.162059 kernel: hv_utils: Registering HyperV Utility Driver
Nov 23 23:21:52.162089 kernel: hv_vmbus: registering driver hv_utils
Nov 23 23:21:52.169878 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Nov 23 23:21:52.170013 kernel: hv_utils: Heartbeat IC version 3.0
Nov 23 23:21:52.177097 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Nov 23 23:21:52.177223 kernel: hv_netvsc 002248b7-0d9b-0022-48b7-0d9b002248b7 eth0: VF slot 1 added
Nov 23 23:21:52.183840 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Nov 23 23:21:52.183946 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 23 23:21:52.189358 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 23 23:21:52.189489 kernel: hv_utils: Shutdown IC version 3.2
Nov 23 23:21:52.192128 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Nov 23 23:21:52.192240 kernel: hv_utils: TimeSync IC version 4.0
Nov 23 23:21:52.510633 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Nov 23 23:21:52.514371 systemd-resolved[264]: Clock change detected. Flushing caches.
Nov 23 23:21:52.529367 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 23 23:21:52.529485 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#129 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Nov 23 23:21:52.529556 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Nov 23 23:21:52.520268 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:21:52.547852 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 23 23:21:52.547878 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 23 23:21:52.559712 kernel: hv_vmbus: registering driver hv_pci Nov 23 23:21:52.559741 kernel: hv_pci 46e89e04-43f7-4387-ab61-30426cdb6c76: PCI VMBus probing: Using version 0x10004 Nov 23 23:21:52.570321 kernel: hv_pci 46e89e04-43f7-4387-ab61-30426cdb6c76: PCI host bridge to bus 43f7:00 Nov 23 23:21:52.570443 kernel: pci_bus 43f7:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Nov 23 23:21:52.570532 kernel: pci_bus 43f7:00: No busn resource found for root bus, will use [bus 00-ff] Nov 23 23:21:52.580470 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#166 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 23 23:21:52.583260 kernel: pci 43f7:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Nov 23 23:21:52.592405 kernel: pci 43f7:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Nov 23 23:21:52.597308 kernel: pci 43f7:00:02.0: enabling Extended Tags Nov 23 23:21:52.611275 kernel: pci 43f7:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 43f7:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Nov 23 23:21:52.611313 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#177 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 23 23:21:52.620565 kernel: pci_bus 43f7:00: busn_res: [bus 00-ff] end is updated to 00 Nov 23 23:21:52.625281 kernel: pci 43f7:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned 
Nov 23 23:21:52.681325 kernel: mlx5_core 43f7:00:02.0: enabling device (0000 -> 0002) Nov 23 23:21:52.689320 kernel: mlx5_core 43f7:00:02.0: PTM is not supported by PCIe Nov 23 23:21:52.689450 kernel: mlx5_core 43f7:00:02.0: firmware version: 16.30.5006 Nov 23 23:21:52.856815 kernel: hv_netvsc 002248b7-0d9b-0022-48b7-0d9b002248b7 eth0: VF registering: eth1 Nov 23 23:21:52.856980 kernel: mlx5_core 43f7:00:02.0 eth1: joined to eth0 Nov 23 23:21:52.862254 kernel: mlx5_core 43f7:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Nov 23 23:21:52.871257 kernel: mlx5_core 43f7:00:02.0 enP17399s1: renamed from eth1 Nov 23 23:21:53.026138 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Nov 23 23:21:53.132642 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Nov 23 23:21:53.182671 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Nov 23 23:21:53.188616 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Nov 23 23:21:53.205608 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Nov 23 23:21:53.222265 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 23 23:21:53.227576 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 23:21:53.236035 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 23:21:53.245407 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 23:21:53.254553 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 23 23:21:53.279845 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... 
Nov 23 23:21:53.298274 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#139 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Nov 23 23:21:53.300468 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 23 23:21:53.314451 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 23 23:21:53.322256 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Nov 23 23:21:53.332383 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 23 23:21:54.339746 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#136 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Nov 23 23:21:54.352289 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 23 23:21:54.353291 disk-uuid[655]: The operation has completed successfully. Nov 23 23:21:54.417195 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 23 23:21:54.417291 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 23 23:21:54.446838 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 23 23:21:54.464237 sh[820]: Success Nov 23 23:21:54.498261 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 23 23:21:54.498308 kernel: device-mapper: uevent: version 1.0.3 Nov 23 23:21:54.502971 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 23 23:21:54.512256 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Nov 23 23:21:54.770619 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 23 23:21:54.777567 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 23 23:21:54.795488 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Nov 23 23:21:54.817893 kernel: BTRFS: device fsid 5fd06d80-8dd4-4ca0-aa0c-93ddab5f4498 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (838) Nov 23 23:21:54.817918 kernel: BTRFS info (device dm-0): first mount of filesystem 5fd06d80-8dd4-4ca0-aa0c-93ddab5f4498 Nov 23 23:21:54.822272 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:21:55.098288 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 23 23:21:55.098368 kernel: BTRFS info (device dm-0): enabling free space tree Nov 23 23:21:55.133797 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 23 23:21:55.137468 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 23 23:21:55.145066 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 23 23:21:55.147365 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 23 23:21:55.165950 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 23 23:21:55.191257 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (861) Nov 23 23:21:55.201254 kernel: BTRFS info (device sda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:21:55.201288 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:21:55.228123 kernel: BTRFS info (device sda6): turning on async discard Nov 23 23:21:55.228168 kernel: BTRFS info (device sda6): enabling free space tree Nov 23 23:21:55.237409 kernel: BTRFS info (device sda6): last unmount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:21:55.237736 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 23 23:21:55.247907 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 23 23:21:55.293414 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 23 23:21:55.304835 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 23 23:21:55.336778 systemd-networkd[1007]: lo: Link UP Nov 23 23:21:55.336789 systemd-networkd[1007]: lo: Gained carrier Nov 23 23:21:55.337621 systemd-networkd[1007]: Enumeration completed Nov 23 23:21:55.339413 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 23 23:21:55.339721 systemd-networkd[1007]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:21:55.339724 systemd-networkd[1007]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 23:21:55.344332 systemd[1]: Reached target network.target - Network. Nov 23 23:21:55.415257 kernel: mlx5_core 43f7:00:02.0 enP17399s1: Link up Nov 23 23:21:55.445796 systemd-networkd[1007]: enP17399s1: Link UP Nov 23 23:21:55.449130 kernel: hv_netvsc 002248b7-0d9b-0022-48b7-0d9b002248b7 eth0: Data path switched to VF: enP17399s1 Nov 23 23:21:55.445853 systemd-networkd[1007]: eth0: Link UP Nov 23 23:21:55.445934 systemd-networkd[1007]: eth0: Gained carrier Nov 23 23:21:55.445944 systemd-networkd[1007]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:21:55.463522 systemd-networkd[1007]: enP17399s1: Gained carrier Nov 23 23:21:55.491287 systemd-networkd[1007]: eth0: DHCPv4 address 10.200.20.43/24, gateway 10.200.20.1 acquired from 168.63.129.16 Nov 23 23:21:56.417965 ignition[948]: Ignition 2.22.0 Nov 23 23:21:56.420377 ignition[948]: Stage: fetch-offline Nov 23 23:21:56.420479 ignition[948]: no configs at "/usr/lib/ignition/base.d" Nov 23 23:21:56.426893 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Nov 23 23:21:56.420485 ignition[948]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 23 23:21:56.420545 ignition[948]: parsed url from cmdline: "" Nov 23 23:21:56.437572 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 23 23:21:56.420547 ignition[948]: no config URL provided Nov 23 23:21:56.420550 ignition[948]: reading system config file "/usr/lib/ignition/user.ign" Nov 23 23:21:56.420554 ignition[948]: no config at "/usr/lib/ignition/user.ign" Nov 23 23:21:56.420558 ignition[948]: failed to fetch config: resource requires networking Nov 23 23:21:56.420672 ignition[948]: Ignition finished successfully Nov 23 23:21:56.466116 ignition[1018]: Ignition 2.22.0 Nov 23 23:21:56.466120 ignition[1018]: Stage: fetch Nov 23 23:21:56.466335 ignition[1018]: no configs at "/usr/lib/ignition/base.d" Nov 23 23:21:56.466342 ignition[1018]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 23 23:21:56.466405 ignition[1018]: parsed url from cmdline: "" Nov 23 23:21:56.466408 ignition[1018]: no config URL provided Nov 23 23:21:56.466412 ignition[1018]: reading system config file "/usr/lib/ignition/user.ign" Nov 23 23:21:56.466419 ignition[1018]: no config at "/usr/lib/ignition/user.ign" Nov 23 23:21:56.466433 ignition[1018]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 23 23:21:56.587189 ignition[1018]: GET result: OK Nov 23 23:21:56.589588 ignition[1018]: config has been read from IMDS userdata Nov 23 23:21:56.589610 ignition[1018]: parsing config with SHA512: f65761bb60648ff2043460b5839bda004de2006d1fef0c05aee4c1fbbc205456abd5a7571413334cf3e045b92d7b4b316f356c12033bb131681152e5b189fc9d Nov 23 23:21:56.592425 unknown[1018]: fetched base config from "system" Nov 23 23:21:56.592705 ignition[1018]: fetch: fetch complete Nov 23 23:21:56.592430 unknown[1018]: fetched base config from "system" Nov 23 23:21:56.592708 ignition[1018]: fetch: fetch passed Nov 23 
23:21:56.592433 unknown[1018]: fetched user config from "azure" Nov 23 23:21:56.592742 ignition[1018]: Ignition finished successfully Nov 23 23:21:56.596420 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 23 23:21:56.602535 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 23 23:21:56.640536 ignition[1024]: Ignition 2.22.0 Nov 23 23:21:56.640545 ignition[1024]: Stage: kargs Nov 23 23:21:56.644105 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 23 23:21:56.640678 ignition[1024]: no configs at "/usr/lib/ignition/base.d" Nov 23 23:21:56.650937 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 23 23:21:56.640684 ignition[1024]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 23 23:21:56.641150 ignition[1024]: kargs: kargs passed Nov 23 23:21:56.641180 ignition[1024]: Ignition finished successfully Nov 23 23:21:56.685280 ignition[1030]: Ignition 2.22.0 Nov 23 23:21:56.685288 ignition[1030]: Stage: disks Nov 23 23:21:56.688951 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 23 23:21:56.685431 ignition[1030]: no configs at "/usr/lib/ignition/base.d" Nov 23 23:21:56.695264 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 23 23:21:56.685437 ignition[1030]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 23 23:21:56.703400 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 23 23:21:56.685980 ignition[1030]: disks: disks passed Nov 23 23:21:56.711715 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 23 23:21:56.686014 ignition[1030]: Ignition finished successfully Nov 23 23:21:56.719851 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 23:21:56.728174 systemd[1]: Reached target basic.target - Basic System. Nov 23 23:21:56.737042 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Nov 23 23:21:56.814378 systemd-fsck[1038]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Nov 23 23:21:56.817840 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 23 23:21:56.825106 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 23 23:21:57.071267 kernel: EXT4-fs (sda9): mounted filesystem fa3f8731-d4e3-4e51-b6db-fa404206cf07 r/w with ordered data mode. Quota mode: none. Nov 23 23:21:57.071801 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 23 23:21:57.075222 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 23 23:21:57.100695 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 23 23:21:57.107837 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 23 23:21:57.121789 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 23 23:21:57.132225 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 23 23:21:57.132271 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 23:21:57.138102 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 23 23:21:57.151134 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 23 23:21:57.174249 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1052) Nov 23 23:21:57.184169 kernel: BTRFS info (device sda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:21:57.184177 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:21:57.194350 kernel: BTRFS info (device sda6): turning on async discard Nov 23 23:21:57.194379 kernel: BTRFS info (device sda6): enabling free space tree Nov 23 23:21:57.195676 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 23 23:21:57.274338 systemd-networkd[1007]: eth0: Gained IPv6LL Nov 23 23:21:57.734613 coreos-metadata[1054]: Nov 23 23:21:57.734 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 23 23:21:57.740967 coreos-metadata[1054]: Nov 23 23:21:57.740 INFO Fetch successful Nov 23 23:21:57.744833 coreos-metadata[1054]: Nov 23 23:21:57.742 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 23 23:21:57.752905 coreos-metadata[1054]: Nov 23 23:21:57.752 INFO Fetch successful Nov 23 23:21:57.783259 coreos-metadata[1054]: Nov 23 23:21:57.783 INFO wrote hostname ci-4459.2.1-a-856cba2a05 to /sysroot/etc/hostname Nov 23 23:21:57.790293 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 23 23:21:58.030773 initrd-setup-root[1085]: cut: /sysroot/etc/passwd: No such file or directory Nov 23 23:21:58.074468 initrd-setup-root[1092]: cut: /sysroot/etc/group: No such file or directory Nov 23 23:21:58.094694 initrd-setup-root[1099]: cut: /sysroot/etc/shadow: No such file or directory Nov 23 23:21:58.099385 initrd-setup-root[1106]: cut: /sysroot/etc/gshadow: No such file or directory Nov 23 23:21:59.131542 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 23 23:21:59.136929 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 23 23:21:59.157679 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 23 23:21:59.167012 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 23 23:21:59.176280 kernel: BTRFS info (device sda6): last unmount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:21:59.193603 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 23 23:21:59.201817 ignition[1174]: INFO : Ignition 2.22.0 Nov 23 23:21:59.201817 ignition[1174]: INFO : Stage: mount Nov 23 23:21:59.201817 ignition[1174]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 23:21:59.201817 ignition[1174]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 23 23:21:59.201817 ignition[1174]: INFO : mount: mount passed Nov 23 23:21:59.201817 ignition[1174]: INFO : Ignition finished successfully Nov 23 23:21:59.202275 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 23 23:21:59.209850 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 23 23:21:59.235331 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 23 23:21:59.260264 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1186) Nov 23 23:21:59.260290 kernel: BTRFS info (device sda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:21:59.269387 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:21:59.278189 kernel: BTRFS info (device sda6): turning on async discard Nov 23 23:21:59.278207 kernel: BTRFS info (device sda6): enabling free space tree Nov 23 23:21:59.279443 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 23 23:21:59.309013 ignition[1204]: INFO : Ignition 2.22.0 Nov 23 23:21:59.309013 ignition[1204]: INFO : Stage: files Nov 23 23:21:59.309013 ignition[1204]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 23:21:59.309013 ignition[1204]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 23 23:21:59.309013 ignition[1204]: DEBUG : files: compiled without relabeling support, skipping Nov 23 23:21:59.344005 ignition[1204]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 23 23:21:59.344005 ignition[1204]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 23 23:21:59.432820 ignition[1204]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 23 23:21:59.432820 ignition[1204]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 23 23:21:59.443305 ignition[1204]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 23 23:21:59.433058 unknown[1204]: wrote ssh authorized keys file for user: core Nov 23 23:21:59.482915 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 23 23:21:59.490452 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Nov 23 23:21:59.521277 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 23 23:21:59.599100 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 23 23:21:59.606656 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 23 23:21:59.606656 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Nov 23 23:21:59.606656 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 23 23:21:59.606656 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 23 23:21:59.606656 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 23 23:21:59.606656 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 23 23:21:59.606656 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 23 23:21:59.606656 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 23 23:21:59.663361 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 23 23:21:59.663361 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 23 23:21:59.663361 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 23 23:21:59.663361 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 23 23:21:59.663361 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 23 23:21:59.663361 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Nov 23 23:22:00.026641 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 23 23:22:00.352383 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 23 23:22:00.352383 ignition[1204]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 23 23:22:00.368170 ignition[1204]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 23 23:22:00.384222 ignition[1204]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 23 23:22:00.384222 ignition[1204]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 23 23:22:00.384222 ignition[1204]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 23 23:22:00.421161 ignition[1204]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 23 23:22:00.421161 ignition[1204]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 23 23:22:00.421161 ignition[1204]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 23 23:22:00.421161 ignition[1204]: INFO : files: files passed Nov 23 23:22:00.421161 ignition[1204]: INFO : Ignition finished successfully Nov 23 23:22:00.393517 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 23 23:22:00.405543 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 23 23:22:00.441708 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Nov 23 23:22:00.450220 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 23 23:22:00.450305 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 23 23:22:00.481867 initrd-setup-root-after-ignition[1233]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 23 23:22:00.481867 initrd-setup-root-after-ignition[1233]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 23 23:22:00.496285 initrd-setup-root-after-ignition[1237]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 23 23:22:00.489147 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 23 23:22:00.500449 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 23 23:22:00.511292 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 23 23:22:00.551434 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 23 23:22:00.551530 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 23 23:22:00.560671 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 23 23:22:00.569382 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 23 23:22:00.577418 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 23 23:22:00.577934 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 23 23:22:00.613711 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 23 23:22:00.619718 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 23 23:22:00.641987 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 23 23:22:00.646801 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Nov 23 23:22:00.655842 systemd[1]: Stopped target timers.target - Timer Units. Nov 23 23:22:00.663835 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 23 23:22:00.663908 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 23 23:22:00.675522 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 23 23:22:00.679813 systemd[1]: Stopped target basic.target - Basic System. Nov 23 23:22:00.687930 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 23 23:22:00.696120 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 23:22:00.704304 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 23 23:22:00.713036 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 23 23:22:00.722058 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 23 23:22:00.730335 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 23:22:00.739569 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 23 23:22:00.747648 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 23 23:22:00.756382 systemd[1]: Stopped target swap.target - Swaps. Nov 23 23:22:00.763688 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 23 23:22:00.763777 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 23 23:22:00.775083 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 23 23:22:00.779988 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 23:22:00.788683 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 23 23:22:00.788732 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 23:22:00.798463 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Nov 23 23:22:00.798538 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 23 23:22:00.811336 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 23 23:22:00.811414 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 23 23:22:00.816448 systemd[1]: ignition-files.service: Deactivated successfully. Nov 23 23:22:00.816513 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 23 23:22:00.824255 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 23 23:22:00.824320 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 23 23:22:00.914641 ignition[1257]: INFO : Ignition 2.22.0 Nov 23 23:22:00.914641 ignition[1257]: INFO : Stage: umount Nov 23 23:22:00.914641 ignition[1257]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 23:22:00.914641 ignition[1257]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 23 23:22:00.914641 ignition[1257]: INFO : umount: umount passed Nov 23 23:22:00.914641 ignition[1257]: INFO : Ignition finished successfully Nov 23 23:22:00.835174 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 23 23:22:00.848253 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 23 23:22:00.848361 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 23:22:00.858826 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 23 23:22:00.864821 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 23 23:22:00.864923 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 23:22:00.887012 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 23 23:22:00.887114 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 23:22:00.904657 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Nov 23 23:22:00.904747 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 23 23:22:00.916141 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 23 23:22:00.916219 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 23 23:22:00.921195 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 23 23:22:00.923130 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 23 23:22:00.923169 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 23 23:22:00.932233 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 23 23:22:00.932288 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 23 23:22:00.938979 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 23 23:22:00.939008 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 23 23:22:00.946720 systemd[1]: Stopped target network.target - Network.
Nov 23 23:22:00.955580 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 23 23:22:00.955621 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 23 23:22:00.965159 systemd[1]: Stopped target paths.target - Path Units.
Nov 23 23:22:00.973305 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 23 23:22:00.977261 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 23 23:22:00.982686 systemd[1]: Stopped target slices.target - Slice Units.
Nov 23 23:22:00.990543 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 23 23:22:00.999035 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 23 23:22:00.999095 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 23 23:22:01.006353 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 23 23:22:01.006384 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 23 23:22:01.014316 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 23 23:22:01.014361 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 23 23:22:01.022545 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 23 23:22:01.022571 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 23 23:22:01.030886 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 23 23:22:01.038964 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 23 23:22:01.065892 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 23 23:22:01.241994 kernel: hv_netvsc 002248b7-0d9b-0022-48b7-0d9b002248b7 eth0: Data path switched from VF: enP17399s1
Nov 23 23:22:01.066011 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 23 23:22:01.079451 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Nov 23 23:22:01.079658 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 23 23:22:01.079743 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 23 23:22:01.091102 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Nov 23 23:22:01.091474 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 23 23:22:01.099520 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 23 23:22:01.099554 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 23 23:22:01.109346 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 23 23:22:01.116509 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 23 23:22:01.116554 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 23 23:22:01.125956 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 23 23:22:01.125991 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 23 23:22:01.147877 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 23 23:22:01.147915 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 23 23:22:01.152363 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 23 23:22:01.152399 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 23 23:22:01.164340 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 23 23:22:01.172912 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 23 23:22:01.172957 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 23 23:22:01.196272 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 23 23:22:01.196415 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 23 23:22:01.205106 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 23 23:22:01.205137 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 23 23:22:01.213751 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 23 23:22:01.213774 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 23 23:22:01.223741 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 23 23:22:01.223796 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 23 23:22:01.242071 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 23 23:22:01.242119 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 23 23:22:01.250744 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 23 23:22:01.250784 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 23 23:22:01.265103 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 23 23:22:01.280621 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 23 23:22:01.280676 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 23 23:22:01.294121 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 23 23:22:01.294159 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 23 23:22:01.304591 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 23:22:01.508889 systemd-journald[225]: Received SIGTERM from PID 1 (systemd).
Nov 23 23:22:01.304634 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:22:01.314625 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Nov 23 23:22:01.314663 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 23 23:22:01.314687 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 23 23:22:01.314949 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 23 23:22:01.315030 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 23 23:22:01.322398 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 23 23:22:01.322468 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 23 23:22:01.332369 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 23 23:22:01.332432 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 23 23:22:01.341766 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 23 23:22:01.371154 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 23 23:22:01.371247 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 23 23:22:01.380348 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 23 23:22:01.408902 systemd[1]: Switching root.
Nov 23 23:22:01.574271 systemd-journald[225]: Journal stopped
Nov 23 23:22:06.123125 kernel: SELinux: policy capability network_peer_controls=1
Nov 23 23:22:06.123144 kernel: SELinux: policy capability open_perms=1
Nov 23 23:22:06.123152 kernel: SELinux: policy capability extended_socket_class=1
Nov 23 23:22:06.123157 kernel: SELinux: policy capability always_check_network=0
Nov 23 23:22:06.123164 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 23 23:22:06.123170 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 23 23:22:06.123176 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 23 23:22:06.123181 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 23 23:22:06.123186 kernel: SELinux: policy capability userspace_initial_context=0
Nov 23 23:22:06.123191 kernel: audit: type=1403 audit(1763940122.529:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 23 23:22:06.123198 systemd[1]: Successfully loaded SELinux policy in 139.714ms.
Nov 23 23:22:06.123205 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.217ms.
Nov 23 23:22:06.123212 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 23 23:22:06.123218 systemd[1]: Detected virtualization microsoft.
Nov 23 23:22:06.123224 systemd[1]: Detected architecture arm64.
Nov 23 23:22:06.123230 systemd[1]: Detected first boot.
Nov 23 23:22:06.123237 systemd[1]: Hostname set to .
Nov 23 23:22:06.123252 systemd[1]: Initializing machine ID from random generator.
Nov 23 23:22:06.123258 zram_generator::config[1300]: No configuration found.
Nov 23 23:22:06.123265 kernel: NET: Registered PF_VSOCK protocol family
Nov 23 23:22:06.123270 systemd[1]: Populated /etc with preset unit settings.
Nov 23 23:22:06.123276 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Nov 23 23:22:06.123282 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 23 23:22:06.123289 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 23 23:22:06.123295 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 23 23:22:06.123301 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 23 23:22:06.123307 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 23 23:22:06.123313 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 23 23:22:06.123319 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 23 23:22:06.123325 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 23 23:22:06.123332 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 23 23:22:06.123338 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 23 23:22:06.123343 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 23 23:22:06.123349 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 23 23:22:06.123355 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 23 23:22:06.123361 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 23 23:22:06.123367 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 23 23:22:06.123373 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 23 23:22:06.123380 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 23 23:22:06.123386 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Nov 23 23:22:06.123394 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 23 23:22:06.123400 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 23 23:22:06.123406 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 23 23:22:06.123412 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 23 23:22:06.123418 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 23 23:22:06.123424 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 23 23:22:06.123431 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 23 23:22:06.123437 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 23 23:22:06.123443 systemd[1]: Reached target slices.target - Slice Units.
Nov 23 23:22:06.123449 systemd[1]: Reached target swap.target - Swaps.
Nov 23 23:22:06.123455 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 23 23:22:06.123461 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 23 23:22:06.123469 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 23 23:22:06.123475 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 23 23:22:06.123481 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 23 23:22:06.123487 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 23 23:22:06.123493 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 23 23:22:06.123499 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 23 23:22:06.123505 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 23 23:22:06.123512 systemd[1]: Mounting media.mount - External Media Directory...
Nov 23 23:22:06.123518 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 23 23:22:06.123524 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 23 23:22:06.123530 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 23 23:22:06.123537 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 23 23:22:06.123543 systemd[1]: Reached target machines.target - Containers.
Nov 23 23:22:06.123549 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 23 23:22:06.123555 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 23 23:22:06.123563 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 23 23:22:06.123569 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 23 23:22:06.123575 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 23 23:22:06.123582 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 23 23:22:06.123588 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 23 23:22:06.123594 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 23 23:22:06.123600 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 23 23:22:06.123606 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 23 23:22:06.123613 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 23 23:22:06.123620 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 23 23:22:06.123626 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 23 23:22:06.123632 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 23 23:22:06.123638 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 23 23:22:06.123644 kernel: fuse: init (API version 7.41)
Nov 23 23:22:06.123650 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 23 23:22:06.123656 kernel: loop: module loaded
Nov 23 23:22:06.123662 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 23 23:22:06.123669 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 23 23:22:06.123675 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 23 23:22:06.123681 kernel: ACPI: bus type drm_connector registered
Nov 23 23:22:06.123700 systemd-journald[1404]: Collecting audit messages is disabled.
Nov 23 23:22:06.123715 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 23 23:22:06.123723 systemd-journald[1404]: Journal started
Nov 23 23:22:06.123737 systemd-journald[1404]: Runtime Journal (/run/log/journal/d7a7f1b7120642f68a594d3dc9be8a22) is 8M, max 78.3M, 70.3M free.
Nov 23 23:22:05.350799 systemd[1]: Queued start job for default target multi-user.target.
Nov 23 23:22:05.355680 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 23 23:22:05.356040 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 23 23:22:05.356322 systemd[1]: systemd-journald.service: Consumed 2.378s CPU time.
Nov 23 23:22:06.148577 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 23 23:22:06.158319 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 23 23:22:06.158352 systemd[1]: Stopped verity-setup.service.
Nov 23 23:22:06.169010 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 23 23:22:06.169607 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 23 23:22:06.173787 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 23 23:22:06.178254 systemd[1]: Mounted media.mount - External Media Directory.
Nov 23 23:22:06.182167 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 23 23:22:06.186563 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 23 23:22:06.191000 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 23 23:22:06.196004 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 23 23:22:06.201053 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 23 23:22:06.206333 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 23 23:22:06.206469 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 23 23:22:06.212710 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 23 23:22:06.212842 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 23 23:22:06.217485 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 23 23:22:06.217613 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 23 23:22:06.222234 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 23 23:22:06.222363 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 23 23:22:06.227696 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 23 23:22:06.227831 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 23 23:22:06.232559 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 23 23:22:06.232681 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 23 23:22:06.237374 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 23 23:22:06.242392 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 23 23:22:06.247713 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 23 23:22:06.260073 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 23 23:22:06.265721 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 23 23:22:06.273659 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 23 23:22:06.280955 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 23 23:22:06.280982 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 23 23:22:06.286032 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 23 23:22:06.292175 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 23 23:22:06.296379 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 23:22:06.367312 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 23 23:22:06.372815 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 23 23:22:06.378520 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 23 23:22:06.382741 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 23 23:22:06.390381 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 23 23:22:06.393354 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 23 23:22:06.400445 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 23 23:22:06.411389 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 23 23:22:06.422481 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 23 23:22:06.429972 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 23 23:22:06.435009 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 23 23:22:06.439934 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 23 23:22:06.447318 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 23 23:22:06.453086 systemd-journald[1404]: Time spent on flushing to /var/log/journal/d7a7f1b7120642f68a594d3dc9be8a22 is 17.727ms for 937 entries.
Nov 23 23:22:06.453086 systemd-journald[1404]: System Journal (/var/log/journal/d7a7f1b7120642f68a594d3dc9be8a22) is 8M, max 2.6G, 2.6G free.
Nov 23 23:22:06.562889 systemd-journald[1404]: Received client request to flush runtime journal.
Nov 23 23:22:06.562942 kernel: loop0: detected capacity change from 0 to 100632
Nov 23 23:22:06.459563 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 23 23:22:06.468469 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 23 23:22:06.478292 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 23 23:22:06.564584 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 23 23:22:06.587679 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 23 23:22:06.590014 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 23 23:22:06.639586 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 23 23:22:06.645385 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 23 23:22:06.712906 systemd-tmpfiles[1453]: ACLs are not supported, ignoring.
Nov 23 23:22:06.712920 systemd-tmpfiles[1453]: ACLs are not supported, ignoring.
Nov 23 23:22:06.715908 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 23 23:22:07.019289 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 23 23:22:07.034103 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 23 23:22:07.040288 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 23 23:22:07.063544 systemd-udevd[1459]: Using default interface naming scheme 'v255'.
Nov 23 23:22:07.074267 kernel: loop1: detected capacity change from 0 to 119840
Nov 23 23:22:07.246610 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 23 23:22:07.254885 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 23 23:22:07.306425 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 23 23:22:07.350983 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Nov 23 23:22:07.383263 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#75 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Nov 23 23:22:07.394637 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 23 23:22:07.408253 kernel: mousedev: PS/2 mouse device common for all mice
Nov 23 23:22:07.464249 kernel: hv_vmbus: registering driver hv_balloon
Nov 23 23:22:07.464303 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Nov 23 23:22:07.468267 kernel: hv_balloon: Memory hot add disabled on ARM64
Nov 23 23:22:07.468331 kernel: loop2: detected capacity change from 0 to 27936
Nov 23 23:22:07.475259 kernel: hv_vmbus: registering driver hyperv_fb
Nov 23 23:22:07.485259 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Nov 23 23:22:07.485308 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Nov 23 23:22:07.504342 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:22:07.511736 kernel: Console: switching to colour dummy device 80x25
Nov 23 23:22:07.515270 kernel: Console: switching to colour frame buffer device 128x48
Nov 23 23:22:07.522442 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 23:22:07.522573 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:22:07.533378 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:22:07.541175 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 23:22:07.541325 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:22:07.547046 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:22:07.595562 systemd-networkd[1465]: lo: Link UP
Nov 23 23:22:07.595567 systemd-networkd[1465]: lo: Gained carrier
Nov 23 23:22:07.596441 systemd-networkd[1465]: Enumeration completed
Nov 23 23:22:07.596589 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 23 23:22:07.597015 systemd-networkd[1465]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:22:07.597023 systemd-networkd[1465]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 23 23:22:07.606106 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 23 23:22:07.615319 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 23 23:22:07.656048 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Nov 23 23:22:07.662749 kernel: mlx5_core 43f7:00:02.0 enP17399s1: Link up
Nov 23 23:22:07.665702 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 23 23:22:07.686254 kernel: hv_netvsc 002248b7-0d9b-0022-48b7-0d9b002248b7 eth0: Data path switched to VF: enP17399s1
Nov 23 23:22:07.687165 systemd-networkd[1465]: enP17399s1: Link UP
Nov 23 23:22:07.687490 systemd-networkd[1465]: eth0: Link UP
Nov 23 23:22:07.687496 systemd-networkd[1465]: eth0: Gained carrier
Nov 23 23:22:07.687509 systemd-networkd[1465]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:22:07.689288 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 23 23:22:07.695627 systemd-networkd[1465]: enP17399s1: Gained carrier
Nov 23 23:22:07.706335 kernel: MACsec IEEE 802.1AE
Nov 23 23:22:07.706420 systemd-networkd[1465]: eth0: DHCPv4 address 10.200.20.43/24, gateway 10.200.20.1 acquired from 168.63.129.16
Nov 23 23:22:07.719387 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 23 23:22:07.847276 kernel: loop3: detected capacity change from 0 to 211168
Nov 23 23:22:07.882286 kernel: loop4: detected capacity change from 0 to 100632
Nov 23 23:22:07.897266 kernel: loop5: detected capacity change from 0 to 119840
Nov 23 23:22:07.914278 kernel: loop6: detected capacity change from 0 to 27936
Nov 23 23:22:07.927254 kernel: loop7: detected capacity change from 0 to 211168
Nov 23 23:22:07.938556 (sd-merge)[1603]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Nov 23 23:22:07.938898 (sd-merge)[1603]: Merged extensions into '/usr'.
Nov 23 23:22:07.941968 systemd[1]: Reload requested from client PID 1438 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 23 23:22:07.942055 systemd[1]: Reloading...
Nov 23 23:22:07.993274 zram_generator::config[1637]: No configuration found.
Nov 23 23:22:08.155690 systemd[1]: Reloading finished in 213 ms.
Nov 23 23:22:08.181799 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 23 23:22:08.187841 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:22:08.199041 systemd[1]: Starting ensure-sysext.service...
Nov 23 23:22:08.204343 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 23 23:22:08.217034 systemd[1]: Reload requested from client PID 1691 ('systemctl') (unit ensure-sysext.service)...
Nov 23 23:22:08.217047 systemd[1]: Reloading...
Nov 23 23:22:08.252877 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 23 23:22:08.253202 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 23 23:22:08.253447 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 23 23:22:08.253581 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 23 23:22:08.253989 systemd-tmpfiles[1692]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 23 23:22:08.254123 systemd-tmpfiles[1692]: ACLs are not supported, ignoring.
Nov 23 23:22:08.254149 systemd-tmpfiles[1692]: ACLs are not supported, ignoring.
Nov 23 23:22:08.256263 zram_generator::config[1723]: No configuration found.
Nov 23 23:22:08.276673 systemd-tmpfiles[1692]: Detected autofs mount point /boot during canonicalization of boot.
Nov 23 23:22:08.276680 systemd-tmpfiles[1692]: Skipping /boot
Nov 23 23:22:08.281675 systemd-tmpfiles[1692]: Detected autofs mount point /boot during canonicalization of boot.
Nov 23 23:22:08.281687 systemd-tmpfiles[1692]: Skipping /boot
Nov 23 23:22:08.413126 systemd[1]: Reloading finished in 195 ms.
Nov 23 23:22:08.438838 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 23 23:22:08.449327 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 23 23:22:08.465416 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 23 23:22:08.471437 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 23 23:22:08.484465 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 23 23:22:08.490402 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 23 23:22:08.496717 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 23 23:22:08.503003 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 23 23:22:08.512525 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 23 23:22:08.519406 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 23:22:08.523524 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:22:08.523606 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:22:08.525866 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 23:22:08.528273 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 23:22:08.533622 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 23:22:08.533739 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 23:22:08.539132 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 23:22:08.539258 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 23:22:08.548352 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:22:08.550445 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 23:22:08.560023 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 23:22:08.567044 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 23:22:08.571533 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:22:08.571627 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Nov 23 23:22:08.572522 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 23 23:22:08.578042 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 23 23:22:08.583693 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 23:22:08.583801 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 23:22:08.589024 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 23:22:08.589131 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 23:22:08.594374 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 23:22:08.594484 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 23:22:08.599324 systemd-resolved[1784]: Positive Trust Anchors: Nov 23 23:22:08.599526 systemd-resolved[1784]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 23:22:08.599549 systemd-resolved[1784]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 23:22:08.605647 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:22:08.607428 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 23:22:08.614335 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 23:22:08.619651 systemd-resolved[1784]: Using system hostname 'ci-4459.2.1-a-856cba2a05'. 
Nov 23 23:22:08.622159 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 23:22:08.627599 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 23:22:08.631782 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:22:08.631864 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:22:08.631965 systemd[1]: Reached target time-set.target - System Time Set. Nov 23 23:22:08.637013 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 23 23:22:08.641946 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 23:22:08.642080 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 23:22:08.646868 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 23:22:08.647156 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 23:22:08.651762 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 23:22:08.651868 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 23:22:08.657443 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 23:22:08.657552 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 23:22:08.664366 systemd[1]: Finished ensure-sysext.service. Nov 23 23:22:08.669766 systemd[1]: Reached target network.target - Network. Nov 23 23:22:08.671838 augenrules[1829]: No rules Nov 23 23:22:08.673984 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Nov 23 23:22:08.678610 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 23:22:08.678658 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 23:22:08.678840 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 23:22:08.680302 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 23:22:09.076632 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 23 23:22:09.082090 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 23 23:22:09.562415 systemd-networkd[1465]: eth0: Gained IPv6LL Nov 23 23:22:09.564538 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 23 23:22:09.569995 systemd[1]: Reached target network-online.target - Network is Online. Nov 23 23:22:11.624165 ldconfig[1432]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 23 23:22:11.635834 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 23 23:22:11.642016 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 23 23:22:11.654263 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 23 23:22:11.658932 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 23:22:11.663600 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 23 23:22:11.668512 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Nov 23 23:22:11.673589 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 23 23:22:11.677804 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 23 23:22:11.683180 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 23 23:22:11.688471 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 23 23:22:11.688495 systemd[1]: Reached target paths.target - Path Units. Nov 23 23:22:11.692394 systemd[1]: Reached target timers.target - Timer Units. Nov 23 23:22:11.697208 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 23 23:22:11.703888 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 23 23:22:11.716900 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 23 23:22:11.722212 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 23 23:22:11.727475 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 23 23:22:11.733249 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 23 23:22:11.737800 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 23 23:22:11.743036 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 23 23:22:11.747555 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 23:22:11.751444 systemd[1]: Reached target basic.target - Basic System. Nov 23 23:22:11.755290 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 23 23:22:11.755308 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 23 23:22:11.756980 systemd[1]: Starting chronyd.service - NTP client/server... 
Nov 23 23:22:11.770328 systemd[1]: Starting containerd.service - containerd container runtime... Nov 23 23:22:11.776370 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 23 23:22:11.783630 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 23 23:22:11.791784 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 23 23:22:11.798628 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 23 23:22:11.810104 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 23 23:22:11.811605 jq[1850]: false Nov 23 23:22:11.814319 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 23 23:22:11.815604 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Nov 23 23:22:11.819745 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Nov 23 23:22:11.822231 KVP[1852]: KVP starting; pid is:1852 Nov 23 23:22:11.822328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:22:11.823832 chronyd[1842]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Nov 23 23:22:11.830934 kernel: hv_utils: KVP IC version 4.0 Nov 23 23:22:11.830308 KVP[1852]: KVP LIC Version: 3.1 Nov 23 23:22:11.829348 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 23 23:22:11.835435 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 23 23:22:11.844811 chronyd[1842]: Timezone right/UTC failed leap second check, ignoring Nov 23 23:22:11.845132 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Nov 23 23:22:11.849363 chronyd[1842]: Loaded seccomp filter (level 2) Nov 23 23:22:11.851013 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 23 23:22:11.859779 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 23 23:22:11.861605 extend-filesystems[1851]: Found /dev/sda6 Nov 23 23:22:11.873966 extend-filesystems[1851]: Found /dev/sda9 Nov 23 23:22:11.873966 extend-filesystems[1851]: Checking size of /dev/sda9 Nov 23 23:22:11.871574 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 23 23:22:11.878792 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 23 23:22:11.879089 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 23 23:22:11.890357 systemd[1]: Starting update-engine.service - Update Engine... Nov 23 23:22:11.894827 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 23 23:22:11.902108 systemd[1]: Started chronyd.service - NTP client/server. Nov 23 23:22:11.903593 jq[1875]: true Nov 23 23:22:11.908373 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 23 23:22:11.915846 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 23 23:22:11.915989 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 23 23:22:11.917533 systemd[1]: motdgen.service: Deactivated successfully. Nov 23 23:22:11.917678 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 23 23:22:11.924574 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Nov 23 23:22:11.924892 extend-filesystems[1851]: Old size kept for /dev/sda9 Nov 23 23:22:11.925127 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 23 23:22:11.946641 update_engine[1874]: I20251123 23:22:11.944767 1874 main.cc:92] Flatcar Update Engine starting Nov 23 23:22:11.945039 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 23 23:22:11.947104 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 23 23:22:11.955300 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 23 23:22:11.970047 (ntainerd)[1892]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 23 23:22:11.977624 jq[1889]: true Nov 23 23:22:12.002696 tar[1885]: linux-arm64/LICENSE Nov 23 23:22:12.002897 tar[1885]: linux-arm64/helm Nov 23 23:22:12.006074 systemd-logind[1871]: New seat seat0. Nov 23 23:22:12.008615 systemd-logind[1871]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 23 23:22:12.008769 systemd[1]: Started systemd-logind.service - User Login Management. Nov 23 23:22:12.084365 dbus-daemon[1845]: [system] SELinux support is enabled Nov 23 23:22:12.085606 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 23 23:22:12.091341 update_engine[1874]: I20251123 23:22:12.090780 1874 update_check_scheduler.cc:74] Next update check in 7m29s Nov 23 23:22:12.094131 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 23 23:22:12.094681 dbus-daemon[1845]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 23 23:22:12.094154 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Nov 23 23:22:12.101610 bash[1929]: Updated "/home/core/.ssh/authorized_keys" Nov 23 23:22:12.101350 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 23 23:22:12.101364 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 23 23:22:12.108866 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 23 23:22:12.120095 systemd[1]: Started update-engine.service - Update Engine. Nov 23 23:22:12.127093 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 23 23:22:12.130432 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 23 23:22:12.162315 coreos-metadata[1844]: Nov 23 23:22:12.162 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 23 23:22:12.173418 coreos-metadata[1844]: Nov 23 23:22:12.173 INFO Fetch successful Nov 23 23:22:12.173582 coreos-metadata[1844]: Nov 23 23:22:12.173 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Nov 23 23:22:12.177661 coreos-metadata[1844]: Nov 23 23:22:12.177 INFO Fetch successful Nov 23 23:22:12.179291 coreos-metadata[1844]: Nov 23 23:22:12.179 INFO Fetching http://168.63.129.16/machine/eb48dea5-c80c-45fe-9dc6-2193cfca24cb/13aeba13%2Df815%2D4fc2%2Da489%2D78f7cde46fbb.%5Fci%2D4459.2.1%2Da%2D856cba2a05?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Nov 23 23:22:12.209301 coreos-metadata[1844]: Nov 23 23:22:12.206 INFO Fetch successful Nov 23 23:22:12.209301 coreos-metadata[1844]: Nov 23 23:22:12.207 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Nov 23 23:22:12.216255 coreos-metadata[1844]: Nov 23 23:22:12.216 INFO Fetch successful Nov 23 23:22:12.249614 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Nov 23 23:22:12.257421 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 23 23:22:12.409993 locksmithd[1966]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 23 23:22:12.439264 sshd_keygen[1880]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 23 23:22:12.463465 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 23 23:22:12.470018 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 23 23:22:12.477348 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Nov 23 23:22:12.500293 systemd[1]: issuegen.service: Deactivated successfully. Nov 23 23:22:12.500442 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 23 23:22:12.508458 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 23 23:22:12.529626 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Nov 23 23:22:12.539553 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 23 23:22:12.548713 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 23 23:22:12.559935 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 23 23:22:12.570925 systemd[1]: Reached target getty.target - Login Prompts. Nov 23 23:22:12.592581 tar[1885]: linux-arm64/README.md Nov 23 23:22:12.601879 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 23 23:22:12.698015 containerd[1892]: time="2025-11-23T23:22:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 23 23:22:12.698471 containerd[1892]: time="2025-11-23T23:22:12.698443996Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 23 23:22:12.705255 containerd[1892]: time="2025-11-23T23:22:12.705004372Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.696µs" Nov 23 23:22:12.705255 containerd[1892]: time="2025-11-23T23:22:12.705037220Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 23 23:22:12.705255 containerd[1892]: time="2025-11-23T23:22:12.705049924Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 23 23:22:12.705255 containerd[1892]: time="2025-11-23T23:22:12.705165468Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 23 23:22:12.705255 containerd[1892]: time="2025-11-23T23:22:12.705176668Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 23 23:22:12.705255 containerd[1892]: time="2025-11-23T23:22:12.705191116Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 23:22:12.705255 containerd[1892]: time="2025-11-23T23:22:12.705226332Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 23:22:12.705255 containerd[1892]: time="2025-11-23T23:22:12.705232228Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 
23:22:12.705403 containerd[1892]: time="2025-11-23T23:22:12.705384284Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 23:22:12.705403 containerd[1892]: time="2025-11-23T23:22:12.705398468Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 23:22:12.705436 containerd[1892]: time="2025-11-23T23:22:12.705411100Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 23:22:12.705436 containerd[1892]: time="2025-11-23T23:22:12.705417308Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 23 23:22:12.705496 containerd[1892]: time="2025-11-23T23:22:12.705476084Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 23 23:22:12.705625 containerd[1892]: time="2025-11-23T23:22:12.705610332Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 23:22:12.705646 containerd[1892]: time="2025-11-23T23:22:12.705633204Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 23:22:12.705646 containerd[1892]: time="2025-11-23T23:22:12.705639844Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 23 23:22:12.705680 containerd[1892]: time="2025-11-23T23:22:12.705670044Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 23 23:22:12.705835 
containerd[1892]: time="2025-11-23T23:22:12.705822700Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 23 23:22:12.705893 containerd[1892]: time="2025-11-23T23:22:12.705881476Z" level=info msg="metadata content store policy set" policy=shared Nov 23 23:22:12.719075 containerd[1892]: time="2025-11-23T23:22:12.719042076Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 23 23:22:12.719150 containerd[1892]: time="2025-11-23T23:22:12.719090380Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 23 23:22:12.719150 containerd[1892]: time="2025-11-23T23:22:12.719101860Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 23 23:22:12.719150 containerd[1892]: time="2025-11-23T23:22:12.719110092Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 23 23:22:12.719150 containerd[1892]: time="2025-11-23T23:22:12.719118924Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 23 23:22:12.719150 containerd[1892]: time="2025-11-23T23:22:12.719125484Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 23 23:22:12.719150 containerd[1892]: time="2025-11-23T23:22:12.719133204Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 23 23:22:12.719150 containerd[1892]: time="2025-11-23T23:22:12.719140676Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 23 23:22:12.719150 containerd[1892]: time="2025-11-23T23:22:12.719148340Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 23 23:22:12.719761 containerd[1892]: 
time="2025-11-23T23:22:12.719155380Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 23 23:22:12.719761 containerd[1892]: time="2025-11-23T23:22:12.719161164Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 23 23:22:12.719761 containerd[1892]: time="2025-11-23T23:22:12.719168996Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 23 23:22:12.719761 containerd[1892]: time="2025-11-23T23:22:12.719301308Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 23 23:22:12.719761 containerd[1892]: time="2025-11-23T23:22:12.719317252Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 23 23:22:12.719761 containerd[1892]: time="2025-11-23T23:22:12.719326900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 23 23:22:12.719761 containerd[1892]: time="2025-11-23T23:22:12.719333932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 23 23:22:12.719761 containerd[1892]: time="2025-11-23T23:22:12.719343628Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 23 23:22:12.719761 containerd[1892]: time="2025-11-23T23:22:12.719350868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 23 23:22:12.719761 containerd[1892]: time="2025-11-23T23:22:12.719358180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 23 23:22:12.719761 containerd[1892]: time="2025-11-23T23:22:12.719365036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 23 23:22:12.719761 containerd[1892]: time="2025-11-23T23:22:12.719372500Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 23 23:22:12.719761 containerd[1892]: time="2025-11-23T23:22:12.719379044Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 23 23:22:12.719761 containerd[1892]: time="2025-11-23T23:22:12.719385508Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 23 23:22:12.719761 containerd[1892]: time="2025-11-23T23:22:12.719422716Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 23 23:22:12.719968 containerd[1892]: time="2025-11-23T23:22:12.719433444Z" level=info msg="Start snapshots syncer" Nov 23 23:22:12.719968 containerd[1892]: time="2025-11-23T23:22:12.719447948Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 23 23:22:12.719968 containerd[1892]: time="2025-11-23T23:22:12.719602852Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 23 23:22:12.720052 containerd[1892]: time="2025-11-23T23:22:12.719634092Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 23 23:22:12.720052 containerd[1892]: time="2025-11-23T23:22:12.719660292Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 23 23:22:12.720052 containerd[1892]: time="2025-11-23T23:22:12.719741740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 23 23:22:12.720052 containerd[1892]: time="2025-11-23T23:22:12.719754588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 23 23:22:12.720052 containerd[1892]: time="2025-11-23T23:22:12.719762500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 23 23:22:12.720052 containerd[1892]: time="2025-11-23T23:22:12.719768652Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 23 23:22:12.720052 containerd[1892]: time="2025-11-23T23:22:12.719776964Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 23 23:22:12.720052 containerd[1892]: time="2025-11-23T23:22:12.719783540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 23 23:22:12.720052 containerd[1892]: time="2025-11-23T23:22:12.719790084Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 23 23:22:12.720052 containerd[1892]: time="2025-11-23T23:22:12.719809692Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 23 23:22:12.720052 containerd[1892]: time="2025-11-23T23:22:12.719816820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 23 23:22:12.720052 containerd[1892]: time="2025-11-23T23:22:12.719822900Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 23 23:22:12.720052 containerd[1892]: time="2025-11-23T23:22:12.719837900Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 23:22:12.720052 containerd[1892]: time="2025-11-23T23:22:12.719846860Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 23:22:12.720217 containerd[1892]: time="2025-11-23T23:22:12.719852044Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 23:22:12.720217 containerd[1892]: time="2025-11-23T23:22:12.719857468Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 23:22:12.720217 containerd[1892]: time="2025-11-23T23:22:12.719862084Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 23 23:22:12.720217 containerd[1892]: time="2025-11-23T23:22:12.719867932Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 23 23:22:12.720217 containerd[1892]: time="2025-11-23T23:22:12.719874492Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 23 23:22:12.720217 containerd[1892]: time="2025-11-23T23:22:12.719885964Z" level=info msg="runtime interface created" Nov 23 23:22:12.720217 containerd[1892]: time="2025-11-23T23:22:12.719889204Z" level=info msg="created NRI interface" Nov 23 23:22:12.720217 containerd[1892]: time="2025-11-23T23:22:12.719893996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 23 23:22:12.720217 containerd[1892]: time="2025-11-23T23:22:12.719902124Z" level=info msg="Connect containerd service" Nov 23 23:22:12.720217 containerd[1892]: time="2025-11-23T23:22:12.719914508Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 23 23:22:12.720980 
containerd[1892]: time="2025-11-23T23:22:12.720472100Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 23:22:12.745232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:22:12.916424 (kubelet)[2046]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:22:13.164309 containerd[1892]: time="2025-11-23T23:22:13.163878492Z" level=info msg="Start subscribing containerd event" Nov 23 23:22:13.164309 containerd[1892]: time="2025-11-23T23:22:13.163933972Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 23 23:22:13.164309 containerd[1892]: time="2025-11-23T23:22:13.163962900Z" level=info msg="Start recovering state" Nov 23 23:22:13.164309 containerd[1892]: time="2025-11-23T23:22:13.163986012Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 23 23:22:13.164309 containerd[1892]: time="2025-11-23T23:22:13.164042420Z" level=info msg="Start event monitor" Nov 23 23:22:13.164309 containerd[1892]: time="2025-11-23T23:22:13.164052764Z" level=info msg="Start cni network conf syncer for default" Nov 23 23:22:13.164309 containerd[1892]: time="2025-11-23T23:22:13.164057708Z" level=info msg="Start streaming server" Nov 23 23:22:13.164309 containerd[1892]: time="2025-11-23T23:22:13.164063828Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 23 23:22:13.164309 containerd[1892]: time="2025-11-23T23:22:13.164070012Z" level=info msg="runtime interface starting up..." Nov 23 23:22:13.164309 containerd[1892]: time="2025-11-23T23:22:13.164075172Z" level=info msg="starting plugins..." 
Nov 23 23:22:13.164309 containerd[1892]: time="2025-11-23T23:22:13.164085388Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 23 23:22:13.164309 containerd[1892]: time="2025-11-23T23:22:13.164208004Z" level=info msg="containerd successfully booted in 0.466548s" Nov 23 23:22:13.164332 systemd[1]: Started containerd.service - containerd container runtime. Nov 23 23:22:13.171502 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 23 23:22:13.176897 systemd[1]: Startup finished in 1.629s (kernel) + 11.455s (initrd) + 10.785s (userspace) = 23.869s. Nov 23 23:22:13.300088 kubelet[2046]: E1123 23:22:13.300056 2046 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:22:13.302117 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:22:13.302222 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:22:13.303218 systemd[1]: kubelet.service: Consumed 551ms CPU time, 257.6M memory peak. Nov 23 23:22:13.477900 login[2028]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Nov 23 23:22:13.479537 login[2029]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:22:13.489106 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 23 23:22:13.489292 systemd-logind[1871]: New session 2 of user core. Nov 23 23:22:13.490578 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 23 23:22:13.513108 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 23 23:22:13.515751 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Nov 23 23:22:13.530860 (systemd)[2065]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 23 23:22:13.532592 systemd-logind[1871]: New session c1 of user core. Nov 23 23:22:13.660992 systemd[2065]: Queued start job for default target default.target. Nov 23 23:22:13.664868 systemd[2065]: Created slice app.slice - User Application Slice. Nov 23 23:22:13.664889 systemd[2065]: Reached target paths.target - Paths. Nov 23 23:22:13.664914 systemd[2065]: Reached target timers.target - Timers. Nov 23 23:22:13.665759 systemd[2065]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 23 23:22:13.672369 systemd[2065]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 23 23:22:13.672412 systemd[2065]: Reached target sockets.target - Sockets. Nov 23 23:22:13.672452 systemd[2065]: Reached target basic.target - Basic System. Nov 23 23:22:13.672473 systemd[2065]: Reached target default.target - Main User Target. Nov 23 23:22:13.672491 systemd[2065]: Startup finished in 135ms. Nov 23 23:22:13.672570 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 23 23:22:13.678652 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 23 23:22:14.293726 waagent[2026]: 2025-11-23T23:22:14.293657Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Nov 23 23:22:14.297975 waagent[2026]: 2025-11-23T23:22:14.297936Z INFO Daemon Daemon OS: flatcar 4459.2.1 Nov 23 23:22:14.301293 waagent[2026]: 2025-11-23T23:22:14.301260Z INFO Daemon Daemon Python: 3.11.13 Nov 23 23:22:14.306321 waagent[2026]: 2025-11-23T23:22:14.306281Z INFO Daemon Daemon Run daemon Nov 23 23:22:14.309414 waagent[2026]: 2025-11-23T23:22:14.309266Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.1' Nov 23 23:22:14.315728 waagent[2026]: 2025-11-23T23:22:14.315701Z INFO Daemon Daemon Using waagent for provisioning Nov 23 23:22:14.319591 waagent[2026]: 2025-11-23T23:22:14.319561Z INFO Daemon Daemon Activate resource disk Nov 23 23:22:14.323144 waagent[2026]: 2025-11-23T23:22:14.323118Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 23 23:22:14.331175 waagent[2026]: 2025-11-23T23:22:14.331143Z INFO Daemon Daemon Found device: None Nov 23 23:22:14.334448 waagent[2026]: 2025-11-23T23:22:14.334419Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Nov 23 23:22:14.340704 waagent[2026]: 2025-11-23T23:22:14.340679Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 23 23:22:14.349108 waagent[2026]: 2025-11-23T23:22:14.349073Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 23 23:22:14.353389 waagent[2026]: 2025-11-23T23:22:14.353361Z INFO Daemon Daemon Running default provisioning handler Nov 23 23:22:14.361594 waagent[2026]: 2025-11-23T23:22:14.361559Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Nov 23 23:22:14.371301 waagent[2026]: 2025-11-23T23:22:14.371266Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 23 23:22:14.378144 waagent[2026]: 2025-11-23T23:22:14.378116Z INFO Daemon Daemon cloud-init is enabled: False Nov 23 23:22:14.381949 waagent[2026]: 2025-11-23T23:22:14.381923Z INFO Daemon Daemon Copying ovf-env.xml Nov 23 23:22:14.474903 waagent[2026]: 2025-11-23T23:22:14.474822Z INFO Daemon Daemon Successfully mounted dvd Nov 23 23:22:14.478211 login[2028]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:22:14.483638 systemd-logind[1871]: New session 1 of user core. Nov 23 23:22:14.486389 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 23 23:22:14.504806 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Nov 23 23:22:14.507326 waagent[2026]: 2025-11-23T23:22:14.507291Z INFO Daemon Daemon Detect protocol endpoint Nov 23 23:22:14.511527 waagent[2026]: 2025-11-23T23:22:14.511388Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 23 23:22:14.515929 waagent[2026]: 2025-11-23T23:22:14.515889Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Nov 23 23:22:14.520854 waagent[2026]: 2025-11-23T23:22:14.520818Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 23 23:22:14.524656 waagent[2026]: 2025-11-23T23:22:14.524621Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 23 23:22:14.528440 waagent[2026]: 2025-11-23T23:22:14.528411Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 23 23:22:14.571768 waagent[2026]: 2025-11-23T23:22:14.571673Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 23 23:22:14.576757 waagent[2026]: 2025-11-23T23:22:14.576727Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 23 23:22:14.580618 waagent[2026]: 2025-11-23T23:22:14.580591Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 23 23:22:14.673320 waagent[2026]: 2025-11-23T23:22:14.673248Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 23 23:22:14.678173 waagent[2026]: 2025-11-23T23:22:14.678141Z INFO Daemon Daemon Forcing an update of the goal state. Nov 23 23:22:14.686090 waagent[2026]: 2025-11-23T23:22:14.686052Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 23 23:22:14.703470 waagent[2026]: 2025-11-23T23:22:14.703440Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Nov 23 23:22:14.707759 waagent[2026]: 2025-11-23T23:22:14.707725Z INFO Daemon Nov 23 23:22:14.709820 waagent[2026]: 2025-11-23T23:22:14.709792Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: cdc3660b-5026-4634-9a8e-1e98f64a46fa eTag: 8946080211082942073 source: Fabric] Nov 23 23:22:14.718181 waagent[2026]: 2025-11-23T23:22:14.718150Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Nov 23 23:22:14.722979 waagent[2026]: 2025-11-23T23:22:14.722946Z INFO Daemon Nov 23 23:22:14.725114 waagent[2026]: 2025-11-23T23:22:14.725085Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Nov 23 23:22:14.733061 waagent[2026]: 2025-11-23T23:22:14.733032Z INFO Daemon Daemon Downloading artifacts profile blob Nov 23 23:22:14.852295 waagent[2026]: 2025-11-23T23:22:14.852159Z INFO Daemon Downloaded certificate {'thumbprint': '723CFB4649BC1C422D9047F9803574BBD98A2935', 'hasPrivateKey': True} Nov 23 23:22:14.859710 waagent[2026]: 2025-11-23T23:22:14.859674Z INFO Daemon Fetch goal state completed Nov 23 23:22:14.894231 waagent[2026]: 2025-11-23T23:22:14.894198Z INFO Daemon Daemon Starting provisioning Nov 23 23:22:14.897892 waagent[2026]: 2025-11-23T23:22:14.897859Z INFO Daemon Daemon Handle ovf-env.xml. Nov 23 23:22:14.901306 waagent[2026]: 2025-11-23T23:22:14.901280Z INFO Daemon Daemon Set hostname [ci-4459.2.1-a-856cba2a05] Nov 23 23:22:14.907563 waagent[2026]: 2025-11-23T23:22:14.907526Z INFO Daemon Daemon Publish hostname [ci-4459.2.1-a-856cba2a05] Nov 23 23:22:14.912102 waagent[2026]: 2025-11-23T23:22:14.912068Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 23 23:22:14.916525 waagent[2026]: 2025-11-23T23:22:14.916496Z INFO Daemon Daemon Primary interface is [eth0] Nov 23 23:22:14.925579 systemd-networkd[1465]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:22:14.925584 systemd-networkd[1465]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 23 23:22:14.925607 systemd-networkd[1465]: eth0: DHCP lease lost Nov 23 23:22:14.927323 waagent[2026]: 2025-11-23T23:22:14.926380Z INFO Daemon Daemon Create user account if not exists Nov 23 23:22:14.930479 waagent[2026]: 2025-11-23T23:22:14.930445Z INFO Daemon Daemon User core already exists, skip useradd Nov 23 23:22:14.937493 waagent[2026]: 2025-11-23T23:22:14.934508Z INFO Daemon Daemon Configure sudoer Nov 23 23:22:14.940972 waagent[2026]: 2025-11-23T23:22:14.940932Z INFO Daemon Daemon Configure sshd Nov 23 23:22:14.946802 waagent[2026]: 2025-11-23T23:22:14.946765Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Nov 23 23:22:14.955791 waagent[2026]: 2025-11-23T23:22:14.955763Z INFO Daemon Daemon Deploy ssh public key. Nov 23 23:22:14.959275 systemd-networkd[1465]: eth0: DHCPv4 address 10.200.20.43/24, gateway 10.200.20.1 acquired from 168.63.129.16 Nov 23 23:22:16.047193 waagent[2026]: 2025-11-23T23:22:16.047146Z INFO Daemon Daemon Provisioning complete Nov 23 23:22:16.060093 waagent[2026]: 2025-11-23T23:22:16.060058Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 23 23:22:16.064520 waagent[2026]: 2025-11-23T23:22:16.064488Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Nov 23 23:22:16.072583 waagent[2026]: 2025-11-23T23:22:16.072555Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Nov 23 23:22:16.169160 waagent[2115]: 2025-11-23T23:22:16.169110Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Nov 23 23:22:16.170279 waagent[2115]: 2025-11-23T23:22:16.169503Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.1 Nov 23 23:22:16.170279 waagent[2115]: 2025-11-23T23:22:16.169557Z INFO ExtHandler ExtHandler Python: 3.11.13 Nov 23 23:22:16.170279 waagent[2115]: 2025-11-23T23:22:16.169592Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Nov 23 23:22:16.189747 waagent[2115]: 2025-11-23T23:22:16.189713Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Nov 23 23:22:16.189932 waagent[2115]: 2025-11-23T23:22:16.189907Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 23 23:22:16.190034 waagent[2115]: 2025-11-23T23:22:16.190013Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 23 23:22:16.195217 waagent[2115]: 2025-11-23T23:22:16.195170Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 23 23:22:16.199766 waagent[2115]: 2025-11-23T23:22:16.199737Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Nov 23 23:22:16.200171 waagent[2115]: 2025-11-23T23:22:16.200138Z INFO ExtHandler Nov 23 23:22:16.200319 waagent[2115]: 2025-11-23T23:22:16.200293Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d125870e-d88a-45de-8580-52cec0df4bd1 eTag: 8946080211082942073 source: Fabric] Nov 23 23:22:16.200635 waagent[2115]: 2025-11-23T23:22:16.200605Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Nov 23 23:22:16.201122 waagent[2115]: 2025-11-23T23:22:16.201090Z INFO ExtHandler Nov 23 23:22:16.201222 waagent[2115]: 2025-11-23T23:22:16.201203Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 23 23:22:16.205275 waagent[2115]: 2025-11-23T23:22:16.204309Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 23 23:22:16.258574 waagent[2115]: 2025-11-23T23:22:16.258533Z INFO ExtHandler Downloaded certificate {'thumbprint': '723CFB4649BC1C422D9047F9803574BBD98A2935', 'hasPrivateKey': True} Nov 23 23:22:16.258985 waagent[2115]: 2025-11-23T23:22:16.258954Z INFO ExtHandler Fetch goal state completed Nov 23 23:22:16.270197 waagent[2115]: 2025-11-23T23:22:16.270165Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Nov 23 23:22:16.273358 waagent[2115]: 2025-11-23T23:22:16.273324Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2115 Nov 23 23:22:16.273527 waagent[2115]: 2025-11-23T23:22:16.273500Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Nov 23 23:22:16.273844 waagent[2115]: 2025-11-23T23:22:16.273816Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Nov 23 23:22:16.274981 waagent[2115]: 2025-11-23T23:22:16.274946Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.1', '', 'Flatcar Container Linux by Kinvolk'] Nov 23 23:22:16.275399 waagent[2115]: 2025-11-23T23:22:16.275364Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Nov 23 23:22:16.275597 waagent[2115]: 2025-11-23T23:22:16.275570Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Nov 23 23:22:16.276091 waagent[2115]: 2025-11-23T23:22:16.276061Z INFO ExtHandler ExtHandler 
Starting setup for Persistent firewall rules Nov 23 23:22:16.312919 waagent[2115]: 2025-11-23T23:22:16.312863Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Nov 23 23:22:16.313117 waagent[2115]: 2025-11-23T23:22:16.313087Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 23 23:22:16.317365 waagent[2115]: 2025-11-23T23:22:16.317335Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Nov 23 23:22:16.321709 systemd[1]: Reload requested from client PID 2130 ('systemctl') (unit waagent.service)... Nov 23 23:22:16.321887 systemd[1]: Reloading... Nov 23 23:22:16.393541 zram_generator::config[2169]: No configuration found. Nov 23 23:22:16.537535 systemd[1]: Reloading finished in 215 ms. Nov 23 23:22:16.550334 waagent[2115]: 2025-11-23T23:22:16.550119Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Nov 23 23:22:16.553208 waagent[2115]: 2025-11-23T23:22:16.552380Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Nov 23 23:22:17.471279 waagent[2115]: 2025-11-23T23:22:17.471111Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Nov 23 23:22:17.471569 waagent[2115]: 2025-11-23T23:22:17.471468Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Nov 23 23:22:17.472186 waagent[2115]: 2025-11-23T23:22:17.472094Z INFO ExtHandler ExtHandler Starting env monitor service. 
Nov 23 23:22:17.472491 waagent[2115]: 2025-11-23T23:22:17.472459Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 23 23:22:17.472575 waagent[2115]: 2025-11-23T23:22:17.472528Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 23 23:22:17.472752 waagent[2115]: 2025-11-23T23:22:17.472725Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Nov 23 23:22:17.472951 waagent[2115]: 2025-11-23T23:22:17.472914Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 23 23:22:17.473000 waagent[2115]: 2025-11-23T23:22:17.472971Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 23 23:22:17.473080 waagent[2115]: 2025-11-23T23:22:17.473058Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 23 23:22:17.473080 waagent[2115]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 23 23:22:17.473080 waagent[2115]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Nov 23 23:22:17.473080 waagent[2115]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 23 23:22:17.473080 waagent[2115]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 23 23:22:17.473080 waagent[2115]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 23 23:22:17.473080 waagent[2115]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 23 23:22:17.473500 waagent[2115]: 2025-11-23T23:22:17.473467Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 23 23:22:17.473539 waagent[2115]: 2025-11-23T23:22:17.473506Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 23 23:22:17.473758 waagent[2115]: 2025-11-23T23:22:17.473726Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 23 23:22:17.473866 waagent[2115]: 2025-11-23T23:22:17.473831Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Nov 23 23:22:17.474086 waagent[2115]: 2025-11-23T23:22:17.474054Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 23 23:22:17.474412 waagent[2115]: 2025-11-23T23:22:17.474386Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 23 23:22:17.474596 waagent[2115]: 2025-11-23T23:22:17.474567Z INFO EnvHandler ExtHandler Configure routes Nov 23 23:22:17.477147 waagent[2115]: 2025-11-23T23:22:17.477118Z INFO EnvHandler ExtHandler Gateway:None Nov 23 23:22:17.478445 waagent[2115]: 2025-11-23T23:22:17.478418Z INFO EnvHandler ExtHandler Routes:None Nov 23 23:22:17.479467 waagent[2115]: 2025-11-23T23:22:17.479432Z INFO ExtHandler ExtHandler Nov 23 23:22:17.479776 waagent[2115]: 2025-11-23T23:22:17.479750Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 1889a5f9-7f72-4766-99be-786ca486f169 correlation 654af458-ef67-47c7-8d18-c0212309901e created: 2025-11-23T23:21:20.989784Z] Nov 23 23:22:17.480404 waagent[2115]: 2025-11-23T23:22:17.480364Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Nov 23 23:22:17.481059 waagent[2115]: 2025-11-23T23:22:17.481026Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Nov 23 23:22:17.570227 waagent[2115]: 2025-11-23T23:22:17.569830Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Nov 23 23:22:17.570227 waagent[2115]: Try `iptables -h' or 'iptables --help' for more information.) 
Nov 23 23:22:17.570227 waagent[2115]: 2025-11-23T23:22:17.570161Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 7866911A-D2A8-403C-A3DF-D6CA3B9DFAC8;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Nov 23 23:22:17.743194 waagent[2115]: 2025-11-23T23:22:17.743096Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Nov 23 23:22:17.743194 waagent[2115]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 23 23:22:17.743194 waagent[2115]: pkts bytes target prot opt in out source destination Nov 23 23:22:17.743194 waagent[2115]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 23 23:22:17.743194 waagent[2115]: pkts bytes target prot opt in out source destination Nov 23 23:22:17.743194 waagent[2115]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 23 23:22:17.743194 waagent[2115]: pkts bytes target prot opt in out source destination Nov 23 23:22:17.743194 waagent[2115]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 23 23:22:17.743194 waagent[2115]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 23 23:22:17.743194 waagent[2115]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 23 23:22:17.746210 waagent[2115]: 2025-11-23T23:22:17.746175Z INFO EnvHandler ExtHandler Current Firewall rules: Nov 23 23:22:17.746210 waagent[2115]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 23 23:22:17.746210 waagent[2115]: pkts bytes target prot opt in out source destination Nov 23 23:22:17.746210 waagent[2115]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 23 23:22:17.746210 waagent[2115]: pkts bytes target prot opt in out source destination Nov 23 23:22:17.746210 waagent[2115]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 23 23:22:17.746210 waagent[2115]: pkts bytes target prot opt in out source destination Nov 23 23:22:17.746210 waagent[2115]: 0 0 ACCEPT tcp -- * * 
0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 23 23:22:17.746210 waagent[2115]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 23 23:22:17.746210 waagent[2115]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 23 23:22:17.746601 waagent[2115]: 2025-11-23T23:22:17.746578Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Nov 23 23:22:17.771158 waagent[2115]: 2025-11-23T23:22:17.771110Z INFO MonitorHandler ExtHandler Network interfaces: Nov 23 23:22:17.771158 waagent[2115]: Executing ['ip', '-a', '-o', 'link']: Nov 23 23:22:17.771158 waagent[2115]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 23 23:22:17.771158 waagent[2115]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b7:0d:9b brd ff:ff:ff:ff:ff:ff Nov 23 23:22:17.771158 waagent[2115]: 3: enP17399s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b7:0d:9b brd ff:ff:ff:ff:ff:ff\ altname enP17399p0s2 Nov 23 23:22:17.771158 waagent[2115]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 23 23:22:17.771158 waagent[2115]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 23 23:22:17.771158 waagent[2115]: 2: eth0 inet 10.200.20.43/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 23 23:22:17.771158 waagent[2115]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 23 23:22:17.771158 waagent[2115]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Nov 23 23:22:17.771158 waagent[2115]: 2: eth0 inet6 fe80::222:48ff:feb7:d9b/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 23 23:22:23.552961 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Nov 23 23:22:23.554249 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:22:23.647685 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:22:23.654612 (kubelet)[2265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:22:23.776156 kubelet[2265]: E1123 23:22:23.776112 2265 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:22:23.778953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:22:23.779059 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:22:23.780339 systemd[1]: kubelet.service: Consumed 106ms CPU time, 107.2M memory peak. Nov 23 23:22:34.029640 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 23 23:22:34.031209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:22:34.128031 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 23 23:22:34.130722 (kubelet)[2280]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:22:34.276784 kubelet[2280]: E1123 23:22:34.276743 2280 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:22:34.278870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:22:34.278971 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:22:34.279465 systemd[1]: kubelet.service: Consumed 102ms CPU time, 105.6M memory peak. Nov 23 23:22:35.657406 chronyd[1842]: Selected source PHC0 Nov 23 23:22:37.296597 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 23 23:22:37.297745 systemd[1]: Started sshd@0-10.200.20.43:22-10.200.16.10:38300.service - OpenSSH per-connection server daemon (10.200.16.10:38300). Nov 23 23:22:37.849135 sshd[2287]: Accepted publickey for core from 10.200.16.10 port 38300 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:22:37.850058 sshd-session[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:22:37.853989 systemd-logind[1871]: New session 3 of user core. Nov 23 23:22:37.862358 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 23 23:22:38.242566 systemd[1]: Started sshd@1-10.200.20.43:22-10.200.16.10:38304.service - OpenSSH per-connection server daemon (10.200.16.10:38304). 
Nov 23 23:22:38.661301 sshd[2293]: Accepted publickey for core from 10.200.16.10 port 38304 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:22:38.662176 sshd-session[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:22:38.665487 systemd-logind[1871]: New session 4 of user core. Nov 23 23:22:38.671377 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 23 23:22:38.972265 sshd[2296]: Connection closed by 10.200.16.10 port 38304 Nov 23 23:22:38.972807 sshd-session[2293]: pam_unix(sshd:session): session closed for user core Nov 23 23:22:38.976186 systemd[1]: sshd@1-10.200.20.43:22-10.200.16.10:38304.service: Deactivated successfully. Nov 23 23:22:38.977682 systemd[1]: session-4.scope: Deactivated successfully. Nov 23 23:22:38.978888 systemd-logind[1871]: Session 4 logged out. Waiting for processes to exit. Nov 23 23:22:38.979858 systemd-logind[1871]: Removed session 4. Nov 23 23:22:39.052986 systemd[1]: Started sshd@2-10.200.20.43:22-10.200.16.10:38320.service - OpenSSH per-connection server daemon (10.200.16.10:38320). Nov 23 23:22:39.506388 sshd[2302]: Accepted publickey for core from 10.200.16.10 port 38320 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:22:39.507378 sshd-session[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:22:39.511050 systemd-logind[1871]: New session 5 of user core. Nov 23 23:22:39.517342 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 23 23:22:39.831504 sshd[2305]: Connection closed by 10.200.16.10 port 38320 Nov 23 23:22:39.832016 sshd-session[2302]: pam_unix(sshd:session): session closed for user core Nov 23 23:22:39.834762 systemd[1]: sshd@2-10.200.20.43:22-10.200.16.10:38320.service: Deactivated successfully. Nov 23 23:22:39.835989 systemd[1]: session-5.scope: Deactivated successfully. Nov 23 23:22:39.837730 systemd-logind[1871]: Session 5 logged out. 
Waiting for processes to exit. Nov 23 23:22:39.838633 systemd-logind[1871]: Removed session 5. Nov 23 23:22:39.912339 systemd[1]: Started sshd@3-10.200.20.43:22-10.200.16.10:38334.service - OpenSSH per-connection server daemon (10.200.16.10:38334). Nov 23 23:22:40.373539 sshd[2311]: Accepted publickey for core from 10.200.16.10 port 38334 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:22:40.374426 sshd-session[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:22:40.377781 systemd-logind[1871]: New session 6 of user core. Nov 23 23:22:40.388356 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 23 23:22:40.694702 sshd[2314]: Connection closed by 10.200.16.10 port 38334 Nov 23 23:22:40.695186 sshd-session[2311]: pam_unix(sshd:session): session closed for user core Nov 23 23:22:40.697871 systemd[1]: sshd@3-10.200.20.43:22-10.200.16.10:38334.service: Deactivated successfully. Nov 23 23:22:40.699161 systemd[1]: session-6.scope: Deactivated successfully. Nov 23 23:22:40.700458 systemd-logind[1871]: Session 6 logged out. Waiting for processes to exit. Nov 23 23:22:40.703328 systemd-logind[1871]: Removed session 6. Nov 23 23:22:40.784418 systemd[1]: Started sshd@4-10.200.20.43:22-10.200.16.10:48950.service - OpenSSH per-connection server daemon (10.200.16.10:48950). Nov 23 23:22:41.241945 sshd[2320]: Accepted publickey for core from 10.200.16.10 port 48950 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:22:41.242913 sshd-session[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:22:41.246313 systemd-logind[1871]: New session 7 of user core. Nov 23 23:22:41.252533 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 23 23:22:41.620978 sudo[2324]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 23 23:22:41.621204 sudo[2324]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 23 23:22:41.652507 sudo[2324]: pam_unix(sudo:session): session closed for user root
Nov 23 23:22:41.730071 sshd[2323]: Connection closed by 10.200.16.10 port 48950
Nov 23 23:22:41.730937 sshd-session[2320]: pam_unix(sshd:session): session closed for user core
Nov 23 23:22:41.734349 systemd[1]: sshd@4-10.200.20.43:22-10.200.16.10:48950.service: Deactivated successfully.
Nov 23 23:22:41.735930 systemd[1]: session-7.scope: Deactivated successfully.
Nov 23 23:22:41.736754 systemd-logind[1871]: Session 7 logged out. Waiting for processes to exit.
Nov 23 23:22:41.738087 systemd-logind[1871]: Removed session 7.
Nov 23 23:22:41.809962 systemd[1]: Started sshd@5-10.200.20.43:22-10.200.16.10:48960.service - OpenSSH per-connection server daemon (10.200.16.10:48960).
Nov 23 23:22:42.265509 sshd[2330]: Accepted publickey for core from 10.200.16.10 port 48960 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:22:42.266496 sshd-session[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:22:42.270054 systemd-logind[1871]: New session 8 of user core.
Nov 23 23:22:42.279362 systemd[1]: Started session-8.scope - Session 8 of User core.
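The entries above show the same short-lived SSH session pattern repeating (sessions 4 through 7): publickey accept, PAM open, connection close, scope deactivation, session removal, each lasting well under a second. As an illustrative sketch only (the parser and its helper name are not part of any tool in this log), the systemd-logind "New session"/"Removed session" timestamps can be paired to measure session lifetime:

```python
import re
from datetime import datetime

# Matches transcript entries such as:
#   Nov 23 23:22:38.665487 systemd-logind[1871]: New session 4 of user core.
#   Nov 23 23:22:38.979858 systemd-logind[1871]: Removed session 4.
LINE = re.compile(
    r"^(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d{6}) "
    r"systemd-logind\[\d+\]: (?P<event>New|Removed) session (?P<sid>\d+)"
)

def session_durations(lines, year=2025):
    """Pair 'New session'/'Removed session' events; return {sid: seconds}."""
    opened, durations = {}, {}
    for line in lines:
        m = LINE.match(line)
        if not m:
            continue
        ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S.%f")
        if m["event"] == "New":
            opened[m["sid"]] = ts
        elif m["sid"] in opened:
            durations[m["sid"]] = (ts - opened.pop(m["sid"])).total_seconds()
    return durations
```

Applied to the session 4 entries above, this yields a lifetime of roughly 0.31 s, consistent with automated provisioning commands rather than interactive logins.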
Nov 23 23:22:42.521619 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 23 23:22:42.521833 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 23 23:22:42.527294 sudo[2335]: pam_unix(sudo:session): session closed for user root
Nov 23 23:22:42.530572 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 23 23:22:42.530753 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 23 23:22:42.537256 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 23 23:22:42.562679 augenrules[2357]: No rules
Nov 23 23:22:42.563674 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 23 23:22:42.563954 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 23 23:22:42.565392 sudo[2334]: pam_unix(sudo:session): session closed for user root
Nov 23 23:22:42.636262 sshd[2333]: Connection closed by 10.200.16.10 port 48960
Nov 23 23:22:42.636597 sshd-session[2330]: pam_unix(sshd:session): session closed for user core
Nov 23 23:22:42.639383 systemd[1]: sshd@5-10.200.20.43:22-10.200.16.10:48960.service: Deactivated successfully.
Nov 23 23:22:42.640617 systemd[1]: session-8.scope: Deactivated successfully.
Nov 23 23:22:42.641195 systemd-logind[1871]: Session 8 logged out. Waiting for processes to exit.
Nov 23 23:22:42.642388 systemd-logind[1871]: Removed session 8.
Nov 23 23:22:42.711416 systemd[1]: Started sshd@6-10.200.20.43:22-10.200.16.10:48964.service - OpenSSH per-connection server daemon (10.200.16.10:48964).
Nov 23 23:22:43.124302 sshd[2366]: Accepted publickey for core from 10.200.16.10 port 48964 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:22:43.125277 sshd-session[2366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:22:43.128445 systemd-logind[1871]: New session 9 of user core.
Nov 23 23:22:43.135355 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 23 23:22:43.360963 sudo[2370]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 23 23:22:43.361162 sudo[2370]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 23 23:22:44.529415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 23 23:22:44.530686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 23:22:44.684878 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:22:44.689599 (kubelet)[2396]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 23 23:22:44.723873 kubelet[2396]: E1123 23:22:44.723824 2396 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 23 23:22:44.725771 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 23:22:44.725875 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 23 23:22:44.726124 systemd[1]: kubelet.service: Consumed 105ms CPU time, 107.3M memory peak.
Nov 23 23:22:45.271585 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 23 23:22:45.281467 (dockerd)[2403]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 23 23:22:45.984279 dockerd[2403]: time="2025-11-23T23:22:45.984054654Z" level=info msg="Starting up"
Nov 23 23:22:45.986752 dockerd[2403]: time="2025-11-23T23:22:45.986728740Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 23 23:22:45.994430 dockerd[2403]: time="2025-11-23T23:22:45.994396907Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 23 23:22:46.018386 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3347909107-merged.mount: Deactivated successfully.
Nov 23 23:22:46.113104 dockerd[2403]: time="2025-11-23T23:22:46.113073061Z" level=info msg="Loading containers: start."
Nov 23 23:22:46.168271 kernel: Initializing XFRM netlink socket
Nov 23 23:22:46.486951 systemd-networkd[1465]: docker0: Link UP
Nov 23 23:22:46.498656 dockerd[2403]: time="2025-11-23T23:22:46.498590974Z" level=info msg="Loading containers: done."
Nov 23 23:22:46.517141 dockerd[2403]: time="2025-11-23T23:22:46.516873402Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 23 23:22:46.517141 dockerd[2403]: time="2025-11-23T23:22:46.516936812Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 23 23:22:46.517141 dockerd[2403]: time="2025-11-23T23:22:46.517011686Z" level=info msg="Initializing buildkit"
Nov 23 23:22:46.553894 dockerd[2403]: time="2025-11-23T23:22:46.553835799Z" level=info msg="Completed buildkit initialization"
Nov 23 23:22:46.559483 dockerd[2403]: time="2025-11-23T23:22:46.559451564Z" level=info msg="Daemon has completed initialization"
Nov 23 23:22:46.559651 dockerd[2403]: time="2025-11-23T23:22:46.559530422Z" level=info msg="API listen on /run/docker.sock"
Nov 23 23:22:46.559833 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 23 23:22:47.015567 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3450989467-merged.mount: Deactivated successfully.
Nov 23 23:22:47.715275 containerd[1892]: time="2025-11-23T23:22:47.715027544Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\""
Nov 23 23:22:48.543975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177371905.mount: Deactivated successfully.
Nov 23 23:22:49.451816 containerd[1892]: time="2025-11-23T23:22:49.451218682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:49.453306 containerd[1892]: time="2025-11-23T23:22:49.453277948Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.6: active requests=0, bytes read=27385704"
Nov 23 23:22:49.456075 containerd[1892]: time="2025-11-23T23:22:49.456048877Z" level=info msg="ImageCreate event name:\"sha256:1c07507521b1e5dd5a677080f11565aeed667ca44a4119fe6fc7e9452e84707f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:49.459512 containerd[1892]: time="2025-11-23T23:22:49.459483876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:49.460170 containerd[1892]: time="2025-11-23T23:22:49.460150185Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.6\" with image id \"sha256:1c07507521b1e5dd5a677080f11565aeed667ca44a4119fe6fc7e9452e84707f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\", size \"27382303\" in 1.745086032s"
Nov 23 23:22:49.460261 containerd[1892]: time="2025-11-23T23:22:49.460248869Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\" returns image reference \"sha256:1c07507521b1e5dd5a677080f11565aeed667ca44a4119fe6fc7e9452e84707f\""
Nov 23 23:22:49.461767 containerd[1892]: time="2025-11-23T23:22:49.461741301Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\""
Nov 23 23:22:50.561084 containerd[1892]: time="2025-11-23T23:22:50.561033695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:50.565207 containerd[1892]: time="2025-11-23T23:22:50.565166556Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.6: active requests=0, bytes read=23551824"
Nov 23 23:22:50.566011 containerd[1892]: time="2025-11-23T23:22:50.565970582Z" level=info msg="ImageCreate event name:\"sha256:0e8db523b16722887ebe961048a14cebe9778389b0045fc9e461ca509bed1758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:50.569874 containerd[1892]: time="2025-11-23T23:22:50.569835522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:50.570650 containerd[1892]: time="2025-11-23T23:22:50.570369275Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.6\" with image id \"sha256:0e8db523b16722887ebe961048a14cebe9778389b0045fc9e461ca509bed1758\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\", size \"25136308\" in 1.108603334s"
Nov 23 23:22:50.570650 containerd[1892]: time="2025-11-23T23:22:50.570393332Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\" returns image reference \"sha256:0e8db523b16722887ebe961048a14cebe9778389b0045fc9e461ca509bed1758\""
Nov 23 23:22:50.570975 containerd[1892]: time="2025-11-23T23:22:50.570926997Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\""
Nov 23 23:22:51.586447 containerd[1892]: time="2025-11-23T23:22:51.586396015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:51.588590 containerd[1892]: time="2025-11-23T23:22:51.588544060Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.6: active requests=0, bytes read=18296696"
Nov 23 23:22:51.590966 containerd[1892]: time="2025-11-23T23:22:51.590934193Z" level=info msg="ImageCreate event name:\"sha256:4845d8bf054bc037c94329f9ce2fa5bb3a972aefc81d9412e9bd8c5ecc311e80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:51.595857 containerd[1892]: time="2025-11-23T23:22:51.595823990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:51.597094 containerd[1892]: time="2025-11-23T23:22:51.596948522Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.6\" with image id \"sha256:4845d8bf054bc037c94329f9ce2fa5bb3a972aefc81d9412e9bd8c5ecc311e80\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\", size \"19881198\" in 1.025997372s"
Nov 23 23:22:51.597094 containerd[1892]: time="2025-11-23T23:22:51.596973563Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\" returns image reference \"sha256:4845d8bf054bc037c94329f9ce2fa5bb3a972aefc81d9412e9bd8c5ecc311e80\""
Nov 23 23:22:51.597592 containerd[1892]: time="2025-11-23T23:22:51.597471131Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\""
Nov 23 23:22:52.498771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3772582332.mount: Deactivated successfully.
Nov 23 23:22:52.763883 containerd[1892]: time="2025-11-23T23:22:52.763759585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:52.766343 containerd[1892]: time="2025-11-23T23:22:52.766320606Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.6: active requests=0, bytes read=28257769"
Nov 23 23:22:52.768969 containerd[1892]: time="2025-11-23T23:22:52.768942981Z" level=info msg="ImageCreate event name:\"sha256:3edf3fc935ecf2058786113d0a0f95daa919e82f6505e8e3df7b5226ebfedb6b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:52.772154 containerd[1892]: time="2025-11-23T23:22:52.772130934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:52.772600 containerd[1892]: time="2025-11-23T23:22:52.772377367Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.6\" with image id \"sha256:3edf3fc935ecf2058786113d0a0f95daa919e82f6505e8e3df7b5226ebfedb6b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\", size \"28256788\" in 1.174655019s"
Nov 23 23:22:52.772600 containerd[1892]: time="2025-11-23T23:22:52.772594790Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\" returns image reference \"sha256:3edf3fc935ecf2058786113d0a0f95daa919e82f6505e8e3df7b5226ebfedb6b\""
Nov 23 23:22:52.773092 containerd[1892]: time="2025-11-23T23:22:52.773010852Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Nov 23 23:22:53.963796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1558396333.mount: Deactivated successfully.
Nov 23 23:22:54.705277 containerd[1892]: time="2025-11-23T23:22:54.704732753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:54.708057 containerd[1892]: time="2025-11-23T23:22:54.708032791Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Nov 23 23:22:54.710182 containerd[1892]: time="2025-11-23T23:22:54.710161093Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:54.714414 containerd[1892]: time="2025-11-23T23:22:54.714379505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:54.714973 containerd[1892]: time="2025-11-23T23:22:54.714948260Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.941900335s"
Nov 23 23:22:54.715057 containerd[1892]: time="2025-11-23T23:22:54.715045175Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Nov 23 23:22:54.715557 containerd[1892]: time="2025-11-23T23:22:54.715534976Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 23 23:22:54.767280 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Nov 23 23:22:54.769010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 23:22:54.926964 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:22:54.932487 (kubelet)[2745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 23 23:22:54.956584 kubelet[2745]: E1123 23:22:54.956499 2745 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 23 23:22:54.959061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 23:22:54.959276 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 23 23:22:54.959675 systemd[1]: kubelet.service: Consumed 100ms CPU time, 104.9M memory peak.
Nov 23 23:22:55.565270 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Nov 23 23:22:55.567046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2683635491.mount: Deactivated successfully.
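Both kubelet start attempts in this transcript (restart counters 3 and 4) fail identically: /var/lib/kubelet/config.yaml does not exist yet, which is expected on a node that has not run its bootstrap (the file is normally written by kubeadm), so systemd keeps scheduling restarts. As an illustrative, hypothetical helper (not part of kubelet or systemd), the missing path can be pulled out of such a run.go error entry:

```python
import re

# The kubelet error message embeds the failing path as
#   "... failed to load kubelet config file, path: <path>, error: ..."
ERR = re.compile(r"kubelet config file, path: ([^,]+),")

def missing_config_path(log_line: str):
    """Extract the config-file path from a kubelet 'command failed' entry."""
    m = ERR.search(log_line)
    return m.group(1) if m else None
```

Fed the E1123 line above, this returns /var/lib/kubelet/config.yaml, confirming the failures are a bootstrap-ordering issue rather than a crash in the kubelet itself.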
Nov 23 23:22:55.583410 containerd[1892]: time="2025-11-23T23:22:55.583371882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 23 23:22:55.585777 containerd[1892]: time="2025-11-23T23:22:55.585751025Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Nov 23 23:22:55.587880 containerd[1892]: time="2025-11-23T23:22:55.587854246Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 23 23:22:55.593924 containerd[1892]: time="2025-11-23T23:22:55.593519306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 23 23:22:55.593987 containerd[1892]: time="2025-11-23T23:22:55.593925968Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 878.366143ms"
Nov 23 23:22:55.593987 containerd[1892]: time="2025-11-23T23:22:55.593948032Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Nov 23 23:22:55.594567 containerd[1892]: time="2025-11-23T23:22:55.594542148Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Nov 23 23:22:56.220330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount386259990.mount: Deactivated successfully.
Nov 23 23:22:57.318903 update_engine[1874]: I20251123 23:22:57.318419 1874 update_attempter.cc:509] Updating boot flags...
Nov 23 23:22:59.012320 containerd[1892]: time="2025-11-23T23:22:59.012271988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:59.015109 containerd[1892]: time="2025-11-23T23:22:59.015081809Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013651"
Nov 23 23:22:59.017451 containerd[1892]: time="2025-11-23T23:22:59.017427022Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:59.021514 containerd[1892]: time="2025-11-23T23:22:59.021473565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:59.022228 containerd[1892]: time="2025-11-23T23:22:59.022057240Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.427489339s"
Nov 23 23:22:59.022228 containerd[1892]: time="2025-11-23T23:22:59.022081553Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Nov 23 23:23:01.621286 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:23:01.621729 systemd[1]: kubelet.service: Consumed 100ms CPU time, 104.9M memory peak.
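Each containerd "Pulled image" entry above reports the image size in bytes and the wall time the pull took, for example 27382303 bytes in 1.745086032s for kube-apiserver and 70026017 bytes in 3.427489339s for etcd. A quick sketch of the effective pull throughput implied by those figures (the figures are taken directly from the log; the helper itself is just an illustration):

```python
def throughput_mbps(size_bytes: int, seconds: float) -> float:
    """Effective transfer rate in MB/s (decimal megabytes)."""
    return size_bytes / seconds / 1_000_000

# Sizes and durations as reported by the containerd entries above.
pulls = {
    "kube-apiserver:v1.33.6": (27382303, 1.745086032),
    "etcd:3.5.21-0": (70026017, 3.427489339),
}
for image, (size, secs) in pulls.items():
    print(f"{image}: {throughput_mbps(size, secs):.1f} MB/s")
```

Both pulls land in the 15-21 MB/s range, so the larger etcd image dominating total pull time is bandwidth, not registry latency.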
Nov 23 23:23:01.623510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 23:23:01.644863 systemd[1]: Reload requested from client PID 2954 ('systemctl') (unit session-9.scope)...
Nov 23 23:23:01.644874 systemd[1]: Reloading...
Nov 23 23:23:01.741278 zram_generator::config[3001]: No configuration found.
Nov 23 23:23:01.886968 systemd[1]: Reloading finished in 241 ms.
Nov 23 23:23:01.937006 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 23:23:01.939761 systemd[1]: kubelet.service: Deactivated successfully.
Nov 23 23:23:01.939925 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:23:01.939959 systemd[1]: kubelet.service: Consumed 77ms CPU time, 95M memory peak.
Nov 23 23:23:01.940957 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 23:23:02.142274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:23:02.146050 (kubelet)[3070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 23 23:23:02.170899 kubelet[3070]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 23 23:23:02.170899 kubelet[3070]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 23 23:23:02.170899 kubelet[3070]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 23 23:23:02.171123 kubelet[3070]: I1123 23:23:02.170945 3070 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 23 23:23:02.543603 kubelet[3070]: I1123 23:23:02.543562 3070 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 23 23:23:02.543603 kubelet[3070]: I1123 23:23:02.543590 3070 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 23 23:23:02.543890 kubelet[3070]: I1123 23:23:02.543871 3070 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 23 23:23:02.563076 kubelet[3070]: E1123 23:23:02.563050 3070 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 23 23:23:02.563790 kubelet[3070]: I1123 23:23:02.563776 3070 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 23 23:23:02.569642 kubelet[3070]: I1123 23:23:02.569627 3070 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 23 23:23:02.572530 kubelet[3070]: I1123 23:23:02.572515 3070 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 23 23:23:02.573756 kubelet[3070]: I1123 23:23:02.573728 3070 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 23 23:23:02.573937 kubelet[3070]: I1123 23:23:02.573827 3070 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.1-a-856cba2a05","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 23 23:23:02.574064 kubelet[3070]: I1123 23:23:02.574054 3070 topology_manager.go:138] "Creating topology manager with none policy"
Nov 23 23:23:02.574110 kubelet[3070]: I1123 23:23:02.574103 3070 container_manager_linux.go:303] "Creating device plugin manager"
Nov 23 23:23:02.574897 kubelet[3070]: I1123 23:23:02.574882 3070 state_mem.go:36] "Initialized new in-memory state store"
Nov 23 23:23:02.578664 kubelet[3070]: I1123 23:23:02.578648 3070 kubelet.go:480] "Attempting to sync node with API server"
Nov 23 23:23:02.578746 kubelet[3070]: I1123 23:23:02.578738 3070 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 23 23:23:02.578802 kubelet[3070]: I1123 23:23:02.578796 3070 kubelet.go:386] "Adding apiserver pod source"
Nov 23 23:23:02.579804 kubelet[3070]: I1123 23:23:02.579789 3070 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 23 23:23:02.582944 kubelet[3070]: I1123 23:23:02.582643 3070 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Nov 23 23:23:02.582992 kubelet[3070]: I1123 23:23:02.582983 3070 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 23 23:23:02.583039 kubelet[3070]: W1123 23:23:02.583024 3070 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 23 23:23:02.584768 kubelet[3070]: I1123 23:23:02.584746 3070 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 23 23:23:02.584822 kubelet[3070]: I1123 23:23:02.584778 3070 server.go:1289] "Started kubelet"
Nov 23 23:23:02.584934 kubelet[3070]: E1123 23:23:02.584890 3070 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.1-a-856cba2a05&limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 23 23:23:02.586730 kubelet[3070]: E1123 23:23:02.586267 3070 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 23 23:23:02.586730 kubelet[3070]: I1123 23:23:02.586317 3070 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 23 23:23:02.586730 kubelet[3070]: I1123 23:23:02.586599 3070 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 23 23:23:02.590863 kubelet[3070]: I1123 23:23:02.590846 3070 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 23 23:23:02.593152 kubelet[3070]: E1123 23:23:02.592174 3070 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.43:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.43:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.1-a-856cba2a05.187ac64014009a0d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.1-a-856cba2a05,UID:ci-4459.2.1-a-856cba2a05,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.1-a-856cba2a05,},FirstTimestamp:2025-11-23 23:23:02.584760845 +0000 UTC m=+0.435908015,LastTimestamp:2025-11-23 23:23:02.584760845 +0000 UTC m=+0.435908015,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.1-a-856cba2a05,}"
Nov 23 23:23:02.594258 kubelet[3070]: I1123 23:23:02.594057 3070 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 23 23:23:02.594453 kubelet[3070]: I1123 23:23:02.594426 3070 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 23 23:23:02.594826 kubelet[3070]: I1123 23:23:02.594812 3070 server.go:317] "Adding debug handlers to kubelet server"
Nov 23 23:23:02.595534 kubelet[3070]: I1123 23:23:02.595512 3070 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 23 23:23:02.596270 kubelet[3070]: E1123 23:23:02.595641 3070 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.1-a-856cba2a05\" not found"
Nov 23 23:23:02.596270 kubelet[3070]: I1123 23:23:02.595692 3070 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 23 23:23:02.596270 kubelet[3070]: I1123 23:23:02.595728 3070 reconciler.go:26] "Reconciler: start to sync state"
Nov 23 23:23:02.596270 kubelet[3070]: E1123 23:23:02.595933 3070 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 23 23:23:02.596270 kubelet[3070]: E1123 23:23:02.595975 3070 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.1-a-856cba2a05?timeout=10s\": dial tcp 10.200.20.43:6443: connect: connection refused" interval="200ms"
Nov 23 23:23:02.596766 kubelet[3070]: I1123 23:23:02.596748 3070 factory.go:223] Registration of the systemd container factory successfully
Nov 23 23:23:02.597418 kubelet[3070]: E1123 23:23:02.597087 3070 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 23 23:23:02.597418 kubelet[3070]: I1123 23:23:02.597168 3070 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 23 23:23:02.598365 kubelet[3070]: I1123 23:23:02.598350 3070 factory.go:223] Registration of the containerd container factory successfully
Nov 23 23:23:02.618610 kubelet[3070]: I1123 23:23:02.618590 3070 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 23 23:23:02.618610 kubelet[3070]: I1123 23:23:02.618605 3070 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 23 23:23:02.618694 kubelet[3070]: I1123 23:23:02.618619 3070 state_mem.go:36] "Initialized new in-memory state store"
Nov 23 23:23:02.696061 kubelet[3070]: E1123 23:23:02.696037 3070 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.1-a-856cba2a05\" not found"
Nov 23 23:23:02.722545 kubelet[3070]: I1123 23:23:02.722516 3070 policy_none.go:49] "None policy: Start"
Nov 23 23:23:02.722545 kubelet[3070]: I1123 23:23:02.722541 3070 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 23 23:23:02.722545 kubelet[3070]: I1123 23:23:02.722553 3070 state_mem.go:35] "Initializing new in-memory state store"
Nov 23 23:23:02.730284 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 23 23:23:02.740764 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 23 23:23:02.750546 kubelet[3070]: I1123 23:23:02.750521 3070 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 23 23:23:02.751608 kubelet[3070]: I1123 23:23:02.751543 3070 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 23 23:23:02.751608 kubelet[3070]: I1123 23:23:02.751563 3070 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 23 23:23:02.751608 kubelet[3070]: I1123 23:23:02.751580 3070 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 23 23:23:02.751608 kubelet[3070]: I1123 23:23:02.751584 3070 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 23 23:23:02.751742 kubelet[3070]: E1123 23:23:02.751729 3070 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 23 23:23:02.751907 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 23 23:23:02.752624 kubelet[3070]: E1123 23:23:02.752563 3070 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 23 23:23:02.754117 kubelet[3070]: E1123 23:23:02.754035 3070 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 23 23:23:02.754176 kubelet[3070]: I1123 23:23:02.754166 3070 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 23:23:02.754195 kubelet[3070]: I1123 23:23:02.754175 3070 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 23:23:02.754901 kubelet[3070]: I1123 23:23:02.754889 3070 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 23:23:02.756467 kubelet[3070]: E1123 23:23:02.756452 3070 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 23 23:23:02.756567 kubelet[3070]: E1123 23:23:02.756551 3070 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.1-a-856cba2a05\" not found" Nov 23 23:23:02.797036 kubelet[3070]: E1123 23:23:02.796939 3070 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.1-a-856cba2a05?timeout=10s\": dial tcp 10.200.20.43:6443: connect: connection refused" interval="400ms" Nov 23 23:23:02.855463 kubelet[3070]: I1123 23:23:02.855428 3070 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:02.857096 kubelet[3070]: E1123 23:23:02.855766 3070 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.43:6443/api/v1/nodes\": dial tcp 10.200.20.43:6443: connect: connection refused" node="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:02.863621 systemd[1]: Created slice kubepods-burstable-podec83057adb0507805b29be263e9a8352.slice - libcontainer container kubepods-burstable-podec83057adb0507805b29be263e9a8352.slice. Nov 23 23:23:02.868770 kubelet[3070]: E1123 23:23:02.868751 3070 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-856cba2a05\" not found" node="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:02.874152 systemd[1]: Created slice kubepods-burstable-pod5bb79a213347d5807f2fc7e56da8bda8.slice - libcontainer container kubepods-burstable-pod5bb79a213347d5807f2fc7e56da8bda8.slice. 
Nov 23 23:23:02.875705 kubelet[3070]: E1123 23:23:02.875588 3070 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-856cba2a05\" not found" node="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:02.886437 systemd[1]: Created slice kubepods-burstable-pod1b591ee48371b4cd9a3423fd7df1f769.slice - libcontainer container kubepods-burstable-pod1b591ee48371b4cd9a3423fd7df1f769.slice. Nov 23 23:23:02.887745 kubelet[3070]: E1123 23:23:02.887655 3070 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-856cba2a05\" not found" node="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:02.897924 kubelet[3070]: I1123 23:23:02.897905 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec83057adb0507805b29be263e9a8352-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.1-a-856cba2a05\" (UID: \"ec83057adb0507805b29be263e9a8352\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:02.897924 kubelet[3070]: I1123 23:23:02.897943 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bb79a213347d5807f2fc7e56da8bda8-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.1-a-856cba2a05\" (UID: \"5bb79a213347d5807f2fc7e56da8bda8\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:02.897924 kubelet[3070]: I1123 23:23:02.897956 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec83057adb0507805b29be263e9a8352-ca-certs\") pod \"kube-apiserver-ci-4459.2.1-a-856cba2a05\" (UID: \"ec83057adb0507805b29be263e9a8352\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:02.897924 
kubelet[3070]: I1123 23:23:02.897967 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bb79a213347d5807f2fc7e56da8bda8-ca-certs\") pod \"kube-controller-manager-ci-4459.2.1-a-856cba2a05\" (UID: \"5bb79a213347d5807f2fc7e56da8bda8\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:02.897924 kubelet[3070]: I1123 23:23:02.897978 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bb79a213347d5807f2fc7e56da8bda8-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.1-a-856cba2a05\" (UID: \"5bb79a213347d5807f2fc7e56da8bda8\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:02.898136 kubelet[3070]: I1123 23:23:02.898021 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bb79a213347d5807f2fc7e56da8bda8-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.1-a-856cba2a05\" (UID: \"5bb79a213347d5807f2fc7e56da8bda8\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:02.898136 kubelet[3070]: I1123 23:23:02.898045 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bb79a213347d5807f2fc7e56da8bda8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.1-a-856cba2a05\" (UID: \"5bb79a213347d5807f2fc7e56da8bda8\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:02.898136 kubelet[3070]: I1123 23:23:02.898056 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b591ee48371b4cd9a3423fd7df1f769-kubeconfig\") pod 
\"kube-scheduler-ci-4459.2.1-a-856cba2a05\" (UID: \"1b591ee48371b4cd9a3423fd7df1f769\") " pod="kube-system/kube-scheduler-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:02.898136 kubelet[3070]: I1123 23:23:02.898066 3070 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec83057adb0507805b29be263e9a8352-k8s-certs\") pod \"kube-apiserver-ci-4459.2.1-a-856cba2a05\" (UID: \"ec83057adb0507805b29be263e9a8352\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:03.058203 kubelet[3070]: I1123 23:23:03.057782 3070 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:03.058480 kubelet[3070]: E1123 23:23:03.058362 3070 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.43:6443/api/v1/nodes\": dial tcp 10.200.20.43:6443: connect: connection refused" node="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:03.170108 containerd[1892]: time="2025-11-23T23:23:03.170038750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.1-a-856cba2a05,Uid:ec83057adb0507805b29be263e9a8352,Namespace:kube-system,Attempt:0,}" Nov 23 23:23:03.176784 containerd[1892]: time="2025-11-23T23:23:03.176571001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.1-a-856cba2a05,Uid:5bb79a213347d5807f2fc7e56da8bda8,Namespace:kube-system,Attempt:0,}" Nov 23 23:23:03.192968 containerd[1892]: time="2025-11-23T23:23:03.192939449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.1-a-856cba2a05,Uid:1b591ee48371b4cd9a3423fd7df1f769,Namespace:kube-system,Attempt:0,}" Nov 23 23:23:03.197580 kubelet[3070]: E1123 23:23:03.197553 3070 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.20.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.1-a-856cba2a05?timeout=10s\": dial tcp 10.200.20.43:6443: connect: connection refused" interval="800ms" Nov 23 23:23:03.236408 containerd[1892]: time="2025-11-23T23:23:03.236314043Z" level=info msg="connecting to shim 5c7089a7845353966afccf0f53c9455fb45c969b8bc58d682f389e1f5b8e9e36" address="unix:///run/containerd/s/feb2c659102ef26e542bb3eef1fc79630d843676a6183c70699ecd1b5eec3956" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:03.237050 containerd[1892]: time="2025-11-23T23:23:03.237027830Z" level=info msg="connecting to shim 6d9703bc19d5b061221fe8203a58865832e958665102bb69eb84bfe8564f7260" address="unix:///run/containerd/s/6d73eea68685ebb9aae383f64c83a3c36ae9dcd61809cdef198dbdc5ac75f9ff" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:03.259418 systemd[1]: Started cri-containerd-6d9703bc19d5b061221fe8203a58865832e958665102bb69eb84bfe8564f7260.scope - libcontainer container 6d9703bc19d5b061221fe8203a58865832e958665102bb69eb84bfe8564f7260. Nov 23 23:23:03.265469 systemd[1]: Started cri-containerd-5c7089a7845353966afccf0f53c9455fb45c969b8bc58d682f389e1f5b8e9e36.scope - libcontainer container 5c7089a7845353966afccf0f53c9455fb45c969b8bc58d682f389e1f5b8e9e36. Nov 23 23:23:03.266611 containerd[1892]: time="2025-11-23T23:23:03.266514277Z" level=info msg="connecting to shim 0f6d61001369919d7473a71945c554285faab5d1dd1b5c2f78901045ec071b12" address="unix:///run/containerd/s/fab937a06021af157f3568f6ee7747b19a626d0fe2b72f7533f54bc583e63b0f" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:03.291450 systemd[1]: Started cri-containerd-0f6d61001369919d7473a71945c554285faab5d1dd1b5c2f78901045ec071b12.scope - libcontainer container 0f6d61001369919d7473a71945c554285faab5d1dd1b5c2f78901045ec071b12. 
Nov 23 23:23:03.315132 containerd[1892]: time="2025-11-23T23:23:03.314720587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.1-a-856cba2a05,Uid:ec83057adb0507805b29be263e9a8352,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d9703bc19d5b061221fe8203a58865832e958665102bb69eb84bfe8564f7260\"" Nov 23 23:23:03.317489 containerd[1892]: time="2025-11-23T23:23:03.317461281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.1-a-856cba2a05,Uid:5bb79a213347d5807f2fc7e56da8bda8,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c7089a7845353966afccf0f53c9455fb45c969b8bc58d682f389e1f5b8e9e36\"" Nov 23 23:23:03.323962 containerd[1892]: time="2025-11-23T23:23:03.323941578Z" level=info msg="CreateContainer within sandbox \"6d9703bc19d5b061221fe8203a58865832e958665102bb69eb84bfe8564f7260\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 23 23:23:03.329278 containerd[1892]: time="2025-11-23T23:23:03.328987637Z" level=info msg="CreateContainer within sandbox \"5c7089a7845353966afccf0f53c9455fb45c969b8bc58d682f389e1f5b8e9e36\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 23 23:23:03.333276 containerd[1892]: time="2025-11-23T23:23:03.333169753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.1-a-856cba2a05,Uid:1b591ee48371b4cd9a3423fd7df1f769,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f6d61001369919d7473a71945c554285faab5d1dd1b5c2f78901045ec071b12\"" Nov 23 23:23:03.338897 containerd[1892]: time="2025-11-23T23:23:03.338858660Z" level=info msg="CreateContainer within sandbox \"0f6d61001369919d7473a71945c554285faab5d1dd1b5c2f78901045ec071b12\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 23 23:23:03.351946 containerd[1892]: time="2025-11-23T23:23:03.351912185Z" level=info msg="Container 40e40d744f566185cc084652fd6da0f457959f7c4898101da3006ed107182306: CDI devices 
from CRI Config.CDIDevices: []" Nov 23 23:23:03.357125 containerd[1892]: time="2025-11-23T23:23:03.357090953Z" level=info msg="Container 30e43997c84c8a180a32644e170e34d9852e183df6d42e67be0348d6c70b4578: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:23:03.362052 containerd[1892]: time="2025-11-23T23:23:03.362017984Z" level=info msg="Container 133161841831f0354934493a3bc463eb12cc29ab91704dc72ef6d20ceea6c05d: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:23:03.387101 containerd[1892]: time="2025-11-23T23:23:03.387068602Z" level=info msg="CreateContainer within sandbox \"6d9703bc19d5b061221fe8203a58865832e958665102bb69eb84bfe8564f7260\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"40e40d744f566185cc084652fd6da0f457959f7c4898101da3006ed107182306\"" Nov 23 23:23:03.387727 containerd[1892]: time="2025-11-23T23:23:03.387700258Z" level=info msg="StartContainer for \"40e40d744f566185cc084652fd6da0f457959f7c4898101da3006ed107182306\"" Nov 23 23:23:03.389601 containerd[1892]: time="2025-11-23T23:23:03.389460635Z" level=info msg="CreateContainer within sandbox \"5c7089a7845353966afccf0f53c9455fb45c969b8bc58d682f389e1f5b8e9e36\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"30e43997c84c8a180a32644e170e34d9852e183df6d42e67be0348d6c70b4578\"" Nov 23 23:23:03.389847 containerd[1892]: time="2025-11-23T23:23:03.389825769Z" level=info msg="connecting to shim 40e40d744f566185cc084652fd6da0f457959f7c4898101da3006ed107182306" address="unix:///run/containerd/s/6d73eea68685ebb9aae383f64c83a3c36ae9dcd61809cdef198dbdc5ac75f9ff" protocol=ttrpc version=3 Nov 23 23:23:03.390364 containerd[1892]: time="2025-11-23T23:23:03.390345204Z" level=info msg="StartContainer for \"30e43997c84c8a180a32644e170e34d9852e183df6d42e67be0348d6c70b4578\"" Nov 23 23:23:03.391739 containerd[1892]: time="2025-11-23T23:23:03.391715263Z" level=info msg="connecting to shim 
30e43997c84c8a180a32644e170e34d9852e183df6d42e67be0348d6c70b4578" address="unix:///run/containerd/s/feb2c659102ef26e542bb3eef1fc79630d843676a6183c70699ecd1b5eec3956" protocol=ttrpc version=3 Nov 23 23:23:03.391893 containerd[1892]: time="2025-11-23T23:23:03.390460424Z" level=info msg="CreateContainer within sandbox \"0f6d61001369919d7473a71945c554285faab5d1dd1b5c2f78901045ec071b12\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"133161841831f0354934493a3bc463eb12cc29ab91704dc72ef6d20ceea6c05d\"" Nov 23 23:23:03.392543 containerd[1892]: time="2025-11-23T23:23:03.392516589Z" level=info msg="StartContainer for \"133161841831f0354934493a3bc463eb12cc29ab91704dc72ef6d20ceea6c05d\"" Nov 23 23:23:03.393836 containerd[1892]: time="2025-11-23T23:23:03.393806693Z" level=info msg="connecting to shim 133161841831f0354934493a3bc463eb12cc29ab91704dc72ef6d20ceea6c05d" address="unix:///run/containerd/s/fab937a06021af157f3568f6ee7747b19a626d0fe2b72f7533f54bc583e63b0f" protocol=ttrpc version=3 Nov 23 23:23:03.408401 systemd[1]: Started cri-containerd-40e40d744f566185cc084652fd6da0f457959f7c4898101da3006ed107182306.scope - libcontainer container 40e40d744f566185cc084652fd6da0f457959f7c4898101da3006ed107182306. Nov 23 23:23:03.410936 systemd[1]: Started cri-containerd-30e43997c84c8a180a32644e170e34d9852e183df6d42e67be0348d6c70b4578.scope - libcontainer container 30e43997c84c8a180a32644e170e34d9852e183df6d42e67be0348d6c70b4578. Nov 23 23:23:03.426372 systemd[1]: Started cri-containerd-133161841831f0354934493a3bc463eb12cc29ab91704dc72ef6d20ceea6c05d.scope - libcontainer container 133161841831f0354934493a3bc463eb12cc29ab91704dc72ef6d20ceea6c05d. 
Nov 23 23:23:03.462559 kubelet[3070]: I1123 23:23:03.462525 3070 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:03.462825 kubelet[3070]: E1123 23:23:03.462800 3070 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.43:6443/api/v1/nodes\": dial tcp 10.200.20.43:6443: connect: connection refused" node="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:03.479263 containerd[1892]: time="2025-11-23T23:23:03.477599173Z" level=info msg="StartContainer for \"40e40d744f566185cc084652fd6da0f457959f7c4898101da3006ed107182306\" returns successfully" Nov 23 23:23:03.479263 containerd[1892]: time="2025-11-23T23:23:03.477763643Z" level=info msg="StartContainer for \"30e43997c84c8a180a32644e170e34d9852e183df6d42e67be0348d6c70b4578\" returns successfully" Nov 23 23:23:03.485764 containerd[1892]: time="2025-11-23T23:23:03.485727714Z" level=info msg="StartContainer for \"133161841831f0354934493a3bc463eb12cc29ab91704dc72ef6d20ceea6c05d\" returns successfully" Nov 23 23:23:03.760723 kubelet[3070]: E1123 23:23:03.760627 3070 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-856cba2a05\" not found" node="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:03.762187 kubelet[3070]: E1123 23:23:03.762166 3070 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-856cba2a05\" not found" node="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:03.766198 kubelet[3070]: E1123 23:23:03.766171 3070 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-856cba2a05\" not found" node="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:04.265379 kubelet[3070]: I1123 23:23:04.265349 3070 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:04.768713 kubelet[3070]: E1123 23:23:04.768684 
3070 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-856cba2a05\" not found" node="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:04.769226 kubelet[3070]: E1123 23:23:04.769209 3070 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-856cba2a05\" not found" node="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:04.816578 kubelet[3070]: E1123 23:23:04.816546 3070 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.1-a-856cba2a05\" not found" node="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:04.987958 kubelet[3070]: I1123 23:23:04.987790 3070 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:04.996265 kubelet[3070]: I1123 23:23:04.996235 3070 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:05.006255 kubelet[3070]: E1123 23:23:05.006146 3070 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.1-a-856cba2a05\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:05.006255 kubelet[3070]: I1123 23:23:05.006165 3070 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:05.008153 kubelet[3070]: E1123 23:23:05.008059 3070 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.1-a-856cba2a05\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:05.008153 kubelet[3070]: I1123 23:23:05.008078 3070 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:05.009457 kubelet[3070]: 
E1123 23:23:05.009433 3070 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.1-a-856cba2a05\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:05.587500 kubelet[3070]: I1123 23:23:05.587454 3070 apiserver.go:52] "Watching apiserver" Nov 23 23:23:05.596797 kubelet[3070]: I1123 23:23:05.596765 3070 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 23 23:23:07.241843 systemd[1]: Reload requested from client PID 3344 ('systemctl') (unit session-9.scope)... Nov 23 23:23:07.241859 systemd[1]: Reloading... Nov 23 23:23:07.334348 zram_generator::config[3394]: No configuration found. Nov 23 23:23:07.497956 systemd[1]: Reloading finished in 255 ms. Nov 23 23:23:07.527068 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:23:07.541513 systemd[1]: kubelet.service: Deactivated successfully. Nov 23 23:23:07.541693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:23:07.541735 systemd[1]: kubelet.service: Consumed 672ms CPU time, 126M memory peak. Nov 23 23:23:07.543496 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:23:07.997083 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:23:07.999950 (kubelet)[3455]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 23:23:08.031432 kubelet[3455]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:23:08.031654 kubelet[3455]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Nov 23 23:23:08.032280 kubelet[3455]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:23:08.032280 kubelet[3455]: I1123 23:23:08.031737 3455 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 23:23:08.037396 kubelet[3455]: I1123 23:23:08.037379 3455 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 23 23:23:08.037564 kubelet[3455]: I1123 23:23:08.037553 3455 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 23:23:08.041319 kubelet[3455]: I1123 23:23:08.041302 3455 server.go:956] "Client rotation is on, will bootstrap in background" Nov 23 23:23:08.042661 kubelet[3455]: I1123 23:23:08.042640 3455 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 23 23:23:08.045947 kubelet[3455]: I1123 23:23:08.045907 3455 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 23:23:08.048947 kubelet[3455]: I1123 23:23:08.048932 3455 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 23:23:08.051508 kubelet[3455]: I1123 23:23:08.051491 3455 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 23 23:23:08.051750 kubelet[3455]: I1123 23:23:08.051731 3455 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 23:23:08.052287 kubelet[3455]: I1123 23:23:08.051810 3455 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.1-a-856cba2a05","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 23:23:08.052530 kubelet[3455]: I1123 23:23:08.052415 3455 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 
23:23:08.052771 kubelet[3455]: I1123 23:23:08.052751 3455 container_manager_linux.go:303] "Creating device plugin manager" Nov 23 23:23:08.053100 kubelet[3455]: I1123 23:23:08.053080 3455 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:23:08.053564 kubelet[3455]: I1123 23:23:08.053550 3455 kubelet.go:480] "Attempting to sync node with API server" Nov 23 23:23:08.053836 kubelet[3455]: I1123 23:23:08.053819 3455 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 23:23:08.054671 kubelet[3455]: I1123 23:23:08.054012 3455 kubelet.go:386] "Adding apiserver pod source" Nov 23 23:23:08.054839 kubelet[3455]: I1123 23:23:08.054813 3455 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 23:23:08.057444 kubelet[3455]: I1123 23:23:08.057028 3455 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 23:23:08.057517 kubelet[3455]: I1123 23:23:08.057477 3455 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 23 23:23:08.058929 kubelet[3455]: I1123 23:23:08.058909 3455 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 23 23:23:08.058990 kubelet[3455]: I1123 23:23:08.058965 3455 server.go:1289] "Started kubelet" Nov 23 23:23:08.064342 kubelet[3455]: I1123 23:23:08.064315 3455 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 23:23:08.069159 kubelet[3455]: I1123 23:23:08.068820 3455 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 23:23:08.070794 kubelet[3455]: I1123 23:23:08.070158 3455 server.go:317] "Adding debug handlers to kubelet server" Nov 23 23:23:08.072521 kubelet[3455]: I1123 23:23:08.072335 3455 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 
23:23:08.074803 kubelet[3455]: I1123 23:23:08.074313 3455 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 23 23:23:08.075316 kubelet[3455]: E1123 23:23:08.075133 3455 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.1-a-856cba2a05\" not found" Nov 23 23:23:08.076174 kubelet[3455]: I1123 23:23:08.075849 3455 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 23 23:23:08.076366 kubelet[3455]: I1123 23:23:08.076355 3455 reconciler.go:26] "Reconciler: start to sync state" Nov 23 23:23:08.081694 kubelet[3455]: I1123 23:23:08.081605 3455 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 23:23:08.081892 kubelet[3455]: I1123 23:23:08.081875 3455 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 23:23:08.085884 kubelet[3455]: I1123 23:23:08.085856 3455 factory.go:223] Registration of the containerd container factory successfully Nov 23 23:23:08.085990 kubelet[3455]: I1123 23:23:08.085980 3455 factory.go:223] Registration of the systemd container factory successfully Nov 23 23:23:08.086134 kubelet[3455]: I1123 23:23:08.086117 3455 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 23:23:08.086254 kubelet[3455]: E1123 23:23:08.086221 3455 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 23:23:08.091737 kubelet[3455]: I1123 23:23:08.085978 3455 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 23 23:23:08.092547 kubelet[3455]: I1123 23:23:08.092533 3455 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 23 23:23:08.092628 kubelet[3455]: I1123 23:23:08.092619 3455 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 23 23:23:08.092677 kubelet[3455]: I1123 23:23:08.092670 3455 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 23 23:23:08.092716 kubelet[3455]: I1123 23:23:08.092710 3455 kubelet.go:2436] "Starting kubelet main sync loop" Nov 23 23:23:08.092784 kubelet[3455]: E1123 23:23:08.092771 3455 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 23:23:08.121578 kubelet[3455]: I1123 23:23:08.121555 3455 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 23:23:08.121578 kubelet[3455]: I1123 23:23:08.121571 3455 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 23:23:08.121670 kubelet[3455]: I1123 23:23:08.121588 3455 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:23:08.121688 kubelet[3455]: I1123 23:23:08.121674 3455 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 23 23:23:08.121688 kubelet[3455]: I1123 23:23:08.121681 3455 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 23 23:23:08.121717 kubelet[3455]: I1123 23:23:08.121693 3455 policy_none.go:49] "None policy: Start" Nov 23 23:23:08.121717 kubelet[3455]: I1123 23:23:08.121701 3455 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 23 23:23:08.121717 kubelet[3455]: I1123 23:23:08.121707 3455 state_mem.go:35] "Initializing new in-memory state store" Nov 23 23:23:08.121778 kubelet[3455]: I1123 23:23:08.121760 3455 state_mem.go:75] "Updated machine memory state" Nov 23 23:23:08.124760 kubelet[3455]: E1123 23:23:08.124745 3455 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 23 23:23:08.125524 kubelet[3455]: I1123 
23:23:08.125503 3455 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 23:23:08.125583 kubelet[3455]: I1123 23:23:08.125520 3455 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 23:23:08.125744 kubelet[3455]: I1123 23:23:08.125726 3455 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 23:23:08.127339 kubelet[3455]: E1123 23:23:08.127025 3455 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 23 23:23:08.194089 kubelet[3455]: I1123 23:23:08.194068 3455 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:08.194222 kubelet[3455]: I1123 23:23:08.194200 3455 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:08.194343 kubelet[3455]: I1123 23:23:08.194125 3455 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:08.201782 kubelet[3455]: I1123 23:23:08.201750 3455 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 23 23:23:08.206101 kubelet[3455]: I1123 23:23:08.206076 3455 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 23 23:23:08.206411 kubelet[3455]: I1123 23:23:08.206388 3455 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 23 23:23:08.228711 kubelet[3455]: I1123 23:23:08.228692 3455 kubelet_node_status.go:75] "Attempting to register node" 
node="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:08.240831 kubelet[3455]: I1123 23:23:08.240808 3455 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:08.241427 kubelet[3455]: I1123 23:23:08.240862 3455 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:08.377168 kubelet[3455]: I1123 23:23:08.376993 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bb79a213347d5807f2fc7e56da8bda8-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.1-a-856cba2a05\" (UID: \"5bb79a213347d5807f2fc7e56da8bda8\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:08.377168 kubelet[3455]: I1123 23:23:08.377025 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec83057adb0507805b29be263e9a8352-k8s-certs\") pod \"kube-apiserver-ci-4459.2.1-a-856cba2a05\" (UID: \"ec83057adb0507805b29be263e9a8352\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:08.377168 kubelet[3455]: I1123 23:23:08.377040 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec83057adb0507805b29be263e9a8352-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.1-a-856cba2a05\" (UID: \"ec83057adb0507805b29be263e9a8352\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:08.377168 kubelet[3455]: I1123 23:23:08.377054 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bb79a213347d5807f2fc7e56da8bda8-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.1-a-856cba2a05\" (UID: \"5bb79a213347d5807f2fc7e56da8bda8\") " 
pod="kube-system/kube-controller-manager-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:08.377168 kubelet[3455]: I1123 23:23:08.377073 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bb79a213347d5807f2fc7e56da8bda8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.1-a-856cba2a05\" (UID: \"5bb79a213347d5807f2fc7e56da8bda8\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:08.377365 kubelet[3455]: I1123 23:23:08.377083 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b591ee48371b4cd9a3423fd7df1f769-kubeconfig\") pod \"kube-scheduler-ci-4459.2.1-a-856cba2a05\" (UID: \"1b591ee48371b4cd9a3423fd7df1f769\") " pod="kube-system/kube-scheduler-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:08.377365 kubelet[3455]: I1123 23:23:08.377094 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec83057adb0507805b29be263e9a8352-ca-certs\") pod \"kube-apiserver-ci-4459.2.1-a-856cba2a05\" (UID: \"ec83057adb0507805b29be263e9a8352\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:08.377365 kubelet[3455]: I1123 23:23:08.377102 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bb79a213347d5807f2fc7e56da8bda8-ca-certs\") pod \"kube-controller-manager-ci-4459.2.1-a-856cba2a05\" (UID: \"5bb79a213347d5807f2fc7e56da8bda8\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:08.377365 kubelet[3455]: I1123 23:23:08.377111 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/5bb79a213347d5807f2fc7e56da8bda8-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.1-a-856cba2a05\" (UID: \"5bb79a213347d5807f2fc7e56da8bda8\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:09.056012 kubelet[3455]: I1123 23:23:09.055983 3455 apiserver.go:52] "Watching apiserver" Nov 23 23:23:09.076946 kubelet[3455]: I1123 23:23:09.076917 3455 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 23 23:23:09.108699 kubelet[3455]: I1123 23:23:09.108234 3455 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:09.127407 kubelet[3455]: I1123 23:23:09.127391 3455 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 23 23:23:09.127774 kubelet[3455]: E1123 23:23:09.127745 3455 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.1-a-856cba2a05\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.1-a-856cba2a05" Nov 23 23:23:09.128754 kubelet[3455]: I1123 23:23:09.128690 3455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.1-a-856cba2a05" podStartSLOduration=1.128670528 podStartE2EDuration="1.128670528s" podCreationTimestamp="2025-11-23 23:23:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:23:09.127670912 +0000 UTC m=+1.124711029" watchObservedRunningTime="2025-11-23 23:23:09.128670528 +0000 UTC m=+1.125710645" Nov 23 23:23:09.157889 kubelet[3455]: I1123 23:23:09.157860 3455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.1-a-856cba2a05" podStartSLOduration=1.157850474 
podStartE2EDuration="1.157850474s" podCreationTimestamp="2025-11-23 23:23:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:23:09.145082666 +0000 UTC m=+1.142122791" watchObservedRunningTime="2025-11-23 23:23:09.157850474 +0000 UTC m=+1.154890599" Nov 23 23:23:09.158510 kubelet[3455]: I1123 23:23:09.158462 3455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.1-a-856cba2a05" podStartSLOduration=1.1584522210000001 podStartE2EDuration="1.158452221s" podCreationTimestamp="2025-11-23 23:23:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:23:09.158259199 +0000 UTC m=+1.155299316" watchObservedRunningTime="2025-11-23 23:23:09.158452221 +0000 UTC m=+1.155492346" Nov 23 23:23:12.036661 kubelet[3455]: I1123 23:23:12.036449 3455 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 23 23:23:12.037437 containerd[1892]: time="2025-11-23T23:23:12.037116281Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 23 23:23:12.037644 kubelet[3455]: I1123 23:23:12.037284 3455 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 23 23:23:13.002942 systemd[1]: Created slice kubepods-besteffort-podad7de006_f480_4b2f_89b6_6a4aa652a043.slice - libcontainer container kubepods-besteffort-podad7de006_f480_4b2f_89b6_6a4aa652a043.slice. 
Nov 23 23:23:13.008556 kubelet[3455]: I1123 23:23:13.008454 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad7de006-f480-4b2f-89b6-6a4aa652a043-xtables-lock\") pod \"kube-proxy-tt74g\" (UID: \"ad7de006-f480-4b2f-89b6-6a4aa652a043\") " pod="kube-system/kube-proxy-tt74g" Nov 23 23:23:13.008556 kubelet[3455]: I1123 23:23:13.008481 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad7de006-f480-4b2f-89b6-6a4aa652a043-lib-modules\") pod \"kube-proxy-tt74g\" (UID: \"ad7de006-f480-4b2f-89b6-6a4aa652a043\") " pod="kube-system/kube-proxy-tt74g" Nov 23 23:23:13.008556 kubelet[3455]: I1123 23:23:13.008498 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ad7de006-f480-4b2f-89b6-6a4aa652a043-kube-proxy\") pod \"kube-proxy-tt74g\" (UID: \"ad7de006-f480-4b2f-89b6-6a4aa652a043\") " pod="kube-system/kube-proxy-tt74g" Nov 23 23:23:13.008556 kubelet[3455]: I1123 23:23:13.008507 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6r97\" (UniqueName: \"kubernetes.io/projected/ad7de006-f480-4b2f-89b6-6a4aa652a043-kube-api-access-r6r97\") pod \"kube-proxy-tt74g\" (UID: \"ad7de006-f480-4b2f-89b6-6a4aa652a043\") " pod="kube-system/kube-proxy-tt74g" Nov 23 23:23:13.234622 systemd[1]: Created slice kubepods-besteffort-podc04989b2_b973_4d6c_886d_d90a69b516b0.slice - libcontainer container kubepods-besteffort-podc04989b2_b973_4d6c_886d_d90a69b516b0.slice. 
Nov 23 23:23:13.310335 kubelet[3455]: I1123 23:23:13.310236 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c04989b2-b973-4d6c-886d-d90a69b516b0-var-lib-calico\") pod \"tigera-operator-7dcd859c48-9nhps\" (UID: \"c04989b2-b973-4d6c-886d-d90a69b516b0\") " pod="tigera-operator/tigera-operator-7dcd859c48-9nhps" Nov 23 23:23:13.310335 kubelet[3455]: I1123 23:23:13.310338 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmb7f\" (UniqueName: \"kubernetes.io/projected/c04989b2-b973-4d6c-886d-d90a69b516b0-kube-api-access-qmb7f\") pod \"tigera-operator-7dcd859c48-9nhps\" (UID: \"c04989b2-b973-4d6c-886d-d90a69b516b0\") " pod="tigera-operator/tigera-operator-7dcd859c48-9nhps" Nov 23 23:23:13.311775 containerd[1892]: time="2025-11-23T23:23:13.311743073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tt74g,Uid:ad7de006-f480-4b2f-89b6-6a4aa652a043,Namespace:kube-system,Attempt:0,}" Nov 23 23:23:13.340631 containerd[1892]: time="2025-11-23T23:23:13.340598857Z" level=info msg="connecting to shim d129ade0261b72644d8aafc6727c3c35de1d2d113284b9808da49d1f6a728d8f" address="unix:///run/containerd/s/e56331272fd7a5c20d011acb46b0e8315e5d8aabab9932ccc8b32cedec78d90a" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:13.361366 systemd[1]: Started cri-containerd-d129ade0261b72644d8aafc6727c3c35de1d2d113284b9808da49d1f6a728d8f.scope - libcontainer container d129ade0261b72644d8aafc6727c3c35de1d2d113284b9808da49d1f6a728d8f. 
Nov 23 23:23:13.382058 containerd[1892]: time="2025-11-23T23:23:13.382030852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tt74g,Uid:ad7de006-f480-4b2f-89b6-6a4aa652a043,Namespace:kube-system,Attempt:0,} returns sandbox id \"d129ade0261b72644d8aafc6727c3c35de1d2d113284b9808da49d1f6a728d8f\"" Nov 23 23:23:13.389067 containerd[1892]: time="2025-11-23T23:23:13.389021855Z" level=info msg="CreateContainer within sandbox \"d129ade0261b72644d8aafc6727c3c35de1d2d113284b9808da49d1f6a728d8f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 23 23:23:13.405731 containerd[1892]: time="2025-11-23T23:23:13.405692673Z" level=info msg="Container 1122506f93c04f72e63f4195b574fe4d6626d2d49497f646cb458281325ea8e3: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:23:13.422905 containerd[1892]: time="2025-11-23T23:23:13.422873916Z" level=info msg="CreateContainer within sandbox \"d129ade0261b72644d8aafc6727c3c35de1d2d113284b9808da49d1f6a728d8f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1122506f93c04f72e63f4195b574fe4d6626d2d49497f646cb458281325ea8e3\"" Nov 23 23:23:13.424218 containerd[1892]: time="2025-11-23T23:23:13.423877875Z" level=info msg="StartContainer for \"1122506f93c04f72e63f4195b574fe4d6626d2d49497f646cb458281325ea8e3\"" Nov 23 23:23:13.425130 containerd[1892]: time="2025-11-23T23:23:13.425104578Z" level=info msg="connecting to shim 1122506f93c04f72e63f4195b574fe4d6626d2d49497f646cb458281325ea8e3" address="unix:///run/containerd/s/e56331272fd7a5c20d011acb46b0e8315e5d8aabab9932ccc8b32cedec78d90a" protocol=ttrpc version=3 Nov 23 23:23:13.445358 systemd[1]: Started cri-containerd-1122506f93c04f72e63f4195b574fe4d6626d2d49497f646cb458281325ea8e3.scope - libcontainer container 1122506f93c04f72e63f4195b574fe4d6626d2d49497f646cb458281325ea8e3. 
Nov 23 23:23:13.494073 containerd[1892]: time="2025-11-23T23:23:13.494044459Z" level=info msg="StartContainer for \"1122506f93c04f72e63f4195b574fe4d6626d2d49497f646cb458281325ea8e3\" returns successfully" Nov 23 23:23:13.537921 containerd[1892]: time="2025-11-23T23:23:13.537676898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-9nhps,Uid:c04989b2-b973-4d6c-886d-d90a69b516b0,Namespace:tigera-operator,Attempt:0,}" Nov 23 23:23:13.567079 containerd[1892]: time="2025-11-23T23:23:13.566581060Z" level=info msg="connecting to shim b66ac7a2d15c7b648809d646acdce61d49744ba41f878eb62ff37180f11f20e7" address="unix:///run/containerd/s/a43a51a75b5ff8dc2549abca9c257452b2d1d86e1c00ebc2fec1f5227c871aff" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:13.594419 systemd[1]: Started cri-containerd-b66ac7a2d15c7b648809d646acdce61d49744ba41f878eb62ff37180f11f20e7.scope - libcontainer container b66ac7a2d15c7b648809d646acdce61d49744ba41f878eb62ff37180f11f20e7. Nov 23 23:23:13.629115 containerd[1892]: time="2025-11-23T23:23:13.629051498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-9nhps,Uid:c04989b2-b973-4d6c-886d-d90a69b516b0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b66ac7a2d15c7b648809d646acdce61d49744ba41f878eb62ff37180f11f20e7\"" Nov 23 23:23:13.631007 containerd[1892]: time="2025-11-23T23:23:13.630944910Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 23 23:23:14.129796 kubelet[3455]: I1123 23:23:14.129473 3455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tt74g" podStartSLOduration=2.129459254 podStartE2EDuration="2.129459254s" podCreationTimestamp="2025-11-23 23:23:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:23:14.129026609 +0000 UTC m=+6.126066742" watchObservedRunningTime="2025-11-23 
23:23:14.129459254 +0000 UTC m=+6.126499427" Nov 23 23:23:15.081596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3199749967.mount: Deactivated successfully. Nov 23 23:23:15.447334 containerd[1892]: time="2025-11-23T23:23:15.447174357Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:15.449489 containerd[1892]: time="2025-11-23T23:23:15.449462573Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 23 23:23:15.451898 containerd[1892]: time="2025-11-23T23:23:15.451872360Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:15.455522 containerd[1892]: time="2025-11-23T23:23:15.455494738Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:15.456288 containerd[1892]: time="2025-11-23T23:23:15.456264002Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.825276099s" Nov 23 23:23:15.456311 containerd[1892]: time="2025-11-23T23:23:15.456289075Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 23 23:23:15.462354 containerd[1892]: time="2025-11-23T23:23:15.462328544Z" level=info msg="CreateContainer within sandbox \"b66ac7a2d15c7b648809d646acdce61d49744ba41f878eb62ff37180f11f20e7\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 23 23:23:15.478312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1970041255.mount: Deactivated successfully. Nov 23 23:23:15.478696 containerd[1892]: time="2025-11-23T23:23:15.478665480Z" level=info msg="Container 02794f2bb611ae57662de0538ad45503edba2a64183facb3141ce59803b0f7c8: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:23:15.491995 containerd[1892]: time="2025-11-23T23:23:15.491935416Z" level=info msg="CreateContainer within sandbox \"b66ac7a2d15c7b648809d646acdce61d49744ba41f878eb62ff37180f11f20e7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"02794f2bb611ae57662de0538ad45503edba2a64183facb3141ce59803b0f7c8\"" Nov 23 23:23:15.493000 containerd[1892]: time="2025-11-23T23:23:15.492404039Z" level=info msg="StartContainer for \"02794f2bb611ae57662de0538ad45503edba2a64183facb3141ce59803b0f7c8\"" Nov 23 23:23:15.493118 containerd[1892]: time="2025-11-23T23:23:15.493098669Z" level=info msg="connecting to shim 02794f2bb611ae57662de0538ad45503edba2a64183facb3141ce59803b0f7c8" address="unix:///run/containerd/s/a43a51a75b5ff8dc2549abca9c257452b2d1d86e1c00ebc2fec1f5227c871aff" protocol=ttrpc version=3 Nov 23 23:23:15.507351 systemd[1]: Started cri-containerd-02794f2bb611ae57662de0538ad45503edba2a64183facb3141ce59803b0f7c8.scope - libcontainer container 02794f2bb611ae57662de0538ad45503edba2a64183facb3141ce59803b0f7c8. 
Nov 23 23:23:15.533288 containerd[1892]: time="2025-11-23T23:23:15.533269160Z" level=info msg="StartContainer for \"02794f2bb611ae57662de0538ad45503edba2a64183facb3141ce59803b0f7c8\" returns successfully" Nov 23 23:23:16.347160 kubelet[3455]: I1123 23:23:16.346995 3455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-9nhps" podStartSLOduration=1.520248021 podStartE2EDuration="3.346981693s" podCreationTimestamp="2025-11-23 23:23:13 +0000 UTC" firstStartedPulling="2025-11-23 23:23:13.630207895 +0000 UTC m=+5.627248012" lastFinishedPulling="2025-11-23 23:23:15.456941567 +0000 UTC m=+7.453981684" observedRunningTime="2025-11-23 23:23:16.132548746 +0000 UTC m=+8.129588863" watchObservedRunningTime="2025-11-23 23:23:16.346981693 +0000 UTC m=+8.344021810" Nov 23 23:23:20.622996 sudo[2370]: pam_unix(sudo:session): session closed for user root Nov 23 23:23:20.693916 sshd[2369]: Connection closed by 10.200.16.10 port 48964 Nov 23 23:23:20.694388 sshd-session[2366]: pam_unix(sshd:session): session closed for user core Nov 23 23:23:20.698619 systemd-logind[1871]: Session 9 logged out. Waiting for processes to exit. Nov 23 23:23:20.700721 systemd[1]: sshd@6-10.200.20.43:22-10.200.16.10:48964.service: Deactivated successfully. Nov 23 23:23:20.707014 systemd[1]: session-9.scope: Deactivated successfully. Nov 23 23:23:20.707171 systemd[1]: session-9.scope: Consumed 3.256s CPU time, 222.3M memory peak. Nov 23 23:23:20.710608 systemd-logind[1871]: Removed session 9. Nov 23 23:23:26.543961 systemd[1]: Created slice kubepods-besteffort-pod984ac8e6_bfff_4a27_90f5_0ad59429bc15.slice - libcontainer container kubepods-besteffort-pod984ac8e6_bfff_4a27_90f5_0ad59429bc15.slice. 
Nov 23 23:23:26.598832 kubelet[3455]: I1123 23:23:26.598797 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/984ac8e6-bfff-4a27-90f5-0ad59429bc15-tigera-ca-bundle\") pod \"calico-typha-85dbd66bf4-9qjl7\" (UID: \"984ac8e6-bfff-4a27-90f5-0ad59429bc15\") " pod="calico-system/calico-typha-85dbd66bf4-9qjl7" Nov 23 23:23:26.598832 kubelet[3455]: I1123 23:23:26.598830 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/984ac8e6-bfff-4a27-90f5-0ad59429bc15-typha-certs\") pod \"calico-typha-85dbd66bf4-9qjl7\" (UID: \"984ac8e6-bfff-4a27-90f5-0ad59429bc15\") " pod="calico-system/calico-typha-85dbd66bf4-9qjl7" Nov 23 23:23:26.599147 kubelet[3455]: I1123 23:23:26.598842 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8tjq\" (UniqueName: \"kubernetes.io/projected/984ac8e6-bfff-4a27-90f5-0ad59429bc15-kube-api-access-f8tjq\") pod \"calico-typha-85dbd66bf4-9qjl7\" (UID: \"984ac8e6-bfff-4a27-90f5-0ad59429bc15\") " pod="calico-system/calico-typha-85dbd66bf4-9qjl7" Nov 23 23:23:26.716458 systemd[1]: Created slice kubepods-besteffort-pod00c4c812_1be2_4af8_8852_44ec1c568070.slice - libcontainer container kubepods-besteffort-pod00c4c812_1be2_4af8_8852_44ec1c568070.slice. 
Nov 23 23:23:26.800490 kubelet[3455]: I1123 23:23:26.800391 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/00c4c812-1be2-4af8-8852-44ec1c568070-cni-bin-dir\") pod \"calico-node-jvmbt\" (UID: \"00c4c812-1be2-4af8-8852-44ec1c568070\") " pod="calico-system/calico-node-jvmbt" Nov 23 23:23:26.800490 kubelet[3455]: I1123 23:23:26.800429 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/00c4c812-1be2-4af8-8852-44ec1c568070-cni-net-dir\") pod \"calico-node-jvmbt\" (UID: \"00c4c812-1be2-4af8-8852-44ec1c568070\") " pod="calico-system/calico-node-jvmbt" Nov 23 23:23:26.800490 kubelet[3455]: I1123 23:23:26.800441 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00c4c812-1be2-4af8-8852-44ec1c568070-lib-modules\") pod \"calico-node-jvmbt\" (UID: \"00c4c812-1be2-4af8-8852-44ec1c568070\") " pod="calico-system/calico-node-jvmbt" Nov 23 23:23:26.800490 kubelet[3455]: I1123 23:23:26.800451 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/00c4c812-1be2-4af8-8852-44ec1c568070-flexvol-driver-host\") pod \"calico-node-jvmbt\" (UID: \"00c4c812-1be2-4af8-8852-44ec1c568070\") " pod="calico-system/calico-node-jvmbt" Nov 23 23:23:26.800490 kubelet[3455]: I1123 23:23:26.800463 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/00c4c812-1be2-4af8-8852-44ec1c568070-policysync\") pod \"calico-node-jvmbt\" (UID: \"00c4c812-1be2-4af8-8852-44ec1c568070\") " pod="calico-system/calico-node-jvmbt" Nov 23 23:23:26.800653 kubelet[3455]: I1123 23:23:26.800471 3455 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00c4c812-1be2-4af8-8852-44ec1c568070-tigera-ca-bundle\") pod \"calico-node-jvmbt\" (UID: \"00c4c812-1be2-4af8-8852-44ec1c568070\") " pod="calico-system/calico-node-jvmbt" Nov 23 23:23:26.800857 kubelet[3455]: I1123 23:23:26.800838 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/00c4c812-1be2-4af8-8852-44ec1c568070-var-lib-calico\") pod \"calico-node-jvmbt\" (UID: \"00c4c812-1be2-4af8-8852-44ec1c568070\") " pod="calico-system/calico-node-jvmbt" Nov 23 23:23:26.800902 kubelet[3455]: I1123 23:23:26.800867 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00c4c812-1be2-4af8-8852-44ec1c568070-xtables-lock\") pod \"calico-node-jvmbt\" (UID: \"00c4c812-1be2-4af8-8852-44ec1c568070\") " pod="calico-system/calico-node-jvmbt" Nov 23 23:23:26.800902 kubelet[3455]: I1123 23:23:26.800892 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/00c4c812-1be2-4af8-8852-44ec1c568070-cni-log-dir\") pod \"calico-node-jvmbt\" (UID: \"00c4c812-1be2-4af8-8852-44ec1c568070\") " pod="calico-system/calico-node-jvmbt" Nov 23 23:23:26.800936 kubelet[3455]: I1123 23:23:26.800910 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/00c4c812-1be2-4af8-8852-44ec1c568070-node-certs\") pod \"calico-node-jvmbt\" (UID: \"00c4c812-1be2-4af8-8852-44ec1c568070\") " pod="calico-system/calico-node-jvmbt" Nov 23 23:23:26.800936 kubelet[3455]: I1123 23:23:26.800920 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-5lsdj\" (UniqueName: \"kubernetes.io/projected/00c4c812-1be2-4af8-8852-44ec1c568070-kube-api-access-5lsdj\") pod \"calico-node-jvmbt\" (UID: \"00c4c812-1be2-4af8-8852-44ec1c568070\") " pod="calico-system/calico-node-jvmbt" Nov 23 23:23:26.800936 kubelet[3455]: I1123 23:23:26.800928 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/00c4c812-1be2-4af8-8852-44ec1c568070-var-run-calico\") pod \"calico-node-jvmbt\" (UID: \"00c4c812-1be2-4af8-8852-44ec1c568070\") " pod="calico-system/calico-node-jvmbt" Nov 23 23:23:26.851042 containerd[1892]: time="2025-11-23T23:23:26.850988306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85dbd66bf4-9qjl7,Uid:984ac8e6-bfff-4a27-90f5-0ad59429bc15,Namespace:calico-system,Attempt:0,}" Nov 23 23:23:26.892383 containerd[1892]: time="2025-11-23T23:23:26.890794244Z" level=info msg="connecting to shim 4cef7dea3feed534ae89472a9534c9f3d3eb193e5f8fe0ab76d984dfd2546be4" address="unix:///run/containerd/s/0aa2230a896a7fa61cf4fad49a6f542da7dc99e4365ef85cfe98fc45f354c7be" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:26.904652 kubelet[3455]: E1123 23:23:26.904613 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.904652 kubelet[3455]: W1123 23:23:26.904634 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.904895 kubelet[3455]: E1123 23:23:26.904657 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:26.905819 kubelet[3455]: E1123 23:23:26.905723 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.905819 kubelet[3455]: W1123 23:23:26.905739 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.905819 kubelet[3455]: E1123 23:23:26.905751 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:26.906447 kubelet[3455]: E1123 23:23:26.906359 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.906447 kubelet[3455]: W1123 23:23:26.906374 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.906447 kubelet[3455]: E1123 23:23:26.906386 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:26.907654 kubelet[3455]: E1123 23:23:26.906526 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.907654 kubelet[3455]: W1123 23:23:26.906533 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.907654 kubelet[3455]: E1123 23:23:26.906540 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:26.908456 kubelet[3455]: E1123 23:23:26.908435 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.908456 kubelet[3455]: W1123 23:23:26.908451 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.908545 kubelet[3455]: E1123 23:23:26.908462 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:26.908614 kubelet[3455]: E1123 23:23:26.908600 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.908614 kubelet[3455]: W1123 23:23:26.908611 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.908676 kubelet[3455]: E1123 23:23:26.908618 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:26.914078 kubelet[3455]: E1123 23:23:26.914052 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.914078 kubelet[3455]: W1123 23:23:26.914071 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.914078 kubelet[3455]: E1123 23:23:26.914083 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:26.930203 kubelet[3455]: E1123 23:23:26.930170 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.930203 kubelet[3455]: W1123 23:23:26.930188 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.930203 kubelet[3455]: E1123 23:23:26.930198 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:26.930568 systemd[1]: Started cri-containerd-4cef7dea3feed534ae89472a9534c9f3d3eb193e5f8fe0ab76d984dfd2546be4.scope - libcontainer container 4cef7dea3feed534ae89472a9534c9f3d3eb193e5f8fe0ab76d984dfd2546be4. Nov 23 23:23:26.934712 kubelet[3455]: E1123 23:23:26.934501 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzswh" podUID="dc4e36a9-c245-455d-ada2-16c405b7bde8" Nov 23 23:23:26.977224 containerd[1892]: time="2025-11-23T23:23:26.977035536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85dbd66bf4-9qjl7,Uid:984ac8e6-bfff-4a27-90f5-0ad59429bc15,Namespace:calico-system,Attempt:0,} returns sandbox id \"4cef7dea3feed534ae89472a9534c9f3d3eb193e5f8fe0ab76d984dfd2546be4\"" Nov 23 23:23:26.980572 containerd[1892]: time="2025-11-23T23:23:26.980413350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 23 23:23:26.985510 kubelet[3455]: E1123 23:23:26.985482 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 
23:23:26.985510 kubelet[3455]: W1123 23:23:26.985502 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.985798 kubelet[3455]: E1123 23:23:26.985517 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:26.986092 kubelet[3455]: E1123 23:23:26.986071 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.986154 kubelet[3455]: W1123 23:23:26.986090 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.986154 kubelet[3455]: E1123 23:23:26.986121 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:26.986542 kubelet[3455]: E1123 23:23:26.986257 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.986542 kubelet[3455]: W1123 23:23:26.986263 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.986542 kubelet[3455]: E1123 23:23:26.986270 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:26.986542 kubelet[3455]: E1123 23:23:26.986376 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.986542 kubelet[3455]: W1123 23:23:26.986381 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.986542 kubelet[3455]: E1123 23:23:26.986387 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:26.987022 kubelet[3455]: E1123 23:23:26.986600 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.987022 kubelet[3455]: W1123 23:23:26.986609 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.987022 kubelet[3455]: E1123 23:23:26.986618 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:26.987022 kubelet[3455]: E1123 23:23:26.987008 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.987022 kubelet[3455]: W1123 23:23:26.987019 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.987458 kubelet[3455]: E1123 23:23:26.987029 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:26.987458 kubelet[3455]: E1123 23:23:26.987155 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.987458 kubelet[3455]: W1123 23:23:26.987162 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.987458 kubelet[3455]: E1123 23:23:26.987168 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:26.987667 kubelet[3455]: E1123 23:23:26.987649 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.987667 kubelet[3455]: W1123 23:23:26.987663 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.987893 kubelet[3455]: E1123 23:23:26.987675 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:26.987893 kubelet[3455]: E1123 23:23:26.987805 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.987893 kubelet[3455]: W1123 23:23:26.987811 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.987893 kubelet[3455]: E1123 23:23:26.987818 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:26.988135 kubelet[3455]: E1123 23:23:26.987910 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.988135 kubelet[3455]: W1123 23:23:26.987915 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.988135 kubelet[3455]: E1123 23:23:26.987921 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:26.988135 kubelet[3455]: E1123 23:23:26.987994 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.988135 kubelet[3455]: W1123 23:23:26.987998 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.988135 kubelet[3455]: E1123 23:23:26.988003 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:26.988627 kubelet[3455]: E1123 23:23:26.988347 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.988627 kubelet[3455]: W1123 23:23:26.988356 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.988627 kubelet[3455]: E1123 23:23:26.988366 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:26.988847 kubelet[3455]: E1123 23:23:26.988830 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.988847 kubelet[3455]: W1123 23:23:26.988843 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.988847 kubelet[3455]: E1123 23:23:26.988853 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:26.989338 kubelet[3455]: E1123 23:23:26.989322 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.989338 kubelet[3455]: W1123 23:23:26.989334 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.989525 kubelet[3455]: E1123 23:23:26.989349 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:26.990074 kubelet[3455]: E1123 23:23:26.990057 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.990074 kubelet[3455]: W1123 23:23:26.990070 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.990331 kubelet[3455]: E1123 23:23:26.990081 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:26.990331 kubelet[3455]: E1123 23:23:26.990199 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.990331 kubelet[3455]: W1123 23:23:26.990205 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.990331 kubelet[3455]: E1123 23:23:26.990211 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:26.990331 kubelet[3455]: E1123 23:23:26.990331 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.990331 kubelet[3455]: W1123 23:23:26.990336 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.990331 kubelet[3455]: E1123 23:23:26.990342 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:26.990808 kubelet[3455]: E1123 23:23:26.990426 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.990808 kubelet[3455]: W1123 23:23:26.990431 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.990808 kubelet[3455]: E1123 23:23:26.990436 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:26.990808 kubelet[3455]: E1123 23:23:26.990513 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.990808 kubelet[3455]: W1123 23:23:26.990517 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.990808 kubelet[3455]: E1123 23:23:26.990521 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:26.991179 kubelet[3455]: E1123 23:23:26.991162 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:26.991179 kubelet[3455]: W1123 23:23:26.991174 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:26.991179 kubelet[3455]: E1123 23:23:26.991184 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:27.003013 kubelet[3455]: E1123 23:23:27.002971 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:27.003163 kubelet[3455]: W1123 23:23:27.003097 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:27.003163 kubelet[3455]: E1123 23:23:27.003114 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:27.003163 kubelet[3455]: I1123 23:23:27.003142 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dc4e36a9-c245-455d-ada2-16c405b7bde8-registration-dir\") pod \"csi-node-driver-mzswh\" (UID: \"dc4e36a9-c245-455d-ada2-16c405b7bde8\") " pod="calico-system/csi-node-driver-mzswh" Nov 23 23:23:27.003420 kubelet[3455]: E1123 23:23:27.003398 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:27.003420 kubelet[3455]: W1123 23:23:27.003415 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:27.003496 kubelet[3455]: E1123 23:23:27.003426 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:27.003553 kubelet[3455]: E1123 23:23:27.003540 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:27.003553 kubelet[3455]: W1123 23:23:27.003549 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:27.003601 kubelet[3455]: E1123 23:23:27.003557 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:27.003768 kubelet[3455]: E1123 23:23:27.003719 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:27.003768 kubelet[3455]: W1123 23:23:27.003728 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:27.003768 kubelet[3455]: E1123 23:23:27.003736 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:27.003839 kubelet[3455]: I1123 23:23:27.003778 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptkmc\" (UniqueName: \"kubernetes.io/projected/dc4e36a9-c245-455d-ada2-16c405b7bde8-kube-api-access-ptkmc\") pod \"csi-node-driver-mzswh\" (UID: \"dc4e36a9-c245-455d-ada2-16c405b7bde8\") " pod="calico-system/csi-node-driver-mzswh" Nov 23 23:23:27.003990 kubelet[3455]: E1123 23:23:27.003957 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:27.003990 kubelet[3455]: W1123 23:23:27.003970 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:27.003990 kubelet[3455]: E1123 23:23:27.003990 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:27.004096 kubelet[3455]: I1123 23:23:27.004005 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dc4e36a9-c245-455d-ada2-16c405b7bde8-varrun\") pod \"csi-node-driver-mzswh\" (UID: \"dc4e36a9-c245-455d-ada2-16c405b7bde8\") " pod="calico-system/csi-node-driver-mzswh" Nov 23 23:23:27.004156 kubelet[3455]: E1123 23:23:27.004149 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:27.004179 kubelet[3455]: W1123 23:23:27.004157 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:27.004179 kubelet[3455]: E1123 23:23:27.004165 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:27.004315 kubelet[3455]: I1123 23:23:27.004187 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dc4e36a9-c245-455d-ada2-16c405b7bde8-kubelet-dir\") pod \"csi-node-driver-mzswh\" (UID: \"dc4e36a9-c245-455d-ada2-16c405b7bde8\") " pod="calico-system/csi-node-driver-mzswh" Nov 23 23:23:27.004711 kubelet[3455]: E1123 23:23:27.004692 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:27.004711 kubelet[3455]: W1123 23:23:27.004706 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:27.004789 kubelet[3455]: E1123 23:23:27.004716 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:27.004901 kubelet[3455]: I1123 23:23:27.004882 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dc4e36a9-c245-455d-ada2-16c405b7bde8-socket-dir\") pod \"csi-node-driver-mzswh\" (UID: \"dc4e36a9-c245-455d-ada2-16c405b7bde8\") " pod="calico-system/csi-node-driver-mzswh" Nov 23 23:23:27.005046 kubelet[3455]: E1123 23:23:27.005029 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:27.005046 kubelet[3455]: W1123 23:23:27.005040 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:27.005046 kubelet[3455]: E1123 23:23:27.005049 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:27.005439 kubelet[3455]: E1123 23:23:27.005426 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:27.005439 kubelet[3455]: W1123 23:23:27.005436 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:27.005486 kubelet[3455]: E1123 23:23:27.005449 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:27.006297 kubelet[3455]: E1123 23:23:27.006273 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:27.006297 kubelet[3455]: W1123 23:23:27.006289 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:27.006297 kubelet[3455]: E1123 23:23:27.006299 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:27.006944 kubelet[3455]: E1123 23:23:27.006924 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:27.006944 kubelet[3455]: W1123 23:23:27.006939 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:27.007018 kubelet[3455]: E1123 23:23:27.006950 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:27.007619 kubelet[3455]: E1123 23:23:27.007510 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:27.007619 kubelet[3455]: W1123 23:23:27.007619 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:27.007698 kubelet[3455]: E1123 23:23:27.007633 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:27.008407 kubelet[3455]: E1123 23:23:27.008385 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:27.008407 kubelet[3455]: W1123 23:23:27.008400 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:27.008407 kubelet[3455]: E1123 23:23:27.008410 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:27.008672 kubelet[3455]: E1123 23:23:27.008655 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:27.008672 kubelet[3455]: W1123 23:23:27.008668 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:27.008733 kubelet[3455]: E1123 23:23:27.008678 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:27.009312 kubelet[3455]: E1123 23:23:27.009289 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:27.009312 kubelet[3455]: W1123 23:23:27.009307 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:27.009312 kubelet[3455]: E1123 23:23:27.009317 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:27.022647 containerd[1892]: time="2025-11-23T23:23:27.022607294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jvmbt,Uid:00c4c812-1be2-4af8-8852-44ec1c568070,Namespace:calico-system,Attempt:0,}" Nov 23 23:23:27.062267 containerd[1892]: time="2025-11-23T23:23:27.061385174Z" level=info msg="connecting to shim 31929bb4c0575490d0d9230d083a65d04febdd4887420df5d548e7b96c856ea5" address="unix:///run/containerd/s/f3e113ecbb7ae136ab2f91580021ae5c48e60cb7475b9a0b5a87e96dacbb114a" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:27.085368 systemd[1]: Started cri-containerd-31929bb4c0575490d0d9230d083a65d04febdd4887420df5d548e7b96c856ea5.scope - libcontainer container 31929bb4c0575490d0d9230d083a65d04febdd4887420df5d548e7b96c856ea5. Nov 23 23:23:27.105950 containerd[1892]: time="2025-11-23T23:23:27.105911330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jvmbt,Uid:00c4c812-1be2-4af8-8852-44ec1c568070,Namespace:calico-system,Attempt:0,} returns sandbox id \"31929bb4c0575490d0d9230d083a65d04febdd4887420df5d548e7b96c856ea5\"" Nov 23 23:23:27.106221 kubelet[3455]: E1123 23:23:27.105925 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:27.106221 kubelet[3455]: W1123 23:23:27.106044 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:27.106221 kubelet[3455]: E1123 23:23:27.106062 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:27.125409 kubelet[3455]: E1123 23:23:27.125301 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:27.125409 kubelet[3455]: W1123 23:23:27.125314 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:27.125409 kubelet[3455]: E1123 23:23:27.125326 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:28.025918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3486506926.mount: Deactivated successfully. Nov 23 23:23:28.483803 containerd[1892]: time="2025-11-23T23:23:28.483648883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:28.485985 containerd[1892]: time="2025-11-23T23:23:28.485943470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 23 23:23:28.488469 containerd[1892]: time="2025-11-23T23:23:28.488429431Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:28.491461 containerd[1892]: time="2025-11-23T23:23:28.491418201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:28.491958 containerd[1892]: time="2025-11-23T23:23:28.491673161Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id 
\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.51123041s" Nov 23 23:23:28.491958 containerd[1892]: time="2025-11-23T23:23:28.491697762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 23 23:23:28.492450 containerd[1892]: time="2025-11-23T23:23:28.492429354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 23 23:23:28.506369 containerd[1892]: time="2025-11-23T23:23:28.506339927Z" level=info msg="CreateContainer within sandbox \"4cef7dea3feed534ae89472a9534c9f3d3eb193e5f8fe0ab76d984dfd2546be4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 23 23:23:28.522097 containerd[1892]: time="2025-11-23T23:23:28.522000958Z" level=info msg="Container db34e03477f0f028172c6a392d3b4909001b61c8f5e84654cb6e9fcb52fa7642: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:23:28.536156 containerd[1892]: time="2025-11-23T23:23:28.536121370Z" level=info msg="CreateContainer within sandbox \"4cef7dea3feed534ae89472a9534c9f3d3eb193e5f8fe0ab76d984dfd2546be4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"db34e03477f0f028172c6a392d3b4909001b61c8f5e84654cb6e9fcb52fa7642\"" Nov 23 23:23:28.537753 containerd[1892]: time="2025-11-23T23:23:28.537725398Z" level=info msg="StartContainer for \"db34e03477f0f028172c6a392d3b4909001b61c8f5e84654cb6e9fcb52fa7642\"" Nov 23 23:23:28.539237 containerd[1892]: time="2025-11-23T23:23:28.539206775Z" level=info msg="connecting to shim db34e03477f0f028172c6a392d3b4909001b61c8f5e84654cb6e9fcb52fa7642" address="unix:///run/containerd/s/0aa2230a896a7fa61cf4fad49a6f542da7dc99e4365ef85cfe98fc45f354c7be" protocol=ttrpc version=3 Nov 23 
23:23:28.559356 systemd[1]: Started cri-containerd-db34e03477f0f028172c6a392d3b4909001b61c8f5e84654cb6e9fcb52fa7642.scope - libcontainer container db34e03477f0f028172c6a392d3b4909001b61c8f5e84654cb6e9fcb52fa7642. Nov 23 23:23:28.591754 containerd[1892]: time="2025-11-23T23:23:28.591727383Z" level=info msg="StartContainer for \"db34e03477f0f028172c6a392d3b4909001b61c8f5e84654cb6e9fcb52fa7642\" returns successfully" Nov 23 23:23:29.093679 kubelet[3455]: E1123 23:23:29.093632 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzswh" podUID="dc4e36a9-c245-455d-ada2-16c405b7bde8" Nov 23 23:23:29.161811 kubelet[3455]: I1123 23:23:29.161741 3455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-85dbd66bf4-9qjl7" podStartSLOduration=1.6489145139999999 podStartE2EDuration="3.161719208s" podCreationTimestamp="2025-11-23 23:23:26 +0000 UTC" firstStartedPulling="2025-11-23 23:23:26.979419357 +0000 UTC m=+18.976459474" lastFinishedPulling="2025-11-23 23:23:28.492224051 +0000 UTC m=+20.489264168" observedRunningTime="2025-11-23 23:23:29.160908982 +0000 UTC m=+21.157949107" watchObservedRunningTime="2025-11-23 23:23:29.161719208 +0000 UTC m=+21.158759325" Nov 23 23:23:29.203267 kubelet[3455]: E1123 23:23:29.203229 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:29.203267 kubelet[3455]: W1123 23:23:29.203253 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:29.203267 kubelet[3455]: E1123 23:23:29.203272 3455 plugins.go:703] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:29.203411 kubelet[3455]: E1123 23:23:29.203395 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:29.203411 kubelet[3455]: W1123 23:23:29.203405 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:29.203491 kubelet[3455]: E1123 23:23:29.203416 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:29.203532 kubelet[3455]: E1123 23:23:29.203520 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:29.203532 kubelet[3455]: W1123 23:23:29.203527 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:29.203532 kubelet[3455]: E1123 23:23:29.203533 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:29.203657 kubelet[3455]: E1123 23:23:29.203645 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:29.203657 kubelet[3455]: W1123 23:23:29.203653 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:29.203657 kubelet[3455]: E1123 23:23:29.203659 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:29.203767 kubelet[3455]: E1123 23:23:29.203755 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:29.203767 kubelet[3455]: W1123 23:23:29.203763 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:29.203767 kubelet[3455]: E1123 23:23:29.203768 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:29.203857 kubelet[3455]: E1123 23:23:29.203849 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:29.203857 kubelet[3455]: W1123 23:23:29.203853 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:29.203925 kubelet[3455]: E1123 23:23:29.203858 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:29.203957 kubelet[3455]: E1123 23:23:29.203947 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:29.203957 kubelet[3455]: W1123 23:23:29.203951 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:29.203957 kubelet[3455]: E1123 23:23:29.203957 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:29.204079 kubelet[3455]: E1123 23:23:29.204050 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:29.204079 kubelet[3455]: W1123 23:23:29.204057 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:29.204079 kubelet[3455]: E1123 23:23:29.204063 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:29.204215 kubelet[3455]: E1123 23:23:29.204202 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:29.204215 kubelet[3455]: W1123 23:23:29.204212 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:29.204287 kubelet[3455]: E1123 23:23:29.204219 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:29.204351 kubelet[3455]: E1123 23:23:29.204338 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:29.204351 kubelet[3455]: W1123 23:23:29.204346 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:29.204395 kubelet[3455]: E1123 23:23:29.204352 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:29.204456 kubelet[3455]: E1123 23:23:29.204442 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:29.204456 kubelet[3455]: W1123 23:23:29.204451 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:29.204511 kubelet[3455]: E1123 23:23:29.204463 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:29.204565 kubelet[3455]: E1123 23:23:29.204552 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:29.204565 kubelet[3455]: W1123 23:23:29.204560 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:29.204565 kubelet[3455]: E1123 23:23:29.204565 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:29.204673 kubelet[3455]: E1123 23:23:29.204658 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:29.204673 kubelet[3455]: W1123 23:23:29.204667 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:29.204673 kubelet[3455]: E1123 23:23:29.204672 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:29.204770 kubelet[3455]: E1123 23:23:29.204758 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:29.204770 kubelet[3455]: W1123 23:23:29.204765 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:29.204770 kubelet[3455]: E1123 23:23:29.204770 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:29.204879 kubelet[3455]: E1123 23:23:29.204868 3455 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:29.204879 kubelet[3455]: W1123 23:23:29.204875 3455 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:29.204915 kubelet[3455]: E1123 23:23:29.204880 3455 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:29.571783 containerd[1892]: time="2025-11-23T23:23:29.571724257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:29.574114 containerd[1892]: time="2025-11-23T23:23:29.573994427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 23 23:23:29.576827 containerd[1892]: time="2025-11-23T23:23:29.576797310Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:29.580183 containerd[1892]: time="2025-11-23T23:23:29.580147084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:29.580707 containerd[1892]: time="2025-11-23T23:23:29.580418548Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.087887783s" Nov 23 23:23:29.580707 containerd[1892]: time="2025-11-23T23:23:29.580447269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 23 23:23:29.586054 containerd[1892]: time="2025-11-23T23:23:29.586022995Z" level=info msg="CreateContainer within sandbox \"31929bb4c0575490d0d9230d083a65d04febdd4887420df5d548e7b96c856ea5\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 23 23:23:29.603908 containerd[1892]: time="2025-11-23T23:23:29.603882026Z" level=info msg="Container e2c0162f4aa6556ec6ced79e13e9dad788194fee71202b242dc6632821578afe: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:23:29.605817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount693637932.mount: Deactivated successfully. Nov 23 23:23:29.625542 containerd[1892]: time="2025-11-23T23:23:29.625510139Z" level=info msg="CreateContainer within sandbox \"31929bb4c0575490d0d9230d083a65d04febdd4887420df5d548e7b96c856ea5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e2c0162f4aa6556ec6ced79e13e9dad788194fee71202b242dc6632821578afe\"" Nov 23 23:23:29.626069 containerd[1892]: time="2025-11-23T23:23:29.626038772Z" level=info msg="StartContainer for \"e2c0162f4aa6556ec6ced79e13e9dad788194fee71202b242dc6632821578afe\"" Nov 23 23:23:29.634315 containerd[1892]: time="2025-11-23T23:23:29.634291209Z" level=info msg="connecting to shim e2c0162f4aa6556ec6ced79e13e9dad788194fee71202b242dc6632821578afe" address="unix:///run/containerd/s/f3e113ecbb7ae136ab2f91580021ae5c48e60cb7475b9a0b5a87e96dacbb114a" protocol=ttrpc version=3 Nov 23 23:23:29.655358 systemd[1]: Started cri-containerd-e2c0162f4aa6556ec6ced79e13e9dad788194fee71202b242dc6632821578afe.scope - libcontainer container e2c0162f4aa6556ec6ced79e13e9dad788194fee71202b242dc6632821578afe. Nov 23 23:23:29.712043 containerd[1892]: time="2025-11-23T23:23:29.711982030Z" level=info msg="StartContainer for \"e2c0162f4aa6556ec6ced79e13e9dad788194fee71202b242dc6632821578afe\" returns successfully" Nov 23 23:23:29.721804 systemd[1]: cri-containerd-e2c0162f4aa6556ec6ced79e13e9dad788194fee71202b242dc6632821578afe.scope: Deactivated successfully. 
Nov 23 23:23:29.726352 containerd[1892]: time="2025-11-23T23:23:29.726326818Z" level=info msg="received container exit event container_id:\"e2c0162f4aa6556ec6ced79e13e9dad788194fee71202b242dc6632821578afe\" id:\"e2c0162f4aa6556ec6ced79e13e9dad788194fee71202b242dc6632821578afe\" pid:4135 exited_at:{seconds:1763940209 nanos:726016928}" Nov 23 23:23:29.741142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2c0162f4aa6556ec6ced79e13e9dad788194fee71202b242dc6632821578afe-rootfs.mount: Deactivated successfully. Nov 23 23:23:30.153034 kubelet[3455]: I1123 23:23:30.152745 3455 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 23:23:31.093065 kubelet[3455]: E1123 23:23:31.093021 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzswh" podUID="dc4e36a9-c245-455d-ada2-16c405b7bde8" Nov 23 23:23:31.157773 containerd[1892]: time="2025-11-23T23:23:31.157738209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 23 23:23:33.093003 kubelet[3455]: E1123 23:23:33.092949 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mzswh" podUID="dc4e36a9-c245-455d-ada2-16c405b7bde8" Nov 23 23:23:33.244409 containerd[1892]: time="2025-11-23T23:23:33.244366966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:33.247164 containerd[1892]: time="2025-11-23T23:23:33.247136888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 23 
23:23:33.249351 containerd[1892]: time="2025-11-23T23:23:33.249305006Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:33.252311 containerd[1892]: time="2025-11-23T23:23:33.252273006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:33.252696 containerd[1892]: time="2025-11-23T23:23:33.252557767Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.094787109s" Nov 23 23:23:33.252696 containerd[1892]: time="2025-11-23T23:23:33.252583520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 23 23:23:33.259627 containerd[1892]: time="2025-11-23T23:23:33.259598459Z" level=info msg="CreateContainer within sandbox \"31929bb4c0575490d0d9230d083a65d04febdd4887420df5d548e7b96c856ea5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 23 23:23:33.278107 containerd[1892]: time="2025-11-23T23:23:33.278076017Z" level=info msg="Container 2b895b0f4458cab53fd81adce2f80f421586fdf4c234a1db53ba4f3534687f87: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:23:33.292825 containerd[1892]: time="2025-11-23T23:23:33.292794582Z" level=info msg="CreateContainer within sandbox \"31929bb4c0575490d0d9230d083a65d04febdd4887420df5d548e7b96c856ea5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"2b895b0f4458cab53fd81adce2f80f421586fdf4c234a1db53ba4f3534687f87\"" Nov 23 23:23:33.293405 containerd[1892]: time="2025-11-23T23:23:33.293382097Z" level=info msg="StartContainer for \"2b895b0f4458cab53fd81adce2f80f421586fdf4c234a1db53ba4f3534687f87\"" Nov 23 23:23:33.294411 containerd[1892]: time="2025-11-23T23:23:33.294388002Z" level=info msg="connecting to shim 2b895b0f4458cab53fd81adce2f80f421586fdf4c234a1db53ba4f3534687f87" address="unix:///run/containerd/s/f3e113ecbb7ae136ab2f91580021ae5c48e60cb7475b9a0b5a87e96dacbb114a" protocol=ttrpc version=3 Nov 23 23:23:33.318361 systemd[1]: Started cri-containerd-2b895b0f4458cab53fd81adce2f80f421586fdf4c234a1db53ba4f3534687f87.scope - libcontainer container 2b895b0f4458cab53fd81adce2f80f421586fdf4c234a1db53ba4f3534687f87. Nov 23 23:23:33.383787 containerd[1892]: time="2025-11-23T23:23:33.383650508Z" level=info msg="StartContainer for \"2b895b0f4458cab53fd81adce2f80f421586fdf4c234a1db53ba4f3534687f87\" returns successfully" Nov 23 23:23:34.576219 containerd[1892]: time="2025-11-23T23:23:34.576165761Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 23:23:34.580357 systemd[1]: cri-containerd-2b895b0f4458cab53fd81adce2f80f421586fdf4c234a1db53ba4f3534687f87.scope: Deactivated successfully. Nov 23 23:23:34.580589 systemd[1]: cri-containerd-2b895b0f4458cab53fd81adce2f80f421586fdf4c234a1db53ba4f3534687f87.scope: Consumed 305ms CPU time, 184.9M memory peak, 165.9M written to disk. 
Nov 23 23:23:34.582976 containerd[1892]: time="2025-11-23T23:23:34.582854786Z" level=info msg="received container exit event container_id:\"2b895b0f4458cab53fd81adce2f80f421586fdf4c234a1db53ba4f3534687f87\" id:\"2b895b0f4458cab53fd81adce2f80f421586fdf4c234a1db53ba4f3534687f87\" pid:4197 exited_at:{seconds:1763940214 nanos:582667964}" Nov 23 23:23:34.599006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b895b0f4458cab53fd81adce2f80f421586fdf4c234a1db53ba4f3534687f87-rootfs.mount: Deactivated successfully. Nov 23 23:23:34.635295 kubelet[3455]: I1123 23:23:34.633694 3455 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 23 23:23:35.479998 systemd[1]: Created slice kubepods-burstable-pod6b2f94be_84e3_478d_b100_8fe53aa25912.slice - libcontainer container kubepods-burstable-pod6b2f94be_84e3_478d_b100_8fe53aa25912.slice. Nov 23 23:23:35.495667 systemd[1]: Created slice kubepods-besteffort-pod15476924_84af_4e25_8a82_221156412f5f.slice - libcontainer container kubepods-besteffort-pod15476924_84af_4e25_8a82_221156412f5f.slice. Nov 23 23:23:35.503443 systemd[1]: Created slice kubepods-burstable-podcd7fa0b5_bc3c_4086_bd6e_a9ce06dcf4b9.slice - libcontainer container kubepods-burstable-podcd7fa0b5_bc3c_4086_bd6e_a9ce06dcf4b9.slice. Nov 23 23:23:35.510532 systemd[1]: Created slice kubepods-besteffort-poddc4e36a9_c245_455d_ada2_16c405b7bde8.slice - libcontainer container kubepods-besteffort-poddc4e36a9_c245_455d_ada2_16c405b7bde8.slice. Nov 23 23:23:35.515727 containerd[1892]: time="2025-11-23T23:23:35.515509672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mzswh,Uid:dc4e36a9-c245-455d-ada2-16c405b7bde8,Namespace:calico-system,Attempt:0,}" Nov 23 23:23:35.520455 systemd[1]: Created slice kubepods-besteffort-pod7afbc6db_85a8_4a42_8680_e685d44be238.slice - libcontainer container kubepods-besteffort-pod7afbc6db_85a8_4a42_8680_e685d44be238.slice. 
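The slice names systemd reports above follow a fixed pattern: "kubepods", the pod's QoS class, then the pod UID with dashes swapped for underscores, since "-" separates hierarchy levels in systemd slice names. A sketch of that mapping, checked against a UID from the log (the helper name is mine, not the kubelet's):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the unit naming visible in the log: dashes
// in the pod UID become underscores because "-" denotes parent/child
// structure in systemd slice unit names.
func podSliceName(qosClass, podUID string) string {
	return "kubepods-" + qosClass + "-pod" +
		strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(podSliceName("burstable", "6b2f94be-84e3-478d-b100-8fe53aa25912"))
	// kubepods-burstable-pod6b2f94be_84e3_478d_b100_8fe53aa25912.slice
}
```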
Nov 23 23:23:35.531891 systemd[1]: Created slice kubepods-besteffort-pod99d699bb_d1ee_4f49_ad11_d74927648299.slice - libcontainer container kubepods-besteffort-pod99d699bb_d1ee_4f49_ad11_d74927648299.slice. Nov 23 23:23:35.549725 systemd[1]: Created slice kubepods-besteffort-podf59281aa_b935_4d43_8373_69a621420431.slice - libcontainer container kubepods-besteffort-podf59281aa_b935_4d43_8373_69a621420431.slice. Nov 23 23:23:35.556501 systemd[1]: Created slice kubepods-besteffort-podbe1e52dc_0aab_46fe_876a_12be408713eb.slice - libcontainer container kubepods-besteffort-podbe1e52dc_0aab_46fe_876a_12be408713eb.slice. Nov 23 23:23:35.569978 kubelet[3455]: I1123 23:23:35.569944 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvr7k\" (UniqueName: \"kubernetes.io/projected/99d699bb-d1ee-4f49-ad11-d74927648299-kube-api-access-gvr7k\") pod \"whisker-949496b99-r5rdh\" (UID: \"99d699bb-d1ee-4f49-ad11-d74927648299\") " pod="calico-system/whisker-949496b99-r5rdh" Nov 23 23:23:35.570059 kubelet[3455]: I1123 23:23:35.569991 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7hn2\" (UniqueName: \"kubernetes.io/projected/cd7fa0b5-bc3c-4086-bd6e-a9ce06dcf4b9-kube-api-access-g7hn2\") pod \"coredns-674b8bbfcf-87pfq\" (UID: \"cd7fa0b5-bc3c-4086-bd6e-a9ce06dcf4b9\") " pod="kube-system/coredns-674b8bbfcf-87pfq" Nov 23 23:23:35.570059 kubelet[3455]: I1123 23:23:35.570006 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be1e52dc-0aab-46fe-876a-12be408713eb-goldmane-ca-bundle\") pod \"goldmane-666569f655-hdfkh\" (UID: \"be1e52dc-0aab-46fe-876a-12be408713eb\") " pod="calico-system/goldmane-666569f655-hdfkh" Nov 23 23:23:35.570059 kubelet[3455]: I1123 23:23:35.570017 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-r77bk\" (UniqueName: \"kubernetes.io/projected/15476924-84af-4e25-8a82-221156412f5f-kube-api-access-r77bk\") pod \"calico-kube-controllers-56586bc88d-xbb5n\" (UID: \"15476924-84af-4e25-8a82-221156412f5f\") " pod="calico-system/calico-kube-controllers-56586bc88d-xbb5n" Nov 23 23:23:35.570059 kubelet[3455]: I1123 23:23:35.570028 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99d699bb-d1ee-4f49-ad11-d74927648299-whisker-ca-bundle\") pod \"whisker-949496b99-r5rdh\" (UID: \"99d699bb-d1ee-4f49-ad11-d74927648299\") " pod="calico-system/whisker-949496b99-r5rdh" Nov 23 23:23:35.570059 kubelet[3455]: I1123 23:23:35.570038 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrsrj\" (UniqueName: \"kubernetes.io/projected/7afbc6db-85a8-4a42-8680-e685d44be238-kube-api-access-jrsrj\") pod \"calico-apiserver-5d4f684bcb-cgczn\" (UID: \"7afbc6db-85a8-4a42-8680-e685d44be238\") " pod="calico-apiserver/calico-apiserver-5d4f684bcb-cgczn" Nov 23 23:23:35.570156 kubelet[3455]: I1123 23:23:35.570068 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd7fa0b5-bc3c-4086-bd6e-a9ce06dcf4b9-config-volume\") pod \"coredns-674b8bbfcf-87pfq\" (UID: \"cd7fa0b5-bc3c-4086-bd6e-a9ce06dcf4b9\") " pod="kube-system/coredns-674b8bbfcf-87pfq" Nov 23 23:23:35.570156 kubelet[3455]: I1123 23:23:35.570078 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6qkc\" (UniqueName: \"kubernetes.io/projected/6b2f94be-84e3-478d-b100-8fe53aa25912-kube-api-access-v6qkc\") pod \"coredns-674b8bbfcf-dhkjl\" (UID: \"6b2f94be-84e3-478d-b100-8fe53aa25912\") " pod="kube-system/coredns-674b8bbfcf-dhkjl" Nov 23 23:23:35.570156 kubelet[3455]: 
I1123 23:23:35.570089 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7afbc6db-85a8-4a42-8680-e685d44be238-calico-apiserver-certs\") pod \"calico-apiserver-5d4f684bcb-cgczn\" (UID: \"7afbc6db-85a8-4a42-8680-e685d44be238\") " pod="calico-apiserver/calico-apiserver-5d4f684bcb-cgczn" Nov 23 23:23:35.570156 kubelet[3455]: I1123 23:23:35.570097 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlrtt\" (UniqueName: \"kubernetes.io/projected/f59281aa-b935-4d43-8373-69a621420431-kube-api-access-hlrtt\") pod \"calico-apiserver-5d4f684bcb-7dv89\" (UID: \"f59281aa-b935-4d43-8373-69a621420431\") " pod="calico-apiserver/calico-apiserver-5d4f684bcb-7dv89" Nov 23 23:23:35.570156 kubelet[3455]: I1123 23:23:35.570109 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/99d699bb-d1ee-4f49-ad11-d74927648299-whisker-backend-key-pair\") pod \"whisker-949496b99-r5rdh\" (UID: \"99d699bb-d1ee-4f49-ad11-d74927648299\") " pod="calico-system/whisker-949496b99-r5rdh" Nov 23 23:23:35.570231 kubelet[3455]: I1123 23:23:35.570120 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be1e52dc-0aab-46fe-876a-12be408713eb-config\") pod \"goldmane-666569f655-hdfkh\" (UID: \"be1e52dc-0aab-46fe-876a-12be408713eb\") " pod="calico-system/goldmane-666569f655-hdfkh" Nov 23 23:23:35.570231 kubelet[3455]: I1123 23:23:35.570130 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/be1e52dc-0aab-46fe-876a-12be408713eb-goldmane-key-pair\") pod \"goldmane-666569f655-hdfkh\" (UID: \"be1e52dc-0aab-46fe-876a-12be408713eb\") " 
pod="calico-system/goldmane-666569f655-hdfkh" Nov 23 23:23:35.570231 kubelet[3455]: I1123 23:23:35.570141 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15476924-84af-4e25-8a82-221156412f5f-tigera-ca-bundle\") pod \"calico-kube-controllers-56586bc88d-xbb5n\" (UID: \"15476924-84af-4e25-8a82-221156412f5f\") " pod="calico-system/calico-kube-controllers-56586bc88d-xbb5n" Nov 23 23:23:35.570231 kubelet[3455]: I1123 23:23:35.570151 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b2f94be-84e3-478d-b100-8fe53aa25912-config-volume\") pod \"coredns-674b8bbfcf-dhkjl\" (UID: \"6b2f94be-84e3-478d-b100-8fe53aa25912\") " pod="kube-system/coredns-674b8bbfcf-dhkjl" Nov 23 23:23:35.570231 kubelet[3455]: I1123 23:23:35.570160 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f59281aa-b935-4d43-8373-69a621420431-calico-apiserver-certs\") pod \"calico-apiserver-5d4f684bcb-7dv89\" (UID: \"f59281aa-b935-4d43-8373-69a621420431\") " pod="calico-apiserver/calico-apiserver-5d4f684bcb-7dv89" Nov 23 23:23:35.570335 kubelet[3455]: I1123 23:23:35.570173 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf8jh\" (UniqueName: \"kubernetes.io/projected/be1e52dc-0aab-46fe-876a-12be408713eb-kube-api-access-kf8jh\") pod \"goldmane-666569f655-hdfkh\" (UID: \"be1e52dc-0aab-46fe-876a-12be408713eb\") " pod="calico-system/goldmane-666569f655-hdfkh" Nov 23 23:23:35.584751 containerd[1892]: time="2025-11-23T23:23:35.584714217Z" level=error msg="Failed to destroy network for sandbox \"b428025c67f135a042463da3fcf1704bacc0d06c3d21985082b3929664f2f2cf\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.586121 systemd[1]: run-netns-cni\x2d5edc065e\x2d565c\x2daffa\x2d16e0\x2dda88beb288ca.mount: Deactivated successfully. Nov 23 23:23:35.589828 containerd[1892]: time="2025-11-23T23:23:35.589794158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mzswh,Uid:dc4e36a9-c245-455d-ada2-16c405b7bde8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b428025c67f135a042463da3fcf1704bacc0d06c3d21985082b3929664f2f2cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.590012 kubelet[3455]: E1123 23:23:35.589965 3455 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b428025c67f135a042463da3fcf1704bacc0d06c3d21985082b3929664f2f2cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.590073 kubelet[3455]: E1123 23:23:35.590029 3455 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b428025c67f135a042463da3fcf1704bacc0d06c3d21985082b3929664f2f2cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mzswh" Nov 23 23:23:35.590073 kubelet[3455]: E1123 23:23:35.590043 3455 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b428025c67f135a042463da3fcf1704bacc0d06c3d21985082b3929664f2f2cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mzswh" Nov 23 23:23:35.590120 kubelet[3455]: E1123 23:23:35.590077 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mzswh_calico-system(dc4e36a9-c245-455d-ada2-16c405b7bde8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mzswh_calico-system(dc4e36a9-c245-455d-ada2-16c405b7bde8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b428025c67f135a042463da3fcf1704bacc0d06c3d21985082b3929664f2f2cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mzswh" podUID="dc4e36a9-c245-455d-ada2-16c405b7bde8" Nov 23 23:23:35.783596 containerd[1892]: time="2025-11-23T23:23:35.783301511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dhkjl,Uid:6b2f94be-84e3-478d-b100-8fe53aa25912,Namespace:kube-system,Attempt:0,}" Nov 23 23:23:35.800316 containerd[1892]: time="2025-11-23T23:23:35.800285997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56586bc88d-xbb5n,Uid:15476924-84af-4e25-8a82-221156412f5f,Namespace:calico-system,Attempt:0,}" Nov 23 23:23:35.810374 containerd[1892]: time="2025-11-23T23:23:35.810348379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-87pfq,Uid:cd7fa0b5-bc3c-4086-bd6e-a9ce06dcf4b9,Namespace:kube-system,Attempt:0,}" Nov 23 23:23:35.824337 containerd[1892]: time="2025-11-23T23:23:35.824306087Z" level=error msg="Failed to destroy network for sandbox 
\"ee9cac05c066967747941ea64e3e45b3898ce6dac29fe74af7afd3664e258d4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.829651 containerd[1892]: time="2025-11-23T23:23:35.829602779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dhkjl,Uid:6b2f94be-84e3-478d-b100-8fe53aa25912,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee9cac05c066967747941ea64e3e45b3898ce6dac29fe74af7afd3664e258d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.829869 containerd[1892]: time="2025-11-23T23:23:35.829853171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4f684bcb-cgczn,Uid:7afbc6db-85a8-4a42-8680-e685d44be238,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:23:35.830056 kubelet[3455]: E1123 23:23:35.830005 3455 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee9cac05c066967747941ea64e3e45b3898ce6dac29fe74af7afd3664e258d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.830056 kubelet[3455]: E1123 23:23:35.830053 3455 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee9cac05c066967747941ea64e3e45b3898ce6dac29fe74af7afd3664e258d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-dhkjl" Nov 23 23:23:35.830800 kubelet[3455]: E1123 23:23:35.830068 3455 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee9cac05c066967747941ea64e3e45b3898ce6dac29fe74af7afd3664e258d4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dhkjl" Nov 23 23:23:35.830800 kubelet[3455]: E1123 23:23:35.830109 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dhkjl_kube-system(6b2f94be-84e3-478d-b100-8fe53aa25912)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dhkjl_kube-system(6b2f94be-84e3-478d-b100-8fe53aa25912)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee9cac05c066967747941ea64e3e45b3898ce6dac29fe74af7afd3664e258d4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dhkjl" podUID="6b2f94be-84e3-478d-b100-8fe53aa25912" Nov 23 23:23:35.838800 containerd[1892]: time="2025-11-23T23:23:35.838770467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-949496b99-r5rdh,Uid:99d699bb-d1ee-4f49-ad11-d74927648299,Namespace:calico-system,Attempt:0,}" Nov 23 23:23:35.853803 containerd[1892]: time="2025-11-23T23:23:35.853759425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4f684bcb-7dv89,Uid:f59281aa-b935-4d43-8373-69a621420431,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:23:35.862935 containerd[1892]: time="2025-11-23T23:23:35.862903641Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-hdfkh,Uid:be1e52dc-0aab-46fe-876a-12be408713eb,Namespace:calico-system,Attempt:0,}" Nov 23 23:23:35.863736 containerd[1892]: time="2025-11-23T23:23:35.863573807Z" level=error msg="Failed to destroy network for sandbox \"eec5ad4552c178955f73ce20981a390e1e0657bdace5b10e7a644f00fd2b008d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.879656 containerd[1892]: time="2025-11-23T23:23:35.879618094Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56586bc88d-xbb5n,Uid:15476924-84af-4e25-8a82-221156412f5f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eec5ad4552c178955f73ce20981a390e1e0657bdace5b10e7a644f00fd2b008d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.879960 kubelet[3455]: E1123 23:23:35.879920 3455 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eec5ad4552c178955f73ce20981a390e1e0657bdace5b10e7a644f00fd2b008d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.880181 kubelet[3455]: E1123 23:23:35.880096 3455 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eec5ad4552c178955f73ce20981a390e1e0657bdace5b10e7a644f00fd2b008d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-56586bc88d-xbb5n" Nov 23 23:23:35.880181 kubelet[3455]: E1123 23:23:35.880132 3455 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eec5ad4552c178955f73ce20981a390e1e0657bdace5b10e7a644f00fd2b008d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56586bc88d-xbb5n" Nov 23 23:23:35.880528 kubelet[3455]: E1123 23:23:35.880293 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-56586bc88d-xbb5n_calico-system(15476924-84af-4e25-8a82-221156412f5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-56586bc88d-xbb5n_calico-system(15476924-84af-4e25-8a82-221156412f5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eec5ad4552c178955f73ce20981a390e1e0657bdace5b10e7a644f00fd2b008d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56586bc88d-xbb5n" podUID="15476924-84af-4e25-8a82-221156412f5f" Nov 23 23:23:35.898791 containerd[1892]: time="2025-11-23T23:23:35.898767546Z" level=error msg="Failed to destroy network for sandbox \"c2a3e503f355a40345095b3d2eeaff0c7e2f1480913e656be38981c31f6472b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.903806 containerd[1892]: time="2025-11-23T23:23:35.903733123Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-87pfq,Uid:cd7fa0b5-bc3c-4086-bd6e-a9ce06dcf4b9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a3e503f355a40345095b3d2eeaff0c7e2f1480913e656be38981c31f6472b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.904045 kubelet[3455]: E1123 23:23:35.903964 3455 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a3e503f355a40345095b3d2eeaff0c7e2f1480913e656be38981c31f6472b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.904045 kubelet[3455]: E1123 23:23:35.904000 3455 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a3e503f355a40345095b3d2eeaff0c7e2f1480913e656be38981c31f6472b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-87pfq" Nov 23 23:23:35.904045 kubelet[3455]: E1123 23:23:35.904013 3455 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a3e503f355a40345095b3d2eeaff0c7e2f1480913e656be38981c31f6472b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-87pfq" Nov 23 23:23:35.904216 kubelet[3455]: E1123 23:23:35.904165 3455 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-87pfq_kube-system(cd7fa0b5-bc3c-4086-bd6e-a9ce06dcf4b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-87pfq_kube-system(cd7fa0b5-bc3c-4086-bd6e-a9ce06dcf4b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2a3e503f355a40345095b3d2eeaff0c7e2f1480913e656be38981c31f6472b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-87pfq" podUID="cd7fa0b5-bc3c-4086-bd6e-a9ce06dcf4b9" Nov 23 23:23:35.925391 containerd[1892]: time="2025-11-23T23:23:35.925356831Z" level=error msg="Failed to destroy network for sandbox \"2bdc2ec111f152279cd5665c30f9c8aa765ab8d89281761c38651f9c729a6d37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.930916 containerd[1892]: time="2025-11-23T23:23:35.930889882Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4f684bcb-cgczn,Uid:7afbc6db-85a8-4a42-8680-e685d44be238,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bdc2ec111f152279cd5665c30f9c8aa765ab8d89281761c38651f9c729a6d37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.931201 kubelet[3455]: E1123 23:23:35.931171 3455 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bdc2ec111f152279cd5665c30f9c8aa765ab8d89281761c38651f9c729a6d37\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.931285 kubelet[3455]: E1123 23:23:35.931207 3455 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bdc2ec111f152279cd5665c30f9c8aa765ab8d89281761c38651f9c729a6d37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d4f684bcb-cgczn" Nov 23 23:23:35.931285 kubelet[3455]: E1123 23:23:35.931220 3455 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bdc2ec111f152279cd5665c30f9c8aa765ab8d89281761c38651f9c729a6d37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d4f684bcb-cgczn" Nov 23 23:23:35.931285 kubelet[3455]: E1123 23:23:35.931260 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d4f684bcb-cgczn_calico-apiserver(7afbc6db-85a8-4a42-8680-e685d44be238)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d4f684bcb-cgczn_calico-apiserver(7afbc6db-85a8-4a42-8680-e685d44be238)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bdc2ec111f152279cd5665c30f9c8aa765ab8d89281761c38651f9c729a6d37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-cgczn" podUID="7afbc6db-85a8-4a42-8680-e685d44be238" Nov 23 
23:23:35.934363 containerd[1892]: time="2025-11-23T23:23:35.934329426Z" level=error msg="Failed to destroy network for sandbox \"1fa95c872213b7f1a45466e337d48b0d951690bf8397c3e8a83578a951161886\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.937136 containerd[1892]: time="2025-11-23T23:23:35.937090667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-949496b99-r5rdh,Uid:99d699bb-d1ee-4f49-ad11-d74927648299,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fa95c872213b7f1a45466e337d48b0d951690bf8397c3e8a83578a951161886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.937375 kubelet[3455]: E1123 23:23:35.937356 3455 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fa95c872213b7f1a45466e337d48b0d951690bf8397c3e8a83578a951161886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.937541 kubelet[3455]: E1123 23:23:35.937446 3455 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fa95c872213b7f1a45466e337d48b0d951690bf8397c3e8a83578a951161886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-949496b99-r5rdh" Nov 23 23:23:35.937541 kubelet[3455]: E1123 23:23:35.937466 3455 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fa95c872213b7f1a45466e337d48b0d951690bf8397c3e8a83578a951161886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-949496b99-r5rdh" Nov 23 23:23:35.937749 kubelet[3455]: E1123 23:23:35.937712 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-949496b99-r5rdh_calico-system(99d699bb-d1ee-4f49-ad11-d74927648299)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-949496b99-r5rdh_calico-system(99d699bb-d1ee-4f49-ad11-d74927648299)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fa95c872213b7f1a45466e337d48b0d951690bf8397c3e8a83578a951161886\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-949496b99-r5rdh" podUID="99d699bb-d1ee-4f49-ad11-d74927648299" Nov 23 23:23:35.938610 containerd[1892]: time="2025-11-23T23:23:35.938584307Z" level=error msg="Failed to destroy network for sandbox \"76535928ccdb8843165cc7d7379b26669d71ebc59828de1aa325258b12f4777b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.939726 containerd[1892]: time="2025-11-23T23:23:35.939609293Z" level=error msg="Failed to destroy network for sandbox \"a96260cc37f948dd1c413e9ba97b75578edd30007b66a265bc2771f6d52e85e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 
23:23:35.941200 containerd[1892]: time="2025-11-23T23:23:35.941173303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4f684bcb-7dv89,Uid:f59281aa-b935-4d43-8373-69a621420431,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"76535928ccdb8843165cc7d7379b26669d71ebc59828de1aa325258b12f4777b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.941530 kubelet[3455]: E1123 23:23:35.941496 3455 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76535928ccdb8843165cc7d7379b26669d71ebc59828de1aa325258b12f4777b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.941678 kubelet[3455]: E1123 23:23:35.941622 3455 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76535928ccdb8843165cc7d7379b26669d71ebc59828de1aa325258b12f4777b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d4f684bcb-7dv89" Nov 23 23:23:35.941678 kubelet[3455]: E1123 23:23:35.941647 3455 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76535928ccdb8843165cc7d7379b26669d71ebc59828de1aa325258b12f4777b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5d4f684bcb-7dv89" Nov 23 23:23:35.941818 kubelet[3455]: E1123 23:23:35.941774 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d4f684bcb-7dv89_calico-apiserver(f59281aa-b935-4d43-8373-69a621420431)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d4f684bcb-7dv89_calico-apiserver(f59281aa-b935-4d43-8373-69a621420431)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76535928ccdb8843165cc7d7379b26669d71ebc59828de1aa325258b12f4777b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-7dv89" podUID="f59281aa-b935-4d43-8373-69a621420431" Nov 23 23:23:35.943566 containerd[1892]: time="2025-11-23T23:23:35.943526931Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hdfkh,Uid:be1e52dc-0aab-46fe-876a-12be408713eb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a96260cc37f948dd1c413e9ba97b75578edd30007b66a265bc2771f6d52e85e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.943782 kubelet[3455]: E1123 23:23:35.943756 3455 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a96260cc37f948dd1c413e9ba97b75578edd30007b66a265bc2771f6d52e85e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:35.943858 kubelet[3455]: E1123 23:23:35.943789 3455 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a96260cc37f948dd1c413e9ba97b75578edd30007b66a265bc2771f6d52e85e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hdfkh" Nov 23 23:23:35.943858 kubelet[3455]: E1123 23:23:35.943803 3455 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a96260cc37f948dd1c413e9ba97b75578edd30007b66a265bc2771f6d52e85e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-hdfkh" Nov 23 23:23:35.943858 kubelet[3455]: E1123 23:23:35.943829 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-hdfkh_calico-system(be1e52dc-0aab-46fe-876a-12be408713eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-hdfkh_calico-system(be1e52dc-0aab-46fe-876a-12be408713eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a96260cc37f948dd1c413e9ba97b75578edd30007b66a265bc2771f6d52e85e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-hdfkh" podUID="be1e52dc-0aab-46fe-876a-12be408713eb" Nov 23 23:23:36.172143 containerd[1892]: time="2025-11-23T23:23:36.171153206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 23 23:23:39.739700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2574716233.mount: Deactivated 
successfully. Nov 23 23:23:40.127852 containerd[1892]: time="2025-11-23T23:23:40.127446986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:40.130074 containerd[1892]: time="2025-11-23T23:23:40.130043990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 23 23:23:40.132696 containerd[1892]: time="2025-11-23T23:23:40.132654107Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:40.135896 containerd[1892]: time="2025-11-23T23:23:40.135830699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:40.136355 containerd[1892]: time="2025-11-23T23:23:40.136065114Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.964241182s" Nov 23 23:23:40.136355 containerd[1892]: time="2025-11-23T23:23:40.136091331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 23 23:23:40.152578 containerd[1892]: time="2025-11-23T23:23:40.152556076Z" level=info msg="CreateContainer within sandbox \"31929bb4c0575490d0d9230d083a65d04febdd4887420df5d548e7b96c856ea5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 23 23:23:40.173094 containerd[1892]: time="2025-11-23T23:23:40.173068104Z" level=info 
msg="Container 110ffcd2108443772caf64e93b571307abe16d95caeaaa19a18a35b58a4efcdb: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:23:40.176042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2239067042.mount: Deactivated successfully. Nov 23 23:23:40.189835 containerd[1892]: time="2025-11-23T23:23:40.189805929Z" level=info msg="CreateContainer within sandbox \"31929bb4c0575490d0d9230d083a65d04febdd4887420df5d548e7b96c856ea5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"110ffcd2108443772caf64e93b571307abe16d95caeaaa19a18a35b58a4efcdb\"" Nov 23 23:23:40.190311 containerd[1892]: time="2025-11-23T23:23:40.190293001Z" level=info msg="StartContainer for \"110ffcd2108443772caf64e93b571307abe16d95caeaaa19a18a35b58a4efcdb\"" Nov 23 23:23:40.191838 containerd[1892]: time="2025-11-23T23:23:40.191813058Z" level=info msg="connecting to shim 110ffcd2108443772caf64e93b571307abe16d95caeaaa19a18a35b58a4efcdb" address="unix:///run/containerd/s/f3e113ecbb7ae136ab2f91580021ae5c48e60cb7475b9a0b5a87e96dacbb114a" protocol=ttrpc version=3 Nov 23 23:23:40.215415 systemd[1]: Started cri-containerd-110ffcd2108443772caf64e93b571307abe16d95caeaaa19a18a35b58a4efcdb.scope - libcontainer container 110ffcd2108443772caf64e93b571307abe16d95caeaaa19a18a35b58a4efcdb. Nov 23 23:23:40.272027 containerd[1892]: time="2025-11-23T23:23:40.271991190Z" level=info msg="StartContainer for \"110ffcd2108443772caf64e93b571307abe16d95caeaaa19a18a35b58a4efcdb\" returns successfully" Nov 23 23:23:40.544796 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 23 23:23:40.544917 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 23 23:23:40.701818 kubelet[3455]: I1123 23:23:40.701787 3455 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99d699bb-d1ee-4f49-ad11-d74927648299-whisker-ca-bundle\") pod \"99d699bb-d1ee-4f49-ad11-d74927648299\" (UID: \"99d699bb-d1ee-4f49-ad11-d74927648299\") " Nov 23 23:23:40.701818 kubelet[3455]: I1123 23:23:40.701823 3455 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvr7k\" (UniqueName: \"kubernetes.io/projected/99d699bb-d1ee-4f49-ad11-d74927648299-kube-api-access-gvr7k\") pod \"99d699bb-d1ee-4f49-ad11-d74927648299\" (UID: \"99d699bb-d1ee-4f49-ad11-d74927648299\") " Nov 23 23:23:40.702581 kubelet[3455]: I1123 23:23:40.701842 3455 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/99d699bb-d1ee-4f49-ad11-d74927648299-whisker-backend-key-pair\") pod \"99d699bb-d1ee-4f49-ad11-d74927648299\" (UID: \"99d699bb-d1ee-4f49-ad11-d74927648299\") " Nov 23 23:23:40.702581 kubelet[3455]: I1123 23:23:40.702147 3455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99d699bb-d1ee-4f49-ad11-d74927648299-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "99d699bb-d1ee-4f49-ad11-d74927648299" (UID: "99d699bb-d1ee-4f49-ad11-d74927648299"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 23 23:23:40.705679 kubelet[3455]: I1123 23:23:40.705651 3455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99d699bb-d1ee-4f49-ad11-d74927648299-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "99d699bb-d1ee-4f49-ad11-d74927648299" (UID: "99d699bb-d1ee-4f49-ad11-d74927648299"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 23 23:23:40.706135 kubelet[3455]: I1123 23:23:40.706114 3455 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99d699bb-d1ee-4f49-ad11-d74927648299-kube-api-access-gvr7k" (OuterVolumeSpecName: "kube-api-access-gvr7k") pod "99d699bb-d1ee-4f49-ad11-d74927648299" (UID: "99d699bb-d1ee-4f49-ad11-d74927648299"). InnerVolumeSpecName "kube-api-access-gvr7k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 23 23:23:40.740518 systemd[1]: var-lib-kubelet-pods-99d699bb\x2dd1ee\x2d4f49\x2dad11\x2dd74927648299-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgvr7k.mount: Deactivated successfully. Nov 23 23:23:40.740593 systemd[1]: var-lib-kubelet-pods-99d699bb\x2dd1ee\x2d4f49\x2dad11\x2dd74927648299-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 23 23:23:40.803003 kubelet[3455]: I1123 23:23:40.802926 3455 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/99d699bb-d1ee-4f49-ad11-d74927648299-whisker-backend-key-pair\") on node \"ci-4459.2.1-a-856cba2a05\" DevicePath \"\"" Nov 23 23:23:40.803003 kubelet[3455]: I1123 23:23:40.802955 3455 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99d699bb-d1ee-4f49-ad11-d74927648299-whisker-ca-bundle\") on node \"ci-4459.2.1-a-856cba2a05\" DevicePath \"\"" Nov 23 23:23:40.803003 kubelet[3455]: I1123 23:23:40.802963 3455 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gvr7k\" (UniqueName: \"kubernetes.io/projected/99d699bb-d1ee-4f49-ad11-d74927648299-kube-api-access-gvr7k\") on node \"ci-4459.2.1-a-856cba2a05\" DevicePath \"\"" Nov 23 23:23:41.194267 systemd[1]: Removed slice kubepods-besteffort-pod99d699bb_d1ee_4f49_ad11_d74927648299.slice - libcontainer container 
kubepods-besteffort-pod99d699bb_d1ee_4f49_ad11_d74927648299.slice. Nov 23 23:23:41.207384 kubelet[3455]: I1123 23:23:41.207019 3455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jvmbt" podStartSLOduration=2.179258985 podStartE2EDuration="15.207004755s" podCreationTimestamp="2025-11-23 23:23:26 +0000 UTC" firstStartedPulling="2025-11-23 23:23:27.108882115 +0000 UTC m=+19.105922232" lastFinishedPulling="2025-11-23 23:23:40.136627877 +0000 UTC m=+32.133668002" observedRunningTime="2025-11-23 23:23:41.206276691 +0000 UTC m=+33.203316824" watchObservedRunningTime="2025-11-23 23:23:41.207004755 +0000 UTC m=+33.204044872" Nov 23 23:23:41.275196 systemd[1]: Created slice kubepods-besteffort-podf26811a6_5336_4c1a_bfb3_9c8fe093c60c.slice - libcontainer container kubepods-besteffort-podf26811a6_5336_4c1a_bfb3_9c8fe093c60c.slice. Nov 23 23:23:41.306267 kubelet[3455]: I1123 23:23:41.306071 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f26811a6-5336-4c1a-bfb3-9c8fe093c60c-whisker-ca-bundle\") pod \"whisker-84cffccc6c-swdgq\" (UID: \"f26811a6-5336-4c1a-bfb3-9c8fe093c60c\") " pod="calico-system/whisker-84cffccc6c-swdgq" Nov 23 23:23:41.306267 kubelet[3455]: I1123 23:23:41.306114 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f26811a6-5336-4c1a-bfb3-9c8fe093c60c-whisker-backend-key-pair\") pod \"whisker-84cffccc6c-swdgq\" (UID: \"f26811a6-5336-4c1a-bfb3-9c8fe093c60c\") " pod="calico-system/whisker-84cffccc6c-swdgq" Nov 23 23:23:41.306267 kubelet[3455]: I1123 23:23:41.306127 3455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk6ms\" (UniqueName: \"kubernetes.io/projected/f26811a6-5336-4c1a-bfb3-9c8fe093c60c-kube-api-access-fk6ms\") pod 
\"whisker-84cffccc6c-swdgq\" (UID: \"f26811a6-5336-4c1a-bfb3-9c8fe093c60c\") " pod="calico-system/whisker-84cffccc6c-swdgq" Nov 23 23:23:41.580142 containerd[1892]: time="2025-11-23T23:23:41.580082029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84cffccc6c-swdgq,Uid:f26811a6-5336-4c1a-bfb3-9c8fe093c60c,Namespace:calico-system,Attempt:0,}" Nov 23 23:23:41.675712 systemd-networkd[1465]: cali4ed5d100a1d: Link UP Nov 23 23:23:41.675857 systemd-networkd[1465]: cali4ed5d100a1d: Gained carrier Nov 23 23:23:41.693009 containerd[1892]: 2025-11-23 23:23:41.599 [INFO][4514] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:23:41.693009 containerd[1892]: 2025-11-23 23:23:41.622 [INFO][4514] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--856cba2a05-k8s-whisker--84cffccc6c--swdgq-eth0 whisker-84cffccc6c- calico-system f26811a6-5336-4c1a-bfb3-9c8fe093c60c 876 0 2025-11-23 23:23:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:84cffccc6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.1-a-856cba2a05 whisker-84cffccc6c-swdgq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4ed5d100a1d [] [] }} ContainerID="bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815" Namespace="calico-system" Pod="whisker-84cffccc6c-swdgq" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-whisker--84cffccc6c--swdgq-" Nov 23 23:23:41.693009 containerd[1892]: 2025-11-23 23:23:41.622 [INFO][4514] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815" Namespace="calico-system" Pod="whisker-84cffccc6c-swdgq" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-whisker--84cffccc6c--swdgq-eth0" Nov 23 23:23:41.693009 containerd[1892]: 2025-11-23 
23:23:41.640 [INFO][4525] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815" HandleID="k8s-pod-network.bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815" Workload="ci--4459.2.1--a--856cba2a05-k8s-whisker--84cffccc6c--swdgq-eth0" Nov 23 23:23:41.693399 containerd[1892]: 2025-11-23 23:23:41.640 [INFO][4525] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815" HandleID="k8s-pod-network.bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815" Workload="ci--4459.2.1--a--856cba2a05-k8s-whisker--84cffccc6c--swdgq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-a-856cba2a05", "pod":"whisker-84cffccc6c-swdgq", "timestamp":"2025-11-23 23:23:41.640259326 +0000 UTC"}, Hostname:"ci-4459.2.1-a-856cba2a05", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:23:41.693399 containerd[1892]: 2025-11-23 23:23:41.640 [INFO][4525] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:23:41.693399 containerd[1892]: 2025-11-23 23:23:41.640 [INFO][4525] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:23:41.693399 containerd[1892]: 2025-11-23 23:23:41.640 [INFO][4525] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-856cba2a05' Nov 23 23:23:41.693399 containerd[1892]: 2025-11-23 23:23:41.645 [INFO][4525] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:41.693399 containerd[1892]: 2025-11-23 23:23:41.649 [INFO][4525] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:41.693399 containerd[1892]: 2025-11-23 23:23:41.652 [INFO][4525] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:41.693399 containerd[1892]: 2025-11-23 23:23:41.653 [INFO][4525] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:41.693399 containerd[1892]: 2025-11-23 23:23:41.654 [INFO][4525] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:41.694159 containerd[1892]: 2025-11-23 23:23:41.655 [INFO][4525] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:41.694159 containerd[1892]: 2025-11-23 23:23:41.656 [INFO][4525] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815 Nov 23 23:23:41.694159 containerd[1892]: 2025-11-23 23:23:41.660 [INFO][4525] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:41.694159 containerd[1892]: 2025-11-23 23:23:41.667 [INFO][4525] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.12.129/26] block=192.168.12.128/26 handle="k8s-pod-network.bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:41.694159 containerd[1892]: 2025-11-23 23:23:41.668 [INFO][4525] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.129/26] handle="k8s-pod-network.bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:41.694159 containerd[1892]: 2025-11-23 23:23:41.668 [INFO][4525] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:23:41.694159 containerd[1892]: 2025-11-23 23:23:41.668 [INFO][4525] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.129/26] IPv6=[] ContainerID="bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815" HandleID="k8s-pod-network.bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815" Workload="ci--4459.2.1--a--856cba2a05-k8s-whisker--84cffccc6c--swdgq-eth0" Nov 23 23:23:41.694337 containerd[1892]: 2025-11-23 23:23:41.670 [INFO][4514] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815" Namespace="calico-system" Pod="whisker-84cffccc6c-swdgq" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-whisker--84cffccc6c--swdgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--856cba2a05-k8s-whisker--84cffccc6c--swdgq-eth0", GenerateName:"whisker-84cffccc6c-", Namespace:"calico-system", SelfLink:"", UID:"f26811a6-5336-4c1a-bfb3-9c8fe093c60c", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84cffccc6c", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-856cba2a05", ContainerID:"", Pod:"whisker-84cffccc6c-swdgq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.12.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4ed5d100a1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:41.694337 containerd[1892]: 2025-11-23 23:23:41.670 [INFO][4514] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.129/32] ContainerID="bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815" Namespace="calico-system" Pod="whisker-84cffccc6c-swdgq" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-whisker--84cffccc6c--swdgq-eth0" Nov 23 23:23:41.694435 containerd[1892]: 2025-11-23 23:23:41.670 [INFO][4514] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ed5d100a1d ContainerID="bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815" Namespace="calico-system" Pod="whisker-84cffccc6c-swdgq" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-whisker--84cffccc6c--swdgq-eth0" Nov 23 23:23:41.694435 containerd[1892]: 2025-11-23 23:23:41.676 [INFO][4514] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815" Namespace="calico-system" Pod="whisker-84cffccc6c-swdgq" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-whisker--84cffccc6c--swdgq-eth0" Nov 23 23:23:41.694502 containerd[1892]: 2025-11-23 23:23:41.676 [INFO][4514] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815" Namespace="calico-system" Pod="whisker-84cffccc6c-swdgq" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-whisker--84cffccc6c--swdgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--856cba2a05-k8s-whisker--84cffccc6c--swdgq-eth0", GenerateName:"whisker-84cffccc6c-", Namespace:"calico-system", SelfLink:"", UID:"f26811a6-5336-4c1a-bfb3-9c8fe093c60c", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84cffccc6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-856cba2a05", ContainerID:"bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815", Pod:"whisker-84cffccc6c-swdgq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.12.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4ed5d100a1d", MAC:"ba:dd:b1:6b:d7:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:41.694573 containerd[1892]: 2025-11-23 23:23:41.688 [INFO][4514] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815" Namespace="calico-system" Pod="whisker-84cffccc6c-swdgq" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-whisker--84cffccc6c--swdgq-eth0" Nov 23 23:23:41.732425 containerd[1892]: time="2025-11-23T23:23:41.732363190Z" level=info msg="connecting to shim bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815" address="unix:///run/containerd/s/ef83e464710178b75184febbc169185d4445cdbd59401c3dee227a996819687b" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:41.756366 systemd[1]: Started cri-containerd-bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815.scope - libcontainer container bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815. Nov 23 23:23:41.785035 containerd[1892]: time="2025-11-23T23:23:41.784983032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84cffccc6c-swdgq,Uid:f26811a6-5336-4c1a-bfb3-9c8fe093c60c,Namespace:calico-system,Attempt:0,} returns sandbox id \"bd38bc0a10496ee88f11b509cfac566639192b9ca2b5150865412d7ecdf1a815\"" Nov 23 23:23:41.787314 containerd[1892]: time="2025-11-23T23:23:41.787286971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:23:42.033568 containerd[1892]: time="2025-11-23T23:23:42.033529225Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:42.095794 kubelet[3455]: I1123 23:23:42.095760 3455 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99d699bb-d1ee-4f49-ad11-d74927648299" path="/var/lib/kubelet/pods/99d699bb-d1ee-4f49-ad11-d74927648299/volumes" Nov 23 23:23:42.458159 containerd[1892]: time="2025-11-23T23:23:42.457989677Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:23:42.458159 containerd[1892]: time="2025-11-23T23:23:42.458017838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:23:42.458483 kubelet[3455]: E1123 23:23:42.458443 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:23:42.458593 kubelet[3455]: E1123 23:23:42.458565 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:23:42.463535 kubelet[3455]: E1123 23:23:42.463495 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1c6e586d421c470cad8a5776d76af0cb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fk6ms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84cffccc6c-swdgq_calico-system(f26811a6-5336-4c1a-bfb3-9c8fe093c60c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:42.465643 containerd[1892]: time="2025-11-23T23:23:42.465588613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 
23:23:42.593739 systemd-networkd[1465]: vxlan.calico: Link UP Nov 23 23:23:42.593746 systemd-networkd[1465]: vxlan.calico: Gained carrier Nov 23 23:23:42.746338 systemd-networkd[1465]: cali4ed5d100a1d: Gained IPv6LL Nov 23 23:23:42.747119 containerd[1892]: time="2025-11-23T23:23:42.747006621Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:42.763111 containerd[1892]: time="2025-11-23T23:23:42.763065896Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:23:42.763182 containerd[1892]: time="2025-11-23T23:23:42.763152459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:23:42.763541 kubelet[3455]: E1123 23:23:42.763321 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:23:42.763541 kubelet[3455]: E1123 23:23:42.763370 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:23:42.763647 kubelet[3455]: E1123 23:23:42.763486 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk6ms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84cffccc6c-swdgq_calico-system(f26811a6-5336-4c1a-bfb3-9c8fe093c60c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:42.764874 kubelet[3455]: E1123 23:23:42.764784 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84cffccc6c-swdgq" podUID="f26811a6-5336-4c1a-bfb3-9c8fe093c60c" Nov 23 23:23:43.193874 kubelet[3455]: E1123 23:23:43.193831 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84cffccc6c-swdgq" podUID="f26811a6-5336-4c1a-bfb3-9c8fe093c60c" Nov 23 23:23:43.834402 systemd-networkd[1465]: vxlan.calico: Gained IPv6LL Nov 23 23:23:47.093857 containerd[1892]: time="2025-11-23T23:23:47.093815035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4f684bcb-7dv89,Uid:f59281aa-b935-4d43-8373-69a621420431,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:23:47.094185 containerd[1892]: time="2025-11-23T23:23:47.094156118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mzswh,Uid:dc4e36a9-c245-455d-ada2-16c405b7bde8,Namespace:calico-system,Attempt:0,}" Nov 23 23:23:47.213198 systemd-networkd[1465]: cali9494d5c666d: Link UP Nov 23 23:23:47.214447 systemd-networkd[1465]: cali9494d5c666d: Gained carrier Nov 23 23:23:47.230963 containerd[1892]: 2025-11-23 23:23:47.148 [INFO][4788] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--856cba2a05-k8s-csi--node--driver--mzswh-eth0 csi-node-driver- calico-system dc4e36a9-c245-455d-ada2-16c405b7bde8 700 0 2025-11-23 23:23:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.1-a-856cba2a05 csi-node-driver-mzswh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9494d5c666d [] [] }} ContainerID="c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417" Namespace="calico-system" Pod="csi-node-driver-mzswh" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-csi--node--driver--mzswh-" Nov 23 23:23:47.230963 containerd[1892]: 2025-11-23 23:23:47.149 [INFO][4788] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417" Namespace="calico-system" Pod="csi-node-driver-mzswh" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-csi--node--driver--mzswh-eth0" Nov 23 23:23:47.230963 containerd[1892]: 2025-11-23 23:23:47.174 [INFO][4802] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417" HandleID="k8s-pod-network.c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417" Workload="ci--4459.2.1--a--856cba2a05-k8s-csi--node--driver--mzswh-eth0" Nov 23 23:23:47.231160 containerd[1892]: 2025-11-23 23:23:47.174 [INFO][4802] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417" HandleID="k8s-pod-network.c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417" Workload="ci--4459.2.1--a--856cba2a05-k8s-csi--node--driver--mzswh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d38b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-a-856cba2a05", "pod":"csi-node-driver-mzswh", "timestamp":"2025-11-23 23:23:47.174140348 +0000 UTC"}, Hostname:"ci-4459.2.1-a-856cba2a05", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:23:47.231160 containerd[1892]: 2025-11-23 23:23:47.174 [INFO][4802] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:23:47.231160 containerd[1892]: 2025-11-23 23:23:47.174 [INFO][4802] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:23:47.231160 containerd[1892]: 2025-11-23 23:23:47.174 [INFO][4802] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-856cba2a05' Nov 23 23:23:47.231160 containerd[1892]: 2025-11-23 23:23:47.180 [INFO][4802] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:47.231160 containerd[1892]: 2025-11-23 23:23:47.185 [INFO][4802] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:47.231160 containerd[1892]: 2025-11-23 23:23:47.188 [INFO][4802] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:47.231160 containerd[1892]: 2025-11-23 23:23:47.189 [INFO][4802] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:47.231160 containerd[1892]: 2025-11-23 23:23:47.190 [INFO][4802] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:47.231393 containerd[1892]: 2025-11-23 23:23:47.190 [INFO][4802] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:47.231393 containerd[1892]: 2025-11-23 23:23:47.192 [INFO][4802] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417 Nov 23 23:23:47.231393 containerd[1892]: 2025-11-23 23:23:47.199 [INFO][4802] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:47.231393 containerd[1892]: 2025-11-23 23:23:47.204 [INFO][4802] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.12.130/26] block=192.168.12.128/26 handle="k8s-pod-network.c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:47.231393 containerd[1892]: 2025-11-23 23:23:47.204 [INFO][4802] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.130/26] handle="k8s-pod-network.c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:47.231393 containerd[1892]: 2025-11-23 23:23:47.204 [INFO][4802] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:23:47.231393 containerd[1892]: 2025-11-23 23:23:47.204 [INFO][4802] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.130/26] IPv6=[] ContainerID="c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417" HandleID="k8s-pod-network.c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417" Workload="ci--4459.2.1--a--856cba2a05-k8s-csi--node--driver--mzswh-eth0" Nov 23 23:23:47.231491 containerd[1892]: 2025-11-23 23:23:47.206 [INFO][4788] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417" Namespace="calico-system" Pod="csi-node-driver-mzswh" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-csi--node--driver--mzswh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--856cba2a05-k8s-csi--node--driver--mzswh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dc4e36a9-c245-455d-ada2-16c405b7bde8", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-856cba2a05", ContainerID:"", Pod:"csi-node-driver-mzswh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9494d5c666d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:47.231528 containerd[1892]: 2025-11-23 23:23:47.206 [INFO][4788] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.130/32] ContainerID="c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417" Namespace="calico-system" Pod="csi-node-driver-mzswh" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-csi--node--driver--mzswh-eth0" Nov 23 23:23:47.231528 containerd[1892]: 2025-11-23 23:23:47.206 [INFO][4788] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9494d5c666d ContainerID="c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417" Namespace="calico-system" Pod="csi-node-driver-mzswh" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-csi--node--driver--mzswh-eth0" Nov 23 23:23:47.231528 containerd[1892]: 2025-11-23 23:23:47.215 [INFO][4788] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417" Namespace="calico-system" Pod="csi-node-driver-mzswh" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-csi--node--driver--mzswh-eth0" Nov 23 23:23:47.231570 
containerd[1892]: 2025-11-23 23:23:47.215 [INFO][4788] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417" Namespace="calico-system" Pod="csi-node-driver-mzswh" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-csi--node--driver--mzswh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--856cba2a05-k8s-csi--node--driver--mzswh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dc4e36a9-c245-455d-ada2-16c405b7bde8", ResourceVersion:"700", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-856cba2a05", ContainerID:"c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417", Pod:"csi-node-driver-mzswh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9494d5c666d", MAC:"ce:f2:5b:fe:f9:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:47.231603 containerd[1892]: 
2025-11-23 23:23:47.229 [INFO][4788] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417" Namespace="calico-system" Pod="csi-node-driver-mzswh" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-csi--node--driver--mzswh-eth0" Nov 23 23:23:47.271394 containerd[1892]: time="2025-11-23T23:23:47.271326322Z" level=info msg="connecting to shim c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417" address="unix:///run/containerd/s/e8fbfd6ba9c2fab5ad38ad2128b667062e2e541f50a2612a8948e9a150b82fae" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:47.292542 systemd[1]: Started cri-containerd-c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417.scope - libcontainer container c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417. Nov 23 23:23:47.318188 containerd[1892]: time="2025-11-23T23:23:47.317850798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mzswh,Uid:dc4e36a9-c245-455d-ada2-16c405b7bde8,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0e44907ef353c5e04086bc90be30a8feda443566ce232de906f90aae1bca417\"" Nov 23 23:23:47.320874 systemd-networkd[1465]: cali686f953970b: Link UP Nov 23 23:23:47.321905 containerd[1892]: time="2025-11-23T23:23:47.321358936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:23:47.322370 systemd-networkd[1465]: cali686f953970b: Gained carrier Nov 23 23:23:47.338485 containerd[1892]: 2025-11-23 23:23:47.153 [INFO][4779] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--7dv89-eth0 calico-apiserver-5d4f684bcb- calico-apiserver f59281aa-b935-4d43-8373-69a621420431 814 0 2025-11-23 23:23:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d4f684bcb 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.1-a-856cba2a05 calico-apiserver-5d4f684bcb-7dv89 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali686f953970b [] [] }} ContainerID="7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7" Namespace="calico-apiserver" Pod="calico-apiserver-5d4f684bcb-7dv89" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--7dv89-" Nov 23 23:23:47.338485 containerd[1892]: 2025-11-23 23:23:47.153 [INFO][4779] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7" Namespace="calico-apiserver" Pod="calico-apiserver-5d4f684bcb-7dv89" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--7dv89-eth0" Nov 23 23:23:47.338485 containerd[1892]: 2025-11-23 23:23:47.182 [INFO][4807] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7" HandleID="k8s-pod-network.7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7" Workload="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--7dv89-eth0" Nov 23 23:23:47.338614 containerd[1892]: 2025-11-23 23:23:47.182 [INFO][4807] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7" HandleID="k8s-pod-network.7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7" Workload="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--7dv89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.1-a-856cba2a05", "pod":"calico-apiserver-5d4f684bcb-7dv89", "timestamp":"2025-11-23 23:23:47.182478844 +0000 UTC"}, 
Hostname:"ci-4459.2.1-a-856cba2a05", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:23:47.338614 containerd[1892]: 2025-11-23 23:23:47.182 [INFO][4807] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:23:47.338614 containerd[1892]: 2025-11-23 23:23:47.204 [INFO][4807] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:23:47.338614 containerd[1892]: 2025-11-23 23:23:47.204 [INFO][4807] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-856cba2a05' Nov 23 23:23:47.338614 containerd[1892]: 2025-11-23 23:23:47.280 [INFO][4807] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:47.338614 containerd[1892]: 2025-11-23 23:23:47.289 [INFO][4807] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:47.338614 containerd[1892]: 2025-11-23 23:23:47.295 [INFO][4807] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:47.338614 containerd[1892]: 2025-11-23 23:23:47.297 [INFO][4807] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:47.338614 containerd[1892]: 2025-11-23 23:23:47.298 [INFO][4807] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:47.338755 containerd[1892]: 2025-11-23 23:23:47.298 [INFO][4807] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:47.338755 containerd[1892]: 2025-11-23 23:23:47.299 
[INFO][4807] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7 Nov 23 23:23:47.338755 containerd[1892]: 2025-11-23 23:23:47.306 [INFO][4807] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:47.338755 containerd[1892]: 2025-11-23 23:23:47.315 [INFO][4807] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.131/26] block=192.168.12.128/26 handle="k8s-pod-network.7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:47.338755 containerd[1892]: 2025-11-23 23:23:47.315 [INFO][4807] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.131/26] handle="k8s-pod-network.7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:47.338755 containerd[1892]: 2025-11-23 23:23:47.315 [INFO][4807] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:23:47.338755 containerd[1892]: 2025-11-23 23:23:47.315 [INFO][4807] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.131/26] IPv6=[] ContainerID="7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7" HandleID="k8s-pod-network.7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7" Workload="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--7dv89-eth0" Nov 23 23:23:47.338860 containerd[1892]: 2025-11-23 23:23:47.318 [INFO][4779] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7" Namespace="calico-apiserver" Pod="calico-apiserver-5d4f684bcb-7dv89" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--7dv89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--7dv89-eth0", GenerateName:"calico-apiserver-5d4f684bcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"f59281aa-b935-4d43-8373-69a621420431", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d4f684bcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-856cba2a05", ContainerID:"", Pod:"calico-apiserver-5d4f684bcb-7dv89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.12.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali686f953970b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:47.338895 containerd[1892]: 2025-11-23 23:23:47.318 [INFO][4779] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.131/32] ContainerID="7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7" Namespace="calico-apiserver" Pod="calico-apiserver-5d4f684bcb-7dv89" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--7dv89-eth0" Nov 23 23:23:47.338895 containerd[1892]: 2025-11-23 23:23:47.318 [INFO][4779] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali686f953970b ContainerID="7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7" Namespace="calico-apiserver" Pod="calico-apiserver-5d4f684bcb-7dv89" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--7dv89-eth0" Nov 23 23:23:47.338895 containerd[1892]: 2025-11-23 23:23:47.323 [INFO][4779] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7" Namespace="calico-apiserver" Pod="calico-apiserver-5d4f684bcb-7dv89" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--7dv89-eth0" Nov 23 23:23:47.338937 containerd[1892]: 2025-11-23 23:23:47.323 [INFO][4779] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7" Namespace="calico-apiserver" Pod="calico-apiserver-5d4f684bcb-7dv89" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--7dv89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--7dv89-eth0", GenerateName:"calico-apiserver-5d4f684bcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"f59281aa-b935-4d43-8373-69a621420431", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d4f684bcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-856cba2a05", ContainerID:"7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7", Pod:"calico-apiserver-5d4f684bcb-7dv89", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali686f953970b", MAC:"72:4f:9a:f8:84:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:47.338970 containerd[1892]: 2025-11-23 23:23:47.335 [INFO][4779] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7" Namespace="calico-apiserver" Pod="calico-apiserver-5d4f684bcb-7dv89" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--7dv89-eth0" Nov 23 23:23:47.370835 containerd[1892]: time="2025-11-23T23:23:47.370738617Z" 
level=info msg="connecting to shim 7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7" address="unix:///run/containerd/s/8144d3cd5c097bccc213cefb4db2555aaf47b02e6bcdc720631a787b79fa7583" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:47.388355 systemd[1]: Started cri-containerd-7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7.scope - libcontainer container 7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7. Nov 23 23:23:47.416784 containerd[1892]: time="2025-11-23T23:23:47.416666793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4f684bcb-7dv89,Uid:f59281aa-b935-4d43-8373-69a621420431,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7de64a65c7529ab5d5686b30334f4e3121b3f90a9e3f7a599767b012d6ddc3a7\"" Nov 23 23:23:47.559432 containerd[1892]: time="2025-11-23T23:23:47.559304079Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:47.561813 containerd[1892]: time="2025-11-23T23:23:47.561765144Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:23:47.562073 containerd[1892]: time="2025-11-23T23:23:47.561805689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:23:47.562142 kubelet[3455]: E1123 23:23:47.562097 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:23:47.562433 kubelet[3455]: E1123 23:23:47.562159 3455 kuberuntime_image.go:42] "Failed 
to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:23:47.562433 kubelet[3455]: E1123 23:23:47.562348 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptkmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},Ap
pArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzswh_calico-system(dc4e36a9-c245-455d-ada2-16c405b7bde8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:47.562833 containerd[1892]: time="2025-11-23T23:23:47.562721927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:23:47.912185 containerd[1892]: time="2025-11-23T23:23:47.912151045Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:47.914897 containerd[1892]: time="2025-11-23T23:23:47.914821910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:23:47.914897 containerd[1892]: time="2025-11-23T23:23:47.914864640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:23:47.915028 kubelet[3455]: E1123 23:23:47.914985 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:23:47.915028 kubelet[3455]: E1123 23:23:47.915023 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:23:47.915351 kubelet[3455]: E1123 23:23:47.915224 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlrtt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d4f684bcb-7dv89_calico-apiserver(f59281aa-b935-4d43-8373-69a621420431): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:47.916439 kubelet[3455]: E1123 23:23:47.916406 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-7dv89" podUID="f59281aa-b935-4d43-8373-69a621420431" Nov 23 23:23:47.916524 containerd[1892]: time="2025-11-23T23:23:47.916493401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:23:48.095169 containerd[1892]: 
time="2025-11-23T23:23:48.094553026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56586bc88d-xbb5n,Uid:15476924-84af-4e25-8a82-221156412f5f,Namespace:calico-system,Attempt:0,}" Nov 23 23:23:48.095169 containerd[1892]: time="2025-11-23T23:23:48.094885340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dhkjl,Uid:6b2f94be-84e3-478d-b100-8fe53aa25912,Namespace:kube-system,Attempt:0,}" Nov 23 23:23:48.168969 containerd[1892]: time="2025-11-23T23:23:48.168887660Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:48.171571 containerd[1892]: time="2025-11-23T23:23:48.171541077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:23:48.171762 containerd[1892]: time="2025-11-23T23:23:48.171584470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:23:48.171925 kubelet[3455]: E1123 23:23:48.171899 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:23:48.172764 kubelet[3455]: E1123 23:23:48.172693 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:23:48.172925 kubelet[3455]: E1123 23:23:48.172856 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptkmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,E
nvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzswh_calico-system(dc4e36a9-c245-455d-ada2-16c405b7bde8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:48.174469 kubelet[3455]: E1123 23:23:48.174412 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzswh" podUID="dc4e36a9-c245-455d-ada2-16c405b7bde8" Nov 23 23:23:48.210639 kubelet[3455]: E1123 23:23:48.210581 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5d4f684bcb-7dv89" podUID="f59281aa-b935-4d43-8373-69a621420431" Nov 23 23:23:48.210732 kubelet[3455]: E1123 23:23:48.210670 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzswh" podUID="dc4e36a9-c245-455d-ada2-16c405b7bde8" Nov 23 23:23:48.233646 systemd-networkd[1465]: caliaeaacfb1f95: Link UP Nov 23 23:23:48.235983 systemd-networkd[1465]: caliaeaacfb1f95: Gained carrier Nov 23 23:23:48.257743 containerd[1892]: 2025-11-23 23:23:48.151 [INFO][4941] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--dhkjl-eth0 coredns-674b8bbfcf- kube-system 6b2f94be-84e3-478d-b100-8fe53aa25912 810 0 2025-11-23 23:23:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.1-a-856cba2a05 coredns-674b8bbfcf-dhkjl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaeaacfb1f95 [{dns UDP 
53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhkjl" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--dhkjl-" Nov 23 23:23:48.257743 containerd[1892]: 2025-11-23 23:23:48.151 [INFO][4941] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhkjl" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--dhkjl-eth0" Nov 23 23:23:48.257743 containerd[1892]: 2025-11-23 23:23:48.172 [INFO][4955] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8" HandleID="k8s-pod-network.cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8" Workload="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--dhkjl-eth0" Nov 23 23:23:48.257868 containerd[1892]: 2025-11-23 23:23:48.173 [INFO][4955] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8" HandleID="k8s-pod-network.cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8" Workload="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--dhkjl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3910), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.1-a-856cba2a05", "pod":"coredns-674b8bbfcf-dhkjl", "timestamp":"2025-11-23 23:23:48.17256062 +0000 UTC"}, Hostname:"ci-4459.2.1-a-856cba2a05", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:23:48.257868 containerd[1892]: 2025-11-23 23:23:48.173 [INFO][4955] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Nov 23 23:23:48.257868 containerd[1892]: 2025-11-23 23:23:48.173 [INFO][4955] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:23:48.257868 containerd[1892]: 2025-11-23 23:23:48.173 [INFO][4955] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-856cba2a05' Nov 23 23:23:48.257868 containerd[1892]: 2025-11-23 23:23:48.183 [INFO][4955] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:48.257868 containerd[1892]: 2025-11-23 23:23:48.187 [INFO][4955] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:48.257868 containerd[1892]: 2025-11-23 23:23:48.190 [INFO][4955] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:48.257868 containerd[1892]: 2025-11-23 23:23:48.192 [INFO][4955] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:48.257868 containerd[1892]: 2025-11-23 23:23:48.193 [INFO][4955] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:48.257998 containerd[1892]: 2025-11-23 23:23:48.194 [INFO][4955] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:48.257998 containerd[1892]: 2025-11-23 23:23:48.195 [INFO][4955] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8 Nov 23 23:23:48.257998 containerd[1892]: 2025-11-23 23:23:48.201 [INFO][4955] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 
handle="k8s-pod-network.cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:48.257998 containerd[1892]: 2025-11-23 23:23:48.212 [INFO][4955] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.132/26] block=192.168.12.128/26 handle="k8s-pod-network.cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:48.257998 containerd[1892]: 2025-11-23 23:23:48.212 [INFO][4955] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.132/26] handle="k8s-pod-network.cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:48.257998 containerd[1892]: 2025-11-23 23:23:48.212 [INFO][4955] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:23:48.257998 containerd[1892]: 2025-11-23 23:23:48.212 [INFO][4955] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.132/26] IPv6=[] ContainerID="cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8" HandleID="k8s-pod-network.cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8" Workload="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--dhkjl-eth0" Nov 23 23:23:48.258089 containerd[1892]: 2025-11-23 23:23:48.215 [INFO][4941] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhkjl" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--dhkjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--dhkjl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6b2f94be-84e3-478d-b100-8fe53aa25912", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 13, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-856cba2a05", ContainerID:"", Pod:"coredns-674b8bbfcf-dhkjl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaeaacfb1f95", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:48.258089 containerd[1892]: 2025-11-23 23:23:48.215 [INFO][4941] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.132/32] ContainerID="cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhkjl" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--dhkjl-eth0" Nov 23 23:23:48.258089 containerd[1892]: 2025-11-23 23:23:48.215 [INFO][4941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaeaacfb1f95 ContainerID="cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-dhkjl" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--dhkjl-eth0" Nov 23 23:23:48.258089 containerd[1892]: 2025-11-23 23:23:48.235 [INFO][4941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhkjl" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--dhkjl-eth0" Nov 23 23:23:48.258089 containerd[1892]: 2025-11-23 23:23:48.239 [INFO][4941] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhkjl" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--dhkjl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--dhkjl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6b2f94be-84e3-478d-b100-8fe53aa25912", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-856cba2a05", ContainerID:"cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8", Pod:"coredns-674b8bbfcf-dhkjl", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.12.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaeaacfb1f95", MAC:"02:7f:a6:b4:7e:2e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:48.258089 containerd[1892]: 2025-11-23 23:23:48.255 [INFO][4941] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8" Namespace="kube-system" Pod="coredns-674b8bbfcf-dhkjl" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--dhkjl-eth0" Nov 23 23:23:48.301436 containerd[1892]: time="2025-11-23T23:23:48.301380749Z" level=info msg="connecting to shim cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8" address="unix:///run/containerd/s/949fb2e600e8a0b8b6b5473ab202a00535f56470c8d81f0f52357f7310d4c6b8" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:48.325310 systemd-networkd[1465]: cali670b927c1db: Link UP Nov 23 23:23:48.326099 systemd-networkd[1465]: cali670b927c1db: Gained carrier Nov 23 23:23:48.330374 systemd[1]: Started cri-containerd-cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8.scope - libcontainer container cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8. 
Nov 23 23:23:48.342757 containerd[1892]: 2025-11-23 23:23:48.152 [INFO][4931] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--856cba2a05-k8s-calico--kube--controllers--56586bc88d--xbb5n-eth0 calico-kube-controllers-56586bc88d- calico-system 15476924-84af-4e25-8a82-221156412f5f 812 0 2025-11-23 23:23:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:56586bc88d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.1-a-856cba2a05 calico-kube-controllers-56586bc88d-xbb5n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali670b927c1db [] [] }} ContainerID="56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0" Namespace="calico-system" Pod="calico-kube-controllers-56586bc88d-xbb5n" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--kube--controllers--56586bc88d--xbb5n-" Nov 23 23:23:48.342757 containerd[1892]: 2025-11-23 23:23:48.152 [INFO][4931] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0" Namespace="calico-system" Pod="calico-kube-controllers-56586bc88d-xbb5n" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--kube--controllers--56586bc88d--xbb5n-eth0" Nov 23 23:23:48.342757 containerd[1892]: 2025-11-23 23:23:48.174 [INFO][4957] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0" HandleID="k8s-pod-network.56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0" Workload="ci--4459.2.1--a--856cba2a05-k8s-calico--kube--controllers--56586bc88d--xbb5n-eth0" Nov 23 23:23:48.342757 containerd[1892]: 2025-11-23 23:23:48.175 [INFO][4957] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0" HandleID="k8s-pod-network.56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0" Workload="ci--4459.2.1--a--856cba2a05-k8s-calico--kube--controllers--56586bc88d--xbb5n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-a-856cba2a05", "pod":"calico-kube-controllers-56586bc88d-xbb5n", "timestamp":"2025-11-23 23:23:48.174956653 +0000 UTC"}, Hostname:"ci-4459.2.1-a-856cba2a05", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:23:48.342757 containerd[1892]: 2025-11-23 23:23:48.175 [INFO][4957] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:23:48.342757 containerd[1892]: 2025-11-23 23:23:48.212 [INFO][4957] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:23:48.342757 containerd[1892]: 2025-11-23 23:23:48.213 [INFO][4957] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-856cba2a05' Nov 23 23:23:48.342757 containerd[1892]: 2025-11-23 23:23:48.283 [INFO][4957] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:48.342757 containerd[1892]: 2025-11-23 23:23:48.288 [INFO][4957] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:48.342757 containerd[1892]: 2025-11-23 23:23:48.294 [INFO][4957] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:48.342757 containerd[1892]: 2025-11-23 23:23:48.296 [INFO][4957] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:48.342757 containerd[1892]: 2025-11-23 23:23:48.299 [INFO][4957] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:48.342757 containerd[1892]: 2025-11-23 23:23:48.300 [INFO][4957] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:48.342757 containerd[1892]: 2025-11-23 23:23:48.302 [INFO][4957] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0 Nov 23 23:23:48.342757 containerd[1892]: 2025-11-23 23:23:48.307 [INFO][4957] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:48.342757 containerd[1892]: 2025-11-23 23:23:48.318 [INFO][4957] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.12.133/26] block=192.168.12.128/26 handle="k8s-pod-network.56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:48.342757 containerd[1892]: 2025-11-23 23:23:48.318 [INFO][4957] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.133/26] handle="k8s-pod-network.56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:48.342757 containerd[1892]: 2025-11-23 23:23:48.318 [INFO][4957] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:23:48.342757 containerd[1892]: 2025-11-23 23:23:48.318 [INFO][4957] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.133/26] IPv6=[] ContainerID="56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0" HandleID="k8s-pod-network.56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0" Workload="ci--4459.2.1--a--856cba2a05-k8s-calico--kube--controllers--56586bc88d--xbb5n-eth0" Nov 23 23:23:48.343593 containerd[1892]: 2025-11-23 23:23:48.321 [INFO][4931] cni-plugin/k8s.go 418: Populated endpoint ContainerID="56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0" Namespace="calico-system" Pod="calico-kube-controllers-56586bc88d-xbb5n" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--kube--controllers--56586bc88d--xbb5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--856cba2a05-k8s-calico--kube--controllers--56586bc88d--xbb5n-eth0", GenerateName:"calico-kube-controllers-56586bc88d-", Namespace:"calico-system", SelfLink:"", UID:"15476924-84af-4e25-8a82-221156412f5f", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56586bc88d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-856cba2a05", ContainerID:"", Pod:"calico-kube-controllers-56586bc88d-xbb5n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali670b927c1db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:48.343593 containerd[1892]: 2025-11-23 23:23:48.321 [INFO][4931] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.133/32] ContainerID="56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0" Namespace="calico-system" Pod="calico-kube-controllers-56586bc88d-xbb5n" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--kube--controllers--56586bc88d--xbb5n-eth0" Nov 23 23:23:48.343593 containerd[1892]: 2025-11-23 23:23:48.321 [INFO][4931] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali670b927c1db ContainerID="56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0" Namespace="calico-system" Pod="calico-kube-controllers-56586bc88d-xbb5n" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--kube--controllers--56586bc88d--xbb5n-eth0" Nov 23 23:23:48.343593 containerd[1892]: 2025-11-23 23:23:48.327 [INFO][4931] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0" Namespace="calico-system" Pod="calico-kube-controllers-56586bc88d-xbb5n" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--kube--controllers--56586bc88d--xbb5n-eth0" Nov 23 23:23:48.343593 containerd[1892]: 2025-11-23 23:23:48.327 [INFO][4931] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0" Namespace="calico-system" Pod="calico-kube-controllers-56586bc88d-xbb5n" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--kube--controllers--56586bc88d--xbb5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--856cba2a05-k8s-calico--kube--controllers--56586bc88d--xbb5n-eth0", GenerateName:"calico-kube-controllers-56586bc88d-", Namespace:"calico-system", SelfLink:"", UID:"15476924-84af-4e25-8a82-221156412f5f", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56586bc88d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-856cba2a05", ContainerID:"56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0", Pod:"calico-kube-controllers-56586bc88d-xbb5n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.133/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali670b927c1db", MAC:"4a:e0:68:5f:cb:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:48.343593 containerd[1892]: 2025-11-23 23:23:48.340 [INFO][4931] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0" Namespace="calico-system" Pod="calico-kube-controllers-56586bc88d-xbb5n" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--kube--controllers--56586bc88d--xbb5n-eth0" Nov 23 23:23:48.379870 containerd[1892]: time="2025-11-23T23:23:48.379833188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dhkjl,Uid:6b2f94be-84e3-478d-b100-8fe53aa25912,Namespace:kube-system,Attempt:0,} returns sandbox id \"cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8\"" Nov 23 23:23:48.382545 containerd[1892]: time="2025-11-23T23:23:48.382498461Z" level=info msg="connecting to shim 56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0" address="unix:///run/containerd/s/55b84f47cbcdb5417d83a6eea8b86a9548dcbdd48426fabe8dbc60220e77fe89" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:48.389955 containerd[1892]: time="2025-11-23T23:23:48.389831996Z" level=info msg="CreateContainer within sandbox \"cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:23:48.405380 systemd[1]: Started cri-containerd-56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0.scope - libcontainer container 56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0. 
Nov 23 23:23:48.406798 containerd[1892]: time="2025-11-23T23:23:48.406731189Z" level=info msg="Container 3cc969742bf4565a7b51c4222db92f061a0348aad57dc3e886ecbfe532a319a9: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:23:48.420234 containerd[1892]: time="2025-11-23T23:23:48.420114860Z" level=info msg="CreateContainer within sandbox \"cad57267da44bfb8a126b282fa4e84cef08604d34047cfac46760e6abc100ec8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3cc969742bf4565a7b51c4222db92f061a0348aad57dc3e886ecbfe532a319a9\"" Nov 23 23:23:48.421963 containerd[1892]: time="2025-11-23T23:23:48.421937395Z" level=info msg="StartContainer for \"3cc969742bf4565a7b51c4222db92f061a0348aad57dc3e886ecbfe532a319a9\"" Nov 23 23:23:48.423489 containerd[1892]: time="2025-11-23T23:23:48.422859871Z" level=info msg="connecting to shim 3cc969742bf4565a7b51c4222db92f061a0348aad57dc3e886ecbfe532a319a9" address="unix:///run/containerd/s/949fb2e600e8a0b8b6b5473ab202a00535f56470c8d81f0f52357f7310d4c6b8" protocol=ttrpc version=3 Nov 23 23:23:48.446482 systemd[1]: Started cri-containerd-3cc969742bf4565a7b51c4222db92f061a0348aad57dc3e886ecbfe532a319a9.scope - libcontainer container 3cc969742bf4565a7b51c4222db92f061a0348aad57dc3e886ecbfe532a319a9. 
Nov 23 23:23:48.450937 containerd[1892]: time="2025-11-23T23:23:48.450890547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56586bc88d-xbb5n,Uid:15476924-84af-4e25-8a82-221156412f5f,Namespace:calico-system,Attempt:0,} returns sandbox id \"56ff4c6dda6c383a65ef0b9897221419abbefef95e32e857c1a990c4b18676e0\"" Nov 23 23:23:48.452829 containerd[1892]: time="2025-11-23T23:23:48.452724114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:23:48.480484 containerd[1892]: time="2025-11-23T23:23:48.480412931Z" level=info msg="StartContainer for \"3cc969742bf4565a7b51c4222db92f061a0348aad57dc3e886ecbfe532a319a9\" returns successfully" Nov 23 23:23:48.686669 containerd[1892]: time="2025-11-23T23:23:48.686551017Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:48.689103 containerd[1892]: time="2025-11-23T23:23:48.689068597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:23:48.689272 containerd[1892]: time="2025-11-23T23:23:48.689076374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:23:48.689414 kubelet[3455]: E1123 23:23:48.689362 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:23:48.689687 kubelet[3455]: E1123 23:23:48.689427 3455 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:23:48.689687 kubelet[3455]: E1123 23:23:48.689564 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56586bc88d-xbb5n_calico-system(15476924-84af-4e25-8a82-221156412f5f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:48.690814 kubelet[3455]: E1123 23:23:48.690772 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-56586bc88d-xbb5n" podUID="15476924-84af-4e25-8a82-221156412f5f" Nov 23 23:23:48.762350 systemd-networkd[1465]: cali686f953970b: Gained IPv6LL Nov 23 23:23:49.018394 systemd-networkd[1465]: cali9494d5c666d: Gained IPv6LL Nov 23 23:23:49.094746 containerd[1892]: time="2025-11-23T23:23:49.094698624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-87pfq,Uid:cd7fa0b5-bc3c-4086-bd6e-a9ce06dcf4b9,Namespace:kube-system,Attempt:0,}" Nov 23 23:23:49.190936 systemd-networkd[1465]: cali626b0cf6354: Link UP Nov 23 23:23:49.191765 systemd-networkd[1465]: cali626b0cf6354: Gained carrier Nov 23 23:23:49.208452 containerd[1892]: 2025-11-23 23:23:49.133 [INFO][5116] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--87pfq-eth0 coredns-674b8bbfcf- kube-system cd7fa0b5-bc3c-4086-bd6e-a9ce06dcf4b9 811 0 2025-11-23 23:23:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.1-a-856cba2a05 coredns-674b8bbfcf-87pfq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali626b0cf6354 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02" Namespace="kube-system" Pod="coredns-674b8bbfcf-87pfq" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--87pfq-" Nov 23 23:23:49.208452 containerd[1892]: 2025-11-23 23:23:49.133 [INFO][5116] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02" Namespace="kube-system" Pod="coredns-674b8bbfcf-87pfq" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--87pfq-eth0" Nov 23 23:23:49.208452 containerd[1892]: 
2025-11-23 23:23:49.154 [INFO][5128] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02" HandleID="k8s-pod-network.118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02" Workload="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--87pfq-eth0" Nov 23 23:23:49.208452 containerd[1892]: 2025-11-23 23:23:49.155 [INFO][5128] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02" HandleID="k8s-pod-network.118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02" Workload="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--87pfq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b6c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.1-a-856cba2a05", "pod":"coredns-674b8bbfcf-87pfq", "timestamp":"2025-11-23 23:23:49.154716959 +0000 UTC"}, Hostname:"ci-4459.2.1-a-856cba2a05", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:23:49.208452 containerd[1892]: 2025-11-23 23:23:49.155 [INFO][5128] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:23:49.208452 containerd[1892]: 2025-11-23 23:23:49.155 [INFO][5128] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:23:49.208452 containerd[1892]: 2025-11-23 23:23:49.155 [INFO][5128] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-856cba2a05' Nov 23 23:23:49.208452 containerd[1892]: 2025-11-23 23:23:49.162 [INFO][5128] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:49.208452 containerd[1892]: 2025-11-23 23:23:49.166 [INFO][5128] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:49.208452 containerd[1892]: 2025-11-23 23:23:49.170 [INFO][5128] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:49.208452 containerd[1892]: 2025-11-23 23:23:49.171 [INFO][5128] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:49.208452 containerd[1892]: 2025-11-23 23:23:49.173 [INFO][5128] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:49.208452 containerd[1892]: 2025-11-23 23:23:49.173 [INFO][5128] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:49.208452 containerd[1892]: 2025-11-23 23:23:49.174 [INFO][5128] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02 Nov 23 23:23:49.208452 containerd[1892]: 2025-11-23 23:23:49.178 [INFO][5128] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:49.208452 containerd[1892]: 2025-11-23 23:23:49.186 [INFO][5128] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.12.134/26] block=192.168.12.128/26 handle="k8s-pod-network.118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:49.208452 containerd[1892]: 2025-11-23 23:23:49.186 [INFO][5128] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.134/26] handle="k8s-pod-network.118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:49.208452 containerd[1892]: 2025-11-23 23:23:49.186 [INFO][5128] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:23:49.208452 containerd[1892]: 2025-11-23 23:23:49.186 [INFO][5128] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.134/26] IPv6=[] ContainerID="118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02" HandleID="k8s-pod-network.118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02" Workload="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--87pfq-eth0" Nov 23 23:23:49.210045 containerd[1892]: 2025-11-23 23:23:49.188 [INFO][5116] cni-plugin/k8s.go 418: Populated endpoint ContainerID="118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02" Namespace="kube-system" Pod="coredns-674b8bbfcf-87pfq" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--87pfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--87pfq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cd7fa0b5-bc3c-4086-bd6e-a9ce06dcf4b9", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-856cba2a05", ContainerID:"", Pod:"coredns-674b8bbfcf-87pfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali626b0cf6354", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:49.210045 containerd[1892]: 2025-11-23 23:23:49.188 [INFO][5116] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.134/32] ContainerID="118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02" Namespace="kube-system" Pod="coredns-674b8bbfcf-87pfq" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--87pfq-eth0" Nov 23 23:23:49.210045 containerd[1892]: 2025-11-23 23:23:49.188 [INFO][5116] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali626b0cf6354 ContainerID="118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02" Namespace="kube-system" Pod="coredns-674b8bbfcf-87pfq" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--87pfq-eth0" Nov 23 23:23:49.210045 containerd[1892]: 2025-11-23 23:23:49.192 [INFO][5116] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02" Namespace="kube-system" Pod="coredns-674b8bbfcf-87pfq" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--87pfq-eth0" Nov 23 23:23:49.210045 containerd[1892]: 2025-11-23 23:23:49.192 [INFO][5116] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02" Namespace="kube-system" Pod="coredns-674b8bbfcf-87pfq" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--87pfq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--87pfq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cd7fa0b5-bc3c-4086-bd6e-a9ce06dcf4b9", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-856cba2a05", ContainerID:"118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02", Pod:"coredns-674b8bbfcf-87pfq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali626b0cf6354", 
MAC:"b6:68:b9:92:a7:d6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:49.210045 containerd[1892]: 2025-11-23 23:23:49.206 [INFO][5116] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02" Namespace="kube-system" Pod="coredns-674b8bbfcf-87pfq" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-coredns--674b8bbfcf--87pfq-eth0" Nov 23 23:23:49.215109 kubelet[3455]: E1123 23:23:49.215077 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56586bc88d-xbb5n" podUID="15476924-84af-4e25-8a82-221156412f5f" Nov 23 23:23:49.219224 kubelet[3455]: E1123 23:23:49.219080 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-7dv89" podUID="f59281aa-b935-4d43-8373-69a621420431" Nov 23 23:23:49.219569 kubelet[3455]: E1123 23:23:49.219034 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzswh" podUID="dc4e36a9-c245-455d-ada2-16c405b7bde8" Nov 23 23:23:49.278093 containerd[1892]: time="2025-11-23T23:23:49.277853171Z" level=info msg="connecting to shim 118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02" address="unix:///run/containerd/s/1fe8b43edc0d80360c4429600e26dfbfc615af1fbea9d5e2be7e080b8427869d" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:49.281677 kubelet[3455]: I1123 23:23:49.281432 3455 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dhkjl" podStartSLOduration=36.281419272 podStartE2EDuration="36.281419272s" podCreationTimestamp="2025-11-23 23:23:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:23:49.259137483 +0000 UTC m=+41.256177600" watchObservedRunningTime="2025-11-23 23:23:49.281419272 +0000 UTC m=+41.278459389" Nov 23 23:23:49.305491 systemd[1]: Started cri-containerd-118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02.scope - libcontainer container 118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02. Nov 23 23:23:49.334737 containerd[1892]: time="2025-11-23T23:23:49.334709507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-87pfq,Uid:cd7fa0b5-bc3c-4086-bd6e-a9ce06dcf4b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02\"" Nov 23 23:23:49.342715 containerd[1892]: time="2025-11-23T23:23:49.342669844Z" level=info msg="CreateContainer within sandbox \"118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:23:49.360200 containerd[1892]: time="2025-11-23T23:23:49.359894936Z" level=info msg="Container 16a8c8fa9ed3f75f58e745878ec08c54608245cd53b422a2759c95cbd16502d4: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:23:49.371698 containerd[1892]: time="2025-11-23T23:23:49.371671957Z" level=info msg="CreateContainer within sandbox \"118bbca2501b1b678d6fac259229e7991b59f072c9ee5c31632d47e2b3395a02\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"16a8c8fa9ed3f75f58e745878ec08c54608245cd53b422a2759c95cbd16502d4\"" Nov 23 23:23:49.372850 containerd[1892]: time="2025-11-23T23:23:49.372818672Z" level=info msg="StartContainer for \"16a8c8fa9ed3f75f58e745878ec08c54608245cd53b422a2759c95cbd16502d4\"" Nov 23 23:23:49.373473 containerd[1892]: time="2025-11-23T23:23:49.373450099Z" level=info msg="connecting to shim 16a8c8fa9ed3f75f58e745878ec08c54608245cd53b422a2759c95cbd16502d4" 
address="unix:///run/containerd/s/1fe8b43edc0d80360c4429600e26dfbfc615af1fbea9d5e2be7e080b8427869d" protocol=ttrpc version=3 Nov 23 23:23:49.389352 systemd[1]: Started cri-containerd-16a8c8fa9ed3f75f58e745878ec08c54608245cd53b422a2759c95cbd16502d4.scope - libcontainer container 16a8c8fa9ed3f75f58e745878ec08c54608245cd53b422a2759c95cbd16502d4. Nov 23 23:23:49.416851 containerd[1892]: time="2025-11-23T23:23:49.416812001Z" level=info msg="StartContainer for \"16a8c8fa9ed3f75f58e745878ec08c54608245cd53b422a2759c95cbd16502d4\" returns successfully" Nov 23 23:23:50.096929 containerd[1892]: time="2025-11-23T23:23:50.096618715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4f684bcb-cgczn,Uid:7afbc6db-85a8-4a42-8680-e685d44be238,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:23:50.106781 systemd-networkd[1465]: cali670b927c1db: Gained IPv6LL Nov 23 23:23:50.170471 systemd-networkd[1465]: caliaeaacfb1f95: Gained IPv6LL Nov 23 23:23:50.205629 systemd-networkd[1465]: cali4c0e3653be6: Link UP Nov 23 23:23:50.205803 systemd-networkd[1465]: cali4c0e3653be6: Gained carrier Nov 23 23:23:50.220755 kubelet[3455]: E1123 23:23:50.220719 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56586bc88d-xbb5n" podUID="15476924-84af-4e25-8a82-221156412f5f" Nov 23 23:23:50.229577 containerd[1892]: 2025-11-23 23:23:50.140 [INFO][5226] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--cgczn-eth0 calico-apiserver-5d4f684bcb- calico-apiserver 7afbc6db-85a8-4a42-8680-e685d44be238 815 0 2025-11-23 23:23:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d4f684bcb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.1-a-856cba2a05 calico-apiserver-5d4f684bcb-cgczn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4c0e3653be6 [] [] }} ContainerID="28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25" Namespace="calico-apiserver" Pod="calico-apiserver-5d4f684bcb-cgczn" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--cgczn-" Nov 23 23:23:50.229577 containerd[1892]: 2025-11-23 23:23:50.140 [INFO][5226] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25" Namespace="calico-apiserver" Pod="calico-apiserver-5d4f684bcb-cgczn" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--cgczn-eth0" Nov 23 23:23:50.229577 containerd[1892]: 2025-11-23 23:23:50.165 [INFO][5237] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25" HandleID="k8s-pod-network.28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25" Workload="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--cgczn-eth0" Nov 23 23:23:50.229577 containerd[1892]: 2025-11-23 23:23:50.165 [INFO][5237] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25" HandleID="k8s-pod-network.28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25" 
Workload="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--cgczn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3920), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.1-a-856cba2a05", "pod":"calico-apiserver-5d4f684bcb-cgczn", "timestamp":"2025-11-23 23:23:50.16565438 +0000 UTC"}, Hostname:"ci-4459.2.1-a-856cba2a05", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:23:50.229577 containerd[1892]: 2025-11-23 23:23:50.165 [INFO][5237] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:23:50.229577 containerd[1892]: 2025-11-23 23:23:50.165 [INFO][5237] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:23:50.229577 containerd[1892]: 2025-11-23 23:23:50.165 [INFO][5237] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-856cba2a05' Nov 23 23:23:50.229577 containerd[1892]: 2025-11-23 23:23:50.171 [INFO][5237] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:50.229577 containerd[1892]: 2025-11-23 23:23:50.175 [INFO][5237] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:50.229577 containerd[1892]: 2025-11-23 23:23:50.178 [INFO][5237] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:50.229577 containerd[1892]: 2025-11-23 23:23:50.179 [INFO][5237] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:50.229577 containerd[1892]: 2025-11-23 23:23:50.181 [INFO][5237] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" 
Nov 23 23:23:50.229577 containerd[1892]: 2025-11-23 23:23:50.181 [INFO][5237] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:50.229577 containerd[1892]: 2025-11-23 23:23:50.182 [INFO][5237] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25 Nov 23 23:23:50.229577 containerd[1892]: 2025-11-23 23:23:50.190 [INFO][5237] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:50.229577 containerd[1892]: 2025-11-23 23:23:50.199 [INFO][5237] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.135/26] block=192.168.12.128/26 handle="k8s-pod-network.28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:50.229577 containerd[1892]: 2025-11-23 23:23:50.199 [INFO][5237] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.135/26] handle="k8s-pod-network.28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:50.229577 containerd[1892]: 2025-11-23 23:23:50.199 [INFO][5237] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:23:50.229577 containerd[1892]: 2025-11-23 23:23:50.199 [INFO][5237] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.135/26] IPv6=[] ContainerID="28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25" HandleID="k8s-pod-network.28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25" Workload="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--cgczn-eth0" Nov 23 23:23:50.230074 containerd[1892]: 2025-11-23 23:23:50.202 [INFO][5226] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25" Namespace="calico-apiserver" Pod="calico-apiserver-5d4f684bcb-cgczn" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--cgczn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--cgczn-eth0", GenerateName:"calico-apiserver-5d4f684bcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"7afbc6db-85a8-4a42-8680-e685d44be238", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d4f684bcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-856cba2a05", ContainerID:"", Pod:"calico-apiserver-5d4f684bcb-cgczn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.12.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c0e3653be6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:50.230074 containerd[1892]: 2025-11-23 23:23:50.202 [INFO][5226] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.135/32] ContainerID="28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25" Namespace="calico-apiserver" Pod="calico-apiserver-5d4f684bcb-cgczn" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--cgczn-eth0" Nov 23 23:23:50.230074 containerd[1892]: 2025-11-23 23:23:50.202 [INFO][5226] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c0e3653be6 ContainerID="28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25" Namespace="calico-apiserver" Pod="calico-apiserver-5d4f684bcb-cgczn" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--cgczn-eth0" Nov 23 23:23:50.230074 containerd[1892]: 2025-11-23 23:23:50.205 [INFO][5226] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25" Namespace="calico-apiserver" Pod="calico-apiserver-5d4f684bcb-cgczn" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--cgczn-eth0" Nov 23 23:23:50.230074 containerd[1892]: 2025-11-23 23:23:50.206 [INFO][5226] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25" Namespace="calico-apiserver" Pod="calico-apiserver-5d4f684bcb-cgczn" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--cgczn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--cgczn-eth0", GenerateName:"calico-apiserver-5d4f684bcb-", Namespace:"calico-apiserver", SelfLink:"", UID:"7afbc6db-85a8-4a42-8680-e685d44be238", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d4f684bcb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-856cba2a05", ContainerID:"28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25", Pod:"calico-apiserver-5d4f684bcb-cgczn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c0e3653be6", MAC:"ea:5b:43:2e:fe:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:50.230074 containerd[1892]: 2025-11-23 23:23:50.227 [INFO][5226] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25" Namespace="calico-apiserver" Pod="calico-apiserver-5d4f684bcb-cgczn" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-calico--apiserver--5d4f684bcb--cgczn-eth0" Nov 23 23:23:50.261598 kubelet[3455]: I1123 23:23:50.258665 3455 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-87pfq" podStartSLOduration=37.258650574 podStartE2EDuration="37.258650574s" podCreationTimestamp="2025-11-23 23:23:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:23:50.257605478 +0000 UTC m=+42.254645595" watchObservedRunningTime="2025-11-23 23:23:50.258650574 +0000 UTC m=+42.255690699" Nov 23 23:23:50.270271 containerd[1892]: time="2025-11-23T23:23:50.270214677Z" level=info msg="connecting to shim 28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25" address="unix:///run/containerd/s/6c7aa3c309a9d4463f72b8b67ef5e8d8e5e096efa8b2f7bb8dbdb6c6eed05271" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:50.295371 systemd[1]: Started cri-containerd-28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25.scope - libcontainer container 28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25. 
Nov 23 23:23:50.329753 containerd[1892]: time="2025-11-23T23:23:50.329728181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d4f684bcb-cgczn,Uid:7afbc6db-85a8-4a42-8680-e685d44be238,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"28499df4cd0efe5f105be1d329bdb2f13b0695837f990327789df45caad99a25\"" Nov 23 23:23:50.331002 containerd[1892]: time="2025-11-23T23:23:50.330978659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:23:50.565951 containerd[1892]: time="2025-11-23T23:23:50.565895691Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:50.568532 containerd[1892]: time="2025-11-23T23:23:50.568489377Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:23:50.568598 containerd[1892]: time="2025-11-23T23:23:50.568568388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:23:50.568745 kubelet[3455]: E1123 23:23:50.568707 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:23:50.568796 kubelet[3455]: E1123 23:23:50.568753 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:23:50.569178 kubelet[3455]: E1123 23:23:50.568882 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrsrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d4f684bcb-cgczn_calico-apiserver(7afbc6db-85a8-4a42-8680-e685d44be238): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:50.570450 kubelet[3455]: E1123 23:23:50.570407 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-cgczn" podUID="7afbc6db-85a8-4a42-8680-e685d44be238" Nov 23 23:23:51.093815 containerd[1892]: time="2025-11-23T23:23:51.093755718Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-hdfkh,Uid:be1e52dc-0aab-46fe-876a-12be408713eb,Namespace:calico-system,Attempt:0,}" Nov 23 23:23:51.130427 systemd-networkd[1465]: cali626b0cf6354: Gained IPv6LL Nov 23 23:23:51.185521 systemd-networkd[1465]: cali93c1aa4b862: Link UP Nov 23 23:23:51.186928 systemd-networkd[1465]: cali93c1aa4b862: Gained carrier Nov 23 23:23:51.202489 containerd[1892]: 2025-11-23 23:23:51.122 [INFO][5301] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--856cba2a05-k8s-goldmane--666569f655--hdfkh-eth0 goldmane-666569f655- calico-system be1e52dc-0aab-46fe-876a-12be408713eb 813 0 2025-11-23 23:23:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.1-a-856cba2a05 goldmane-666569f655-hdfkh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali93c1aa4b862 [] [] }} ContainerID="0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81" Namespace="calico-system" Pod="goldmane-666569f655-hdfkh" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-goldmane--666569f655--hdfkh-" Nov 23 23:23:51.202489 containerd[1892]: 2025-11-23 23:23:51.122 [INFO][5301] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81" Namespace="calico-system" Pod="goldmane-666569f655-hdfkh" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-goldmane--666569f655--hdfkh-eth0" Nov 23 23:23:51.202489 containerd[1892]: 2025-11-23 23:23:51.145 [INFO][5312] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81" HandleID="k8s-pod-network.0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81" 
Workload="ci--4459.2.1--a--856cba2a05-k8s-goldmane--666569f655--hdfkh-eth0" Nov 23 23:23:51.202489 containerd[1892]: 2025-11-23 23:23:51.145 [INFO][5312] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81" HandleID="k8s-pod-network.0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81" Workload="ci--4459.2.1--a--856cba2a05-k8s-goldmane--666569f655--hdfkh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d38f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-a-856cba2a05", "pod":"goldmane-666569f655-hdfkh", "timestamp":"2025-11-23 23:23:51.145624422 +0000 UTC"}, Hostname:"ci-4459.2.1-a-856cba2a05", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:23:51.202489 containerd[1892]: 2025-11-23 23:23:51.145 [INFO][5312] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:23:51.202489 containerd[1892]: 2025-11-23 23:23:51.145 [INFO][5312] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:23:51.202489 containerd[1892]: 2025-11-23 23:23:51.145 [INFO][5312] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-856cba2a05' Nov 23 23:23:51.202489 containerd[1892]: 2025-11-23 23:23:51.150 [INFO][5312] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:51.202489 containerd[1892]: 2025-11-23 23:23:51.154 [INFO][5312] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:51.202489 containerd[1892]: 2025-11-23 23:23:51.157 [INFO][5312] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:51.202489 containerd[1892]: 2025-11-23 23:23:51.159 [INFO][5312] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:51.202489 containerd[1892]: 2025-11-23 23:23:51.165 [INFO][5312] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:51.202489 containerd[1892]: 2025-11-23 23:23:51.165 [INFO][5312] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:51.202489 containerd[1892]: 2025-11-23 23:23:51.167 [INFO][5312] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81 Nov 23 23:23:51.202489 containerd[1892]: 2025-11-23 23:23:51.171 [INFO][5312] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:51.202489 containerd[1892]: 2025-11-23 23:23:51.179 [INFO][5312] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.12.136/26] block=192.168.12.128/26 handle="k8s-pod-network.0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:51.202489 containerd[1892]: 2025-11-23 23:23:51.179 [INFO][5312] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.136/26] handle="k8s-pod-network.0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81" host="ci-4459.2.1-a-856cba2a05" Nov 23 23:23:51.202489 containerd[1892]: 2025-11-23 23:23:51.179 [INFO][5312] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:23:51.202489 containerd[1892]: 2025-11-23 23:23:51.179 [INFO][5312] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.136/26] IPv6=[] ContainerID="0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81" HandleID="k8s-pod-network.0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81" Workload="ci--4459.2.1--a--856cba2a05-k8s-goldmane--666569f655--hdfkh-eth0" Nov 23 23:23:51.203018 containerd[1892]: 2025-11-23 23:23:51.182 [INFO][5301] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81" Namespace="calico-system" Pod="goldmane-666569f655-hdfkh" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-goldmane--666569f655--hdfkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--856cba2a05-k8s-goldmane--666569f655--hdfkh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"be1e52dc-0aab-46fe-876a-12be408713eb", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-856cba2a05", ContainerID:"", Pod:"goldmane-666569f655-hdfkh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali93c1aa4b862", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:51.203018 containerd[1892]: 2025-11-23 23:23:51.182 [INFO][5301] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.136/32] ContainerID="0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81" Namespace="calico-system" Pod="goldmane-666569f655-hdfkh" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-goldmane--666569f655--hdfkh-eth0" Nov 23 23:23:51.203018 containerd[1892]: 2025-11-23 23:23:51.182 [INFO][5301] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93c1aa4b862 ContainerID="0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81" Namespace="calico-system" Pod="goldmane-666569f655-hdfkh" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-goldmane--666569f655--hdfkh-eth0" Nov 23 23:23:51.203018 containerd[1892]: 2025-11-23 23:23:51.187 [INFO][5301] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81" Namespace="calico-system" Pod="goldmane-666569f655-hdfkh" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-goldmane--666569f655--hdfkh-eth0" Nov 23 23:23:51.203018 containerd[1892]: 2025-11-23 23:23:51.187 [INFO][5301] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81" Namespace="calico-system" Pod="goldmane-666569f655-hdfkh" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-goldmane--666569f655--hdfkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--856cba2a05-k8s-goldmane--666569f655--hdfkh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"be1e52dc-0aab-46fe-876a-12be408713eb", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-856cba2a05", ContainerID:"0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81", Pod:"goldmane-666569f655-hdfkh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali93c1aa4b862", MAC:"c2:39:5d:5e:b0:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:51.203018 containerd[1892]: 2025-11-23 23:23:51.200 [INFO][5301] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81" Namespace="calico-system" Pod="goldmane-666569f655-hdfkh" WorkloadEndpoint="ci--4459.2.1--a--856cba2a05-k8s-goldmane--666569f655--hdfkh-eth0" Nov 23 23:23:51.223424 kubelet[3455]: E1123 23:23:51.223376 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-cgczn" podUID="7afbc6db-85a8-4a42-8680-e685d44be238" Nov 23 23:23:51.249962 containerd[1892]: time="2025-11-23T23:23:51.249886885Z" level=info msg="connecting to shim 0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81" address="unix:///run/containerd/s/56f7d819e51edc361f55b9e03ecf2d2f9da6dc9c77547fa64f9ee99882999ada" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:51.280375 systemd[1]: Started cri-containerd-0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81.scope - libcontainer container 0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81. 
Nov 23 23:23:51.317266 containerd[1892]: time="2025-11-23T23:23:51.317221114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-hdfkh,Uid:be1e52dc-0aab-46fe-876a-12be408713eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"0a85c7e585a52698cc43f4a2ebd339f939df8b31424c5c48112bd70a1f196d81\"" Nov 23 23:23:51.318643 containerd[1892]: time="2025-11-23T23:23:51.318596372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:23:51.572784 containerd[1892]: time="2025-11-23T23:23:51.572728532Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:51.575724 containerd[1892]: time="2025-11-23T23:23:51.575694446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:23:51.575805 containerd[1892]: time="2025-11-23T23:23:51.575767976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:23:51.575955 kubelet[3455]: E1123 23:23:51.575918 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:23:51.576003 kubelet[3455]: E1123 23:23:51.575964 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:23:51.576123 kubelet[3455]: E1123 23:23:51.576065 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kf8jh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hdfkh_calico-system(be1e52dc-0aab-46fe-876a-12be408713eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:51.577434 kubelet[3455]: E1123 23:23:51.577401 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hdfkh" podUID="be1e52dc-0aab-46fe-876a-12be408713eb" Nov 23 23:23:51.578460 systemd-networkd[1465]: cali4c0e3653be6: Gained IPv6LL Nov 23 23:23:52.225765 kubelet[3455]: E1123 23:23:52.225280 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hdfkh" podUID="be1e52dc-0aab-46fe-876a-12be408713eb" Nov 23 23:23:52.226630 kubelet[3455]: E1123 23:23:52.225827 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-cgczn" podUID="7afbc6db-85a8-4a42-8680-e685d44be238" Nov 23 23:23:52.410472 systemd-networkd[1465]: cali93c1aa4b862: Gained IPv6LL Nov 23 23:23:53.226739 kubelet[3455]: E1123 23:23:53.226703 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hdfkh" podUID="be1e52dc-0aab-46fe-876a-12be408713eb" Nov 23 23:23:57.095259 containerd[1892]: time="2025-11-23T23:23:57.095215699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:23:57.356456 containerd[1892]: time="2025-11-23T23:23:57.356332607Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:57.358789 containerd[1892]: time="2025-11-23T23:23:57.358746613Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:23:57.358876 containerd[1892]: time="2025-11-23T23:23:57.358830656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:23:57.359050 kubelet[3455]: E1123 23:23:57.358975 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:23:57.359050 kubelet[3455]: E1123 23:23:57.359038 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:23:57.359527 kubelet[3455]: E1123 23:23:57.359302 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1c6e586d421c470cad8a5776d76af0cb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fk6ms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84cffccc6c-swdgq_calico-system(f26811a6-5336-4c1a-bfb3-9c8fe093c60c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:57.361999 containerd[1892]: time="2025-11-23T23:23:57.361978246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 
23:23:57.590135 containerd[1892]: time="2025-11-23T23:23:57.590054596Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:57.593486 containerd[1892]: time="2025-11-23T23:23:57.593450114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:23:57.593592 containerd[1892]: time="2025-11-23T23:23:57.593456810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:23:57.593670 kubelet[3455]: E1123 23:23:57.593632 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:23:57.593747 kubelet[3455]: E1123 23:23:57.593676 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:23:57.594657 kubelet[3455]: E1123 23:23:57.593792 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk6ms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84cffccc6c-swdgq_calico-system(f26811a6-5336-4c1a-bfb3-9c8fe093c60c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:57.595394 kubelet[3455]: E1123 23:23:57.595366 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84cffccc6c-swdgq" podUID="f26811a6-5336-4c1a-bfb3-9c8fe093c60c" Nov 23 23:24:00.094749 containerd[1892]: time="2025-11-23T23:24:00.094410245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:24:00.331454 containerd[1892]: time="2025-11-23T23:24:00.331356122Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:00.334257 containerd[1892]: time="2025-11-23T23:24:00.334162949Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:24:00.334257 containerd[1892]: time="2025-11-23T23:24:00.334225015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:24:00.334397 
kubelet[3455]: E1123 23:24:00.334355 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:24:00.335004 kubelet[3455]: E1123 23:24:00.334399 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:24:00.335004 kubelet[3455]: E1123 23:24:00.334580 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlrtt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d4f684bcb-7dv89_calico-apiserver(f59281aa-b935-4d43-8373-69a621420431): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:00.335100 containerd[1892]: time="2025-11-23T23:24:00.334613419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:24:00.336414 kubelet[3455]: E1123 23:24:00.336378 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-7dv89" podUID="f59281aa-b935-4d43-8373-69a621420431" Nov 23 23:24:00.572770 containerd[1892]: time="2025-11-23T23:24:00.572726735Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:00.575335 containerd[1892]: time="2025-11-23T23:24:00.575299938Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:24:00.575388 containerd[1892]: time="2025-11-23T23:24:00.575378429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:24:00.575539 kubelet[3455]: E1123 23:24:00.575500 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:24:00.575592 kubelet[3455]: E1123 23:24:00.575550 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:24:00.576326 kubelet[3455]: E1123 23:24:00.575690 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptkmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivileg
eEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzswh_calico-system(dc4e36a9-c245-455d-ada2-16c405b7bde8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:00.578434 containerd[1892]: time="2025-11-23T23:24:00.578410255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:24:00.893230 containerd[1892]: time="2025-11-23T23:24:00.893115204Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:00.896237 containerd[1892]: time="2025-11-23T23:24:00.896196032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:24:00.896323 containerd[1892]: time="2025-11-23T23:24:00.896307092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:24:00.896686 kubelet[3455]: E1123 23:24:00.896452 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:24:00.896686 kubelet[3455]: E1123 23:24:00.896502 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:24:00.896686 kubelet[3455]: E1123 23:24:00.896618 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptkmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminat
ionMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzswh_calico-system(dc4e36a9-c245-455d-ada2-16c405b7bde8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:00.897984 kubelet[3455]: E1123 23:24:00.897950 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzswh" podUID="dc4e36a9-c245-455d-ada2-16c405b7bde8" Nov 23 23:24:03.095338 containerd[1892]: time="2025-11-23T23:24:03.095088296Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:24:03.359102 containerd[1892]: time="2025-11-23T23:24:03.358930044Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:03.362435 containerd[1892]: time="2025-11-23T23:24:03.362346643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:24:03.362435 containerd[1892]: time="2025-11-23T23:24:03.362403821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:24:03.362678 kubelet[3455]: E1123 23:24:03.362640 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:24:03.363050 kubelet[3455]: E1123 23:24:03.362682 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:24:03.363050 kubelet[3455]: E1123 23:24:03.362798 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrsrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d4f684bcb-cgczn_calico-apiserver(7afbc6db-85a8-4a42-8680-e685d44be238): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:03.363953 kubelet[3455]: E1123 23:24:03.363914 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-cgczn" podUID="7afbc6db-85a8-4a42-8680-e685d44be238" Nov 23 23:24:04.095561 containerd[1892]: time="2025-11-23T23:24:04.095443274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:24:04.493104 containerd[1892]: time="2025-11-23T23:24:04.492944902Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:04.496270 containerd[1892]: time="2025-11-23T23:24:04.496169180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:24:04.496270 containerd[1892]: time="2025-11-23T23:24:04.496231926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:24:04.496411 kubelet[3455]: E1123 23:24:04.496374 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:24:04.497496 kubelet[3455]: E1123 23:24:04.496420 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:24:04.497496 kubelet[3455]: E1123 23:24:04.496537 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},
},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56586bc88d-xbb5n_calico-system(15476924-84af-4e25-8a82-221156412f5f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:04.497696 kubelet[3455]: E1123 23:24:04.497662 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56586bc88d-xbb5n" podUID="15476924-84af-4e25-8a82-221156412f5f" Nov 23 23:24:07.095514 containerd[1892]: time="2025-11-23T23:24:07.094983091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:24:07.368785 containerd[1892]: time="2025-11-23T23:24:07.368660156Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:07.371527 containerd[1892]: time="2025-11-23T23:24:07.371490134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:24:07.371581 containerd[1892]: time="2025-11-23T23:24:07.371567888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:24:07.371745 kubelet[3455]: E1123 23:24:07.371701 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:24:07.372101 kubelet[3455]: E1123 23:24:07.371841 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:24:07.372488 kubelet[3455]: E1123 23:24:07.372362 3455 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kf8jh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hdfkh_calico-system(be1e52dc-0aab-46fe-876a-12be408713eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:07.374321 kubelet[3455]: E1123 23:24:07.374282 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hdfkh" podUID="be1e52dc-0aab-46fe-876a-12be408713eb" Nov 23 23:24:11.094960 kubelet[3455]: E1123 23:24:11.094774 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-7dv89" podUID="f59281aa-b935-4d43-8373-69a621420431" Nov 23 23:24:11.095835 kubelet[3455]: E1123 23:24:11.095321 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84cffccc6c-swdgq" podUID="f26811a6-5336-4c1a-bfb3-9c8fe093c60c" Nov 23 23:24:13.094977 kubelet[3455]: E1123 23:24:13.094797 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzswh" podUID="dc4e36a9-c245-455d-ada2-16c405b7bde8" Nov 23 23:24:18.095407 kubelet[3455]: E1123 23:24:18.095353 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-cgczn" podUID="7afbc6db-85a8-4a42-8680-e685d44be238" Nov 23 23:24:19.095879 kubelet[3455]: E1123 23:24:19.095733 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56586bc88d-xbb5n" podUID="15476924-84af-4e25-8a82-221156412f5f" Nov 23 23:24:22.094647 kubelet[3455]: E1123 23:24:22.094597 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hdfkh" podUID="be1e52dc-0aab-46fe-876a-12be408713eb" Nov 23 23:24:22.095609 containerd[1892]: time="2025-11-23T23:24:22.095573170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:24:22.352275 containerd[1892]: time="2025-11-23T23:24:22.351055590Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:22.354075 containerd[1892]: time="2025-11-23T23:24:22.354004395Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:24:22.354075 containerd[1892]: time="2025-11-23T23:24:22.354041396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:24:22.354354 kubelet[3455]: E1123 23:24:22.354167 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:24:22.354354 kubelet[3455]: E1123 23:24:22.354209 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:24:22.354354 kubelet[3455]: E1123 23:24:22.354320 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1c6e586d421c470cad8a5776d76af0cb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fk6ms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84cffccc6c-swdgq_calico-system(f26811a6-5336-4c1a-bfb3-9c8fe093c60c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:22.357491 containerd[1892]: time="2025-11-23T23:24:22.357278842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:24:22.607078 containerd[1892]: time="2025-11-23T23:24:22.606954447Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:22.611736 containerd[1892]: time="2025-11-23T23:24:22.611635202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:24:22.611736 containerd[1892]: time="2025-11-23T23:24:22.611714581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:24:22.612000 kubelet[3455]: E1123 23:24:22.611946 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:24:22.612054 kubelet[3455]: E1123 23:24:22.612011 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:24:22.612159 kubelet[3455]: E1123 23:24:22.612116 3455 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk6ms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84cffccc6c-swdgq_calico-system(f26811a6-5336-4c1a-bfb3-9c8fe093c60c): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:22.613438 kubelet[3455]: E1123 23:24:22.613401 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84cffccc6c-swdgq" podUID="f26811a6-5336-4c1a-bfb3-9c8fe093c60c" Nov 23 23:24:23.095149 containerd[1892]: time="2025-11-23T23:24:23.095110071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:24:23.352688 containerd[1892]: time="2025-11-23T23:24:23.352423812Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:23.355039 containerd[1892]: time="2025-11-23T23:24:23.354860241Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:24:23.355039 containerd[1892]: time="2025-11-23T23:24:23.354938444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active 
requests=0, bytes read=77" Nov 23 23:24:23.356262 kubelet[3455]: E1123 23:24:23.355547 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:24:23.356262 kubelet[3455]: E1123 23:24:23.355588 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:24:23.356262 kubelet[3455]: E1123 23:24:23.355692 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlrtt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d4f684bcb-7dv89_calico-apiserver(f59281aa-b935-4d43-8373-69a621420431): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:23.357072 kubelet[3455]: E1123 23:24:23.357046 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-7dv89" podUID="f59281aa-b935-4d43-8373-69a621420431" Nov 23 23:24:26.095468 containerd[1892]: time="2025-11-23T23:24:26.095383568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:24:26.446461 containerd[1892]: time="2025-11-23T23:24:26.446210916Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:26.449007 containerd[1892]: time="2025-11-23T23:24:26.448903185Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:24:26.449007 containerd[1892]: time="2025-11-23T23:24:26.448981548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:24:26.449166 kubelet[3455]: E1123 23:24:26.449127 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:24:26.449456 kubelet[3455]: E1123 23:24:26.449176 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:24:26.449497 kubelet[3455]: E1123 23:24:26.449303 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptkmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivileg
eEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzswh_calico-system(dc4e36a9-c245-455d-ada2-16c405b7bde8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:26.451311 containerd[1892]: time="2025-11-23T23:24:26.451295773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:24:26.694958 containerd[1892]: time="2025-11-23T23:24:26.694666130Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:26.697413 containerd[1892]: time="2025-11-23T23:24:26.697256396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:24:26.697413 containerd[1892]: time="2025-11-23T23:24:26.697333718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:24:26.697916 kubelet[3455]: E1123 23:24:26.697765 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:24:26.697972 kubelet[3455]: E1123 23:24:26.697926 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:24:26.699267 kubelet[3455]: E1123 23:24:26.698148 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptkmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminat
ionMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzswh_calico-system(dc4e36a9-c245-455d-ada2-16c405b7bde8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:26.703772 kubelet[3455]: E1123 23:24:26.699300 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzswh" podUID="dc4e36a9-c245-455d-ada2-16c405b7bde8" Nov 23 23:24:29.095517 containerd[1892]: time="2025-11-23T23:24:29.095274926Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:24:29.385098 containerd[1892]: time="2025-11-23T23:24:29.384855082Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:29.387255 containerd[1892]: time="2025-11-23T23:24:29.387184826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:24:29.387319 containerd[1892]: time="2025-11-23T23:24:29.387283949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:24:29.387560 kubelet[3455]: E1123 23:24:29.387513 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:24:29.387875 kubelet[3455]: E1123 23:24:29.387564 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:24:29.387875 kubelet[3455]: E1123 23:24:29.387673 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrsrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d4f684bcb-cgczn_calico-apiserver(7afbc6db-85a8-4a42-8680-e685d44be238): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:29.388929 kubelet[3455]: E1123 23:24:29.388837 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-cgczn" podUID="7afbc6db-85a8-4a42-8680-e685d44be238" Nov 23 23:24:30.095780 containerd[1892]: time="2025-11-23T23:24:30.095338319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:24:30.478037 containerd[1892]: time="2025-11-23T23:24:30.477909863Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:30.480742 containerd[1892]: time="2025-11-23T23:24:30.480699693Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:24:30.480811 containerd[1892]: time="2025-11-23T23:24:30.480785944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:24:30.480992 kubelet[3455]: E1123 23:24:30.480948 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:24:30.481310 kubelet[3455]: E1123 23:24:30.480999 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:24:30.481310 kubelet[3455]: E1123 23:24:30.481111 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},
},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56586bc88d-xbb5n_calico-system(15476924-84af-4e25-8a82-221156412f5f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:30.482405 kubelet[3455]: E1123 23:24:30.482288 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56586bc88d-xbb5n" podUID="15476924-84af-4e25-8a82-221156412f5f" Nov 23 23:24:35.095130 kubelet[3455]: E1123 23:24:35.095079 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-7dv89" podUID="f59281aa-b935-4d43-8373-69a621420431" Nov 23 23:24:36.098048 containerd[1892]: time="2025-11-23T23:24:36.097997231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:24:36.098574 kubelet[3455]: E1123 23:24:36.098544 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-84cffccc6c-swdgq" podUID="f26811a6-5336-4c1a-bfb3-9c8fe093c60c" Nov 23 23:24:36.364463 containerd[1892]: time="2025-11-23T23:24:36.364219835Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:36.366829 containerd[1892]: time="2025-11-23T23:24:36.366771451Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:24:36.366988 containerd[1892]: time="2025-11-23T23:24:36.366810741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:24:36.367121 kubelet[3455]: E1123 23:24:36.367068 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:24:36.367121 kubelet[3455]: E1123 23:24:36.367117 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:24:36.367275 kubelet[3455]: E1123 23:24:36.367221 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kf8jh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hdfkh_calico-system(be1e52dc-0aab-46fe-876a-12be408713eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:36.368493 kubelet[3455]: E1123 23:24:36.368446 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hdfkh" podUID="be1e52dc-0aab-46fe-876a-12be408713eb" Nov 23 23:24:40.095075 kubelet[3455]: E1123 23:24:40.095027 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzswh" podUID="dc4e36a9-c245-455d-ada2-16c405b7bde8" Nov 23 23:24:42.096168 kubelet[3455]: E1123 23:24:42.096126 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-cgczn" podUID="7afbc6db-85a8-4a42-8680-e685d44be238" Nov 23 23:24:44.094623 kubelet[3455]: E1123 23:24:44.094579 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found\"" pod="calico-system/calico-kube-controllers-56586bc88d-xbb5n" podUID="15476924-84af-4e25-8a82-221156412f5f" Nov 23 23:24:48.095630 kubelet[3455]: E1123 23:24:48.095569 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-7dv89" podUID="f59281aa-b935-4d43-8373-69a621420431" Nov 23 23:24:48.097168 kubelet[3455]: E1123 23:24:48.095875 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hdfkh" podUID="be1e52dc-0aab-46fe-876a-12be408713eb" Nov 23 23:24:49.096287 kubelet[3455]: E1123 23:24:49.095943 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84cffccc6c-swdgq" podUID="f26811a6-5336-4c1a-bfb3-9c8fe093c60c" Nov 23 23:24:49.974387 systemd[1]: Started sshd@7-10.200.20.43:22-10.200.16.10:49072.service - OpenSSH per-connection server daemon (10.200.16.10:49072). Nov 23 23:24:50.403126 sshd[5501]: Accepted publickey for core from 10.200.16.10 port 49072 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:24:50.406082 sshd-session[5501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:24:50.413058 systemd-logind[1871]: New session 10 of user core. Nov 23 23:24:50.417369 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 23 23:24:50.901934 sshd[5504]: Connection closed by 10.200.16.10 port 49072 Nov 23 23:24:50.904931 sshd-session[5501]: pam_unix(sshd:session): session closed for user core Nov 23 23:24:50.910810 systemd[1]: sshd@7-10.200.20.43:22-10.200.16.10:49072.service: Deactivated successfully. Nov 23 23:24:50.915904 systemd[1]: session-10.scope: Deactivated successfully. Nov 23 23:24:50.917439 systemd-logind[1871]: Session 10 logged out. Waiting for processes to exit. Nov 23 23:24:50.920471 systemd-logind[1871]: Removed session 10. 
Nov 23 23:24:53.097001 kubelet[3455]: E1123 23:24:53.096953 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzswh" podUID="dc4e36a9-c245-455d-ada2-16c405b7bde8" Nov 23 23:24:55.095317 kubelet[3455]: E1123 23:24:55.095131 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-cgczn" podUID="7afbc6db-85a8-4a42-8680-e685d44be238" Nov 23 23:24:56.001443 systemd[1]: Started sshd@8-10.200.20.43:22-10.200.16.10:49086.service - OpenSSH per-connection server daemon (10.200.16.10:49086). 
Nov 23 23:24:56.097071 kubelet[3455]: E1123 23:24:56.096223 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56586bc88d-xbb5n" podUID="15476924-84af-4e25-8a82-221156412f5f" Nov 23 23:24:56.490823 sshd[5517]: Accepted publickey for core from 10.200.16.10 port 49086 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:24:56.492757 sshd-session[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:24:56.498312 systemd-logind[1871]: New session 11 of user core. Nov 23 23:24:56.503505 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 23 23:24:56.904284 sshd[5520]: Connection closed by 10.200.16.10 port 49086 Nov 23 23:24:56.902071 sshd-session[5517]: pam_unix(sshd:session): session closed for user core Nov 23 23:24:56.906978 systemd[1]: sshd@8-10.200.20.43:22-10.200.16.10:49086.service: Deactivated successfully. Nov 23 23:24:56.910711 systemd[1]: session-11.scope: Deactivated successfully. Nov 23 23:24:56.911835 systemd-logind[1871]: Session 11 logged out. Waiting for processes to exit. Nov 23 23:24:56.913235 systemd-logind[1871]: Removed session 11. 
Nov 23 23:24:59.095408 kubelet[3455]: E1123 23:24:59.095334 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hdfkh" podUID="be1e52dc-0aab-46fe-876a-12be408713eb" Nov 23 23:25:01.094934 kubelet[3455]: E1123 23:25:01.094890 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-7dv89" podUID="f59281aa-b935-4d43-8373-69a621420431" Nov 23 23:25:01.998134 systemd[1]: Started sshd@9-10.200.20.43:22-10.200.16.10:54980.service - OpenSSH per-connection server daemon (10.200.16.10:54980). Nov 23 23:25:02.453976 sshd[5534]: Accepted publickey for core from 10.200.16.10 port 54980 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:25:02.455151 sshd-session[5534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:25:02.459501 systemd-logind[1871]: New session 12 of user core. Nov 23 23:25:02.464355 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 23 23:25:02.853129 sshd[5537]: Connection closed by 10.200.16.10 port 54980 Nov 23 23:25:02.868289 sshd-session[5534]: pam_unix(sshd:session): session closed for user core Nov 23 23:25:02.871704 systemd[1]: sshd@9-10.200.20.43:22-10.200.16.10:54980.service: Deactivated successfully. Nov 23 23:25:02.873158 systemd[1]: session-12.scope: Deactivated successfully. Nov 23 23:25:02.873792 systemd-logind[1871]: Session 12 logged out. Waiting for processes to exit. Nov 23 23:25:02.874844 systemd-logind[1871]: Removed session 12. Nov 23 23:25:02.934803 systemd[1]: Started sshd@10-10.200.20.43:22-10.200.16.10:54990.service - OpenSSH per-connection server daemon (10.200.16.10:54990). Nov 23 23:25:03.408316 sshd[5555]: Accepted publickey for core from 10.200.16.10 port 54990 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:25:03.409394 sshd-session[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:25:03.413989 systemd-logind[1871]: New session 13 of user core. Nov 23 23:25:03.420366 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 23 23:25:03.805272 sshd[5560]: Connection closed by 10.200.16.10 port 54990 Nov 23 23:25:03.805386 sshd-session[5555]: pam_unix(sshd:session): session closed for user core Nov 23 23:25:03.809834 systemd-logind[1871]: Session 13 logged out. Waiting for processes to exit. Nov 23 23:25:03.810475 systemd[1]: sshd@10-10.200.20.43:22-10.200.16.10:54990.service: Deactivated successfully. Nov 23 23:25:03.812226 systemd[1]: session-13.scope: Deactivated successfully. Nov 23 23:25:03.814625 systemd-logind[1871]: Removed session 13. Nov 23 23:25:03.887829 systemd[1]: Started sshd@11-10.200.20.43:22-10.200.16.10:54994.service - OpenSSH per-connection server daemon (10.200.16.10:54994). 
Nov 23 23:25:04.096018 containerd[1892]: time="2025-11-23T23:25:04.095496922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:25:04.316993 sshd[5570]: Accepted publickey for core from 10.200.16.10 port 54994 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:25:04.318110 sshd-session[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:25:04.321689 systemd-logind[1871]: New session 14 of user core. Nov 23 23:25:04.326514 containerd[1892]: time="2025-11-23T23:25:04.326478156Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:25:04.327484 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 23 23:25:04.329453 containerd[1892]: time="2025-11-23T23:25:04.329410863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:25:04.330137 containerd[1892]: time="2025-11-23T23:25:04.329498378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:25:04.330185 kubelet[3455]: E1123 23:25:04.329607 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:25:04.330185 kubelet[3455]: E1123 23:25:04.329656 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:25:04.332636 kubelet[3455]: E1123 23:25:04.332585 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1c6e586d421c470cad8a5776d76af0cb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fk6ms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84cffccc6c-swdgq_calico-system(f26811a6-5336-4c1a-bfb3-9c8fe093c60c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:25:04.335942 containerd[1892]: time="2025-11-23T23:25:04.335921217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:25:04.588396 containerd[1892]: time="2025-11-23T23:25:04.588351316Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:25:04.590755 containerd[1892]: time="2025-11-23T23:25:04.590712765Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:25:04.590822 containerd[1892]: time="2025-11-23T23:25:04.590794071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:25:04.591295 kubelet[3455]: E1123 23:25:04.590941 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:25:04.591295 kubelet[3455]: E1123 23:25:04.590996 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:25:04.591295 kubelet[3455]: E1123 23:25:04.591113 3455 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fk6ms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84cffccc6c-swdgq_calico-system(f26811a6-5336-4c1a-bfb3-9c8fe093c60c): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:25:04.592414 kubelet[3455]: E1123 23:25:04.592372 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84cffccc6c-swdgq" podUID="f26811a6-5336-4c1a-bfb3-9c8fe093c60c" Nov 23 23:25:04.687096 sshd[5573]: Connection closed by 10.200.16.10 port 54994 Nov 23 23:25:04.686474 sshd-session[5570]: pam_unix(sshd:session): session closed for user core Nov 23 23:25:04.689741 systemd-logind[1871]: Session 14 logged out. Waiting for processes to exit. Nov 23 23:25:04.690584 systemd[1]: sshd@11-10.200.20.43:22-10.200.16.10:54994.service: Deactivated successfully. Nov 23 23:25:04.692577 systemd[1]: session-14.scope: Deactivated successfully. Nov 23 23:25:04.694101 systemd-logind[1871]: Removed session 14. 
Nov 23 23:25:06.096179 kubelet[3455]: E1123 23:25:06.096049 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-cgczn" podUID="7afbc6db-85a8-4a42-8680-e685d44be238" Nov 23 23:25:07.094348 containerd[1892]: time="2025-11-23T23:25:07.094159642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:25:07.095039 kubelet[3455]: E1123 23:25:07.094290 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56586bc88d-xbb5n" podUID="15476924-84af-4e25-8a82-221156412f5f" Nov 23 23:25:07.348536 containerd[1892]: time="2025-11-23T23:25:07.348125437Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:25:07.350862 containerd[1892]: time="2025-11-23T23:25:07.350824625Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 
23:25:07.350862 containerd[1892]: time="2025-11-23T23:25:07.350881826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:25:07.351421 kubelet[3455]: E1123 23:25:07.351372 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:25:07.351717 kubelet[3455]: E1123 23:25:07.351431 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:25:07.351717 kubelet[3455]: E1123 23:25:07.351534 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptkmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-mzswh_calico-system(dc4e36a9-c245-455d-ada2-16c405b7bde8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:25:07.353653 containerd[1892]: time="2025-11-23T23:25:07.353632896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:25:07.647425 containerd[1892]: time="2025-11-23T23:25:07.647010719Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:25:07.649636 containerd[1892]: time="2025-11-23T23:25:07.649534317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:25:07.649636 containerd[1892]: time="2025-11-23T23:25:07.649612039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:25:07.650073 kubelet[3455]: E1123 23:25:07.649885 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:25:07.650073 kubelet[3455]: E1123 23:25:07.649930 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:25:07.650073 kubelet[3455]: E1123 
23:25:07.650036 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ptkmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-mzswh_calico-system(dc4e36a9-c245-455d-ada2-16c405b7bde8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:25:07.651447 kubelet[3455]: E1123 23:25:07.651298 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzswh" podUID="dc4e36a9-c245-455d-ada2-16c405b7bde8" Nov 23 23:25:09.770387 systemd[1]: Started sshd@12-10.200.20.43:22-10.200.16.10:55002.service - OpenSSH per-connection server daemon (10.200.16.10:55002). 
Nov 23 23:25:10.096138 kubelet[3455]: E1123 23:25:10.095636 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hdfkh" podUID="be1e52dc-0aab-46fe-876a-12be408713eb" Nov 23 23:25:10.192182 sshd[5591]: Accepted publickey for core from 10.200.16.10 port 55002 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:25:10.193580 sshd-session[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:25:10.197303 systemd-logind[1871]: New session 15 of user core. Nov 23 23:25:10.201380 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 23 23:25:10.543865 sshd[5594]: Connection closed by 10.200.16.10 port 55002 Nov 23 23:25:10.544390 sshd-session[5591]: pam_unix(sshd:session): session closed for user core Nov 23 23:25:10.547278 systemd[1]: sshd@12-10.200.20.43:22-10.200.16.10:55002.service: Deactivated successfully. Nov 23 23:25:10.549003 systemd[1]: session-15.scope: Deactivated successfully. Nov 23 23:25:10.550725 systemd-logind[1871]: Session 15 logged out. Waiting for processes to exit. Nov 23 23:25:10.552670 systemd-logind[1871]: Removed session 15. 
Nov 23 23:25:14.095056 containerd[1892]: time="2025-11-23T23:25:14.095000595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:25:14.364517 containerd[1892]: time="2025-11-23T23:25:14.364346715Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:25:14.367384 containerd[1892]: time="2025-11-23T23:25:14.367277404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:25:14.367384 containerd[1892]: time="2025-11-23T23:25:14.367362470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:25:14.367669 kubelet[3455]: E1123 23:25:14.367621 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:25:14.368063 kubelet[3455]: E1123 23:25:14.367762 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:25:14.368234 kubelet[3455]: E1123 23:25:14.368193 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hlrtt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-5d4f684bcb-7dv89_calico-apiserver(f59281aa-b935-4d43-8373-69a621420431): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:25:14.369370 kubelet[3455]: E1123 23:25:14.369332 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-7dv89" podUID="f59281aa-b935-4d43-8373-69a621420431" Nov 23 23:25:15.623035 systemd[1]: Started sshd@13-10.200.20.43:22-10.200.16.10:58006.service - OpenSSH per-connection server daemon (10.200.16.10:58006). Nov 23 23:25:16.041254 sshd[5641]: Accepted publickey for core from 10.200.16.10 port 58006 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:25:16.042037 sshd-session[5641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:25:16.050433 systemd-logind[1871]: New session 16 of user core. Nov 23 23:25:16.052421 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 23 23:25:16.099167 kubelet[3455]: E1123 23:25:16.099050 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84cffccc6c-swdgq" podUID="f26811a6-5336-4c1a-bfb3-9c8fe093c60c" Nov 23 23:25:16.409732 sshd[5658]: Connection closed by 10.200.16.10 port 58006 Nov 23 23:25:16.411058 sshd-session[5641]: pam_unix(sshd:session): session closed for user core Nov 23 23:25:16.415056 systemd[1]: sshd@13-10.200.20.43:22-10.200.16.10:58006.service: Deactivated successfully. Nov 23 23:25:16.415107 systemd-logind[1871]: Session 16 logged out. Waiting for processes to exit. Nov 23 23:25:16.416861 systemd[1]: session-16.scope: Deactivated successfully. Nov 23 23:25:16.418196 systemd-logind[1871]: Removed session 16. 
Nov 23 23:25:21.096255 containerd[1892]: time="2025-11-23T23:25:21.096208535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:25:21.097805 kubelet[3455]: E1123 23:25:21.097766 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzswh" podUID="dc4e36a9-c245-455d-ada2-16c405b7bde8" Nov 23 23:25:21.370354 containerd[1892]: time="2025-11-23T23:25:21.370208678Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:25:21.372934 containerd[1892]: time="2025-11-23T23:25:21.372895951Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:25:21.373010 containerd[1892]: time="2025-11-23T23:25:21.372973961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:25:21.373147 kubelet[3455]: 
E1123 23:25:21.373108 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:25:21.373187 kubelet[3455]: E1123 23:25:21.373154 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:25:21.373480 kubelet[3455]: E1123 23:25:21.373432 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOn
ly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kf8jh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-hdfkh_calico-system(be1e52dc-0aab-46fe-876a-12be408713eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:25:21.373593 containerd[1892]: time="2025-11-23T23:25:21.373536732Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:25:21.374858 kubelet[3455]: E1123 23:25:21.374792 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hdfkh" podUID="be1e52dc-0aab-46fe-876a-12be408713eb" Nov 23 23:25:21.487703 systemd[1]: Started sshd@14-10.200.20.43:22-10.200.16.10:35030.service - OpenSSH per-connection server daemon (10.200.16.10:35030). Nov 23 23:25:21.612474 containerd[1892]: time="2025-11-23T23:25:21.612428993Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:25:21.615025 containerd[1892]: time="2025-11-23T23:25:21.614947990Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:25:21.615025 containerd[1892]: time="2025-11-23T23:25:21.614997711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:25:21.615295 kubelet[3455]: E1123 23:25:21.615205 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:25:21.615295 kubelet[3455]: E1123 
23:25:21.615268 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:25:21.615543 kubelet[3455]: E1123 23:25:21.615478 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r77bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56586bc88d-xbb5n_calico-system(15476924-84af-4e25-8a82-221156412f5f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:25:21.615769 containerd[1892]: time="2025-11-23T23:25:21.615658947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:25:21.616599 kubelet[3455]: E1123 23:25:21.616563 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56586bc88d-xbb5n" podUID="15476924-84af-4e25-8a82-221156412f5f" Nov 23 23:25:21.853260 containerd[1892]: time="2025-11-23T23:25:21.853203326Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:25:21.855757 containerd[1892]: time="2025-11-23T23:25:21.855719482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:25:21.855819 containerd[1892]: time="2025-11-23T23:25:21.855780788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:25:21.856059 kubelet[3455]: E1123 23:25:21.856021 3455 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:25:21.856118 kubelet[3455]: E1123 23:25:21.856070 3455 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:25:21.856207 kubelet[3455]: E1123 23:25:21.856174 3455 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrsrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5d4f684bcb-cgczn_calico-apiserver(7afbc6db-85a8-4a42-8680-e685d44be238): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:25:21.857873 kubelet[3455]: E1123 23:25:21.857592 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-cgczn" podUID="7afbc6db-85a8-4a42-8680-e685d44be238"
Nov 23 23:25:21.909470 sshd[5670]: Accepted publickey for core from 10.200.16.10 port 35030 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:25:21.910479 sshd-session[5670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:25:21.915650 systemd-logind[1871]: New session 17 of user core.
Nov 23 23:25:21.921351 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 23 23:25:22.270558 sshd[5673]: Connection closed by 10.200.16.10 port 35030
Nov 23 23:25:22.270898 sshd-session[5670]: pam_unix(sshd:session): session closed for user core
Nov 23 23:25:22.275601 systemd-logind[1871]: Session 17 logged out. Waiting for processes to exit.
Nov 23 23:25:22.276281 systemd[1]: sshd@14-10.200.20.43:22-10.200.16.10:35030.service: Deactivated successfully.
Nov 23 23:25:22.278174 systemd[1]: session-17.scope: Deactivated successfully.
Nov 23 23:25:22.279626 systemd-logind[1871]: Removed session 17.
Nov 23 23:25:22.347324 systemd[1]: Started sshd@15-10.200.20.43:22-10.200.16.10:35038.service - OpenSSH per-connection server daemon (10.200.16.10:35038).
Nov 23 23:25:22.775269 sshd[5685]: Accepted publickey for core from 10.200.16.10 port 35038 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:25:22.776420 sshd-session[5685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:25:22.783859 systemd-logind[1871]: New session 18 of user core.
Nov 23 23:25:22.787379 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 23 23:25:23.237679 sshd[5688]: Connection closed by 10.200.16.10 port 35038
Nov 23 23:25:23.238853 sshd-session[5685]: pam_unix(sshd:session): session closed for user core
Nov 23 23:25:23.242290 systemd-logind[1871]: Session 18 logged out. Waiting for processes to exit.
Nov 23 23:25:23.242457 systemd[1]: sshd@15-10.200.20.43:22-10.200.16.10:35038.service: Deactivated successfully.
Nov 23 23:25:23.243830 systemd[1]: session-18.scope: Deactivated successfully.
Nov 23 23:25:23.245329 systemd-logind[1871]: Removed session 18.
Nov 23 23:25:23.315782 systemd[1]: Started sshd@16-10.200.20.43:22-10.200.16.10:35044.service - OpenSSH per-connection server daemon (10.200.16.10:35044).
Nov 23 23:25:23.748050 sshd[5698]: Accepted publickey for core from 10.200.16.10 port 35044 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:25:23.749620 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:25:23.755457 systemd-logind[1871]: New session 19 of user core.
Nov 23 23:25:23.760372 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 23 23:25:24.650629 sshd[5701]: Connection closed by 10.200.16.10 port 35044
Nov 23 23:25:24.650953 sshd-session[5698]: pam_unix(sshd:session): session closed for user core
Nov 23 23:25:24.655551 systemd[1]: sshd@16-10.200.20.43:22-10.200.16.10:35044.service: Deactivated successfully.
Nov 23 23:25:24.658045 systemd[1]: session-19.scope: Deactivated successfully.
Nov 23 23:25:24.659296 systemd-logind[1871]: Session 19 logged out. Waiting for processes to exit.
Nov 23 23:25:24.661787 systemd-logind[1871]: Removed session 19.
Nov 23 23:25:24.730136 systemd[1]: Started sshd@17-10.200.20.43:22-10.200.16.10:35050.service - OpenSSH per-connection server daemon (10.200.16.10:35050).
Nov 23 23:25:25.094183 kubelet[3455]: E1123 23:25:25.094138 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-7dv89" podUID="f59281aa-b935-4d43-8373-69a621420431"
Nov 23 23:25:25.172441 sshd[5721]: Accepted publickey for core from 10.200.16.10 port 35050 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:25:25.174036 sshd-session[5721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:25:25.179229 systemd-logind[1871]: New session 20 of user core.
Nov 23 23:25:25.184542 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 23 23:25:25.634266 sshd[5724]: Connection closed by 10.200.16.10 port 35050
Nov 23 23:25:25.635418 sshd-session[5721]: pam_unix(sshd:session): session closed for user core
Nov 23 23:25:25.638980 systemd[1]: sshd@17-10.200.20.43:22-10.200.16.10:35050.service: Deactivated successfully.
Nov 23 23:25:25.642204 systemd[1]: session-20.scope: Deactivated successfully.
Nov 23 23:25:25.645014 systemd-logind[1871]: Session 20 logged out. Waiting for processes to exit.
Nov 23 23:25:25.646260 systemd-logind[1871]: Removed session 20.
Nov 23 23:25:25.706312 systemd[1]: Started sshd@18-10.200.20.43:22-10.200.16.10:35062.service - OpenSSH per-connection server daemon (10.200.16.10:35062).
Nov 23 23:25:26.124404 sshd[5734]: Accepted publickey for core from 10.200.16.10 port 35062 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:25:26.125397 sshd-session[5734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:25:26.129104 systemd-logind[1871]: New session 21 of user core.
Nov 23 23:25:26.136351 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 23 23:25:26.513292 sshd[5737]: Connection closed by 10.200.16.10 port 35062
Nov 23 23:25:26.513944 sshd-session[5734]: pam_unix(sshd:session): session closed for user core
Nov 23 23:25:26.517556 systemd-logind[1871]: Session 21 logged out. Waiting for processes to exit.
Nov 23 23:25:26.519564 systemd[1]: sshd@18-10.200.20.43:22-10.200.16.10:35062.service: Deactivated successfully.
Nov 23 23:25:26.524888 systemd[1]: session-21.scope: Deactivated successfully.
Nov 23 23:25:26.527502 systemd-logind[1871]: Removed session 21.
Nov 23 23:25:28.095930 kubelet[3455]: E1123 23:25:28.095480 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84cffccc6c-swdgq" podUID="f26811a6-5336-4c1a-bfb3-9c8fe093c60c"
Nov 23 23:25:31.596493 systemd[1]: Started sshd@19-10.200.20.43:22-10.200.16.10:36814.service - OpenSSH per-connection server daemon (10.200.16.10:36814).
Nov 23 23:25:32.053466 sshd[5753]: Accepted publickey for core from 10.200.16.10 port 36814 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:25:32.054521 sshd-session[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:25:32.060917 systemd-logind[1871]: New session 22 of user core.
Nov 23 23:25:32.067914 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 23 23:25:32.098194 kubelet[3455]: E1123 23:25:32.098109 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzswh" podUID="dc4e36a9-c245-455d-ada2-16c405b7bde8"
Nov 23 23:25:32.420074 sshd[5756]: Connection closed by 10.200.16.10 port 36814
Nov 23 23:25:32.420158 sshd-session[5753]: pam_unix(sshd:session): session closed for user core
Nov 23 23:25:32.424750 systemd[1]: sshd@19-10.200.20.43:22-10.200.16.10:36814.service: Deactivated successfully.
Nov 23 23:25:32.426657 systemd[1]: session-22.scope: Deactivated successfully.
Nov 23 23:25:32.427928 systemd-logind[1871]: Session 22 logged out. Waiting for processes to exit.
Nov 23 23:25:32.429394 systemd-logind[1871]: Removed session 22.
Nov 23 23:25:34.095094 kubelet[3455]: E1123 23:25:34.094841 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-cgczn" podUID="7afbc6db-85a8-4a42-8680-e685d44be238"
Nov 23 23:25:34.095094 kubelet[3455]: E1123 23:25:34.095002 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56586bc88d-xbb5n" podUID="15476924-84af-4e25-8a82-221156412f5f"
Nov 23 23:25:36.097130 kubelet[3455]: E1123 23:25:36.096999 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hdfkh" podUID="be1e52dc-0aab-46fe-876a-12be408713eb"
Nov 23 23:25:37.504871 systemd[1]: Started sshd@20-10.200.20.43:22-10.200.16.10:36830.service - OpenSSH per-connection server daemon (10.200.16.10:36830).
Nov 23 23:25:37.960707 sshd[5767]: Accepted publickey for core from 10.200.16.10 port 36830 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:25:37.961526 sshd-session[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:25:37.967281 systemd-logind[1871]: New session 23 of user core.
Nov 23 23:25:37.971347 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 23 23:25:38.335319 sshd[5770]: Connection closed by 10.200.16.10 port 36830
Nov 23 23:25:38.336421 sshd-session[5767]: pam_unix(sshd:session): session closed for user core
Nov 23 23:25:38.342651 systemd-logind[1871]: Session 23 logged out. Waiting for processes to exit.
Nov 23 23:25:38.342739 systemd[1]: sshd@20-10.200.20.43:22-10.200.16.10:36830.service: Deactivated successfully.
Nov 23 23:25:38.345754 systemd[1]: session-23.scope: Deactivated successfully.
Nov 23 23:25:38.347941 systemd-logind[1871]: Removed session 23.
Nov 23 23:25:39.095417 kubelet[3455]: E1123 23:25:39.095368 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-7dv89" podUID="f59281aa-b935-4d43-8373-69a621420431"
Nov 23 23:25:42.096053 kubelet[3455]: E1123 23:25:42.095989 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84cffccc6c-swdgq" podUID="f26811a6-5336-4c1a-bfb3-9c8fe093c60c"
Nov 23 23:25:43.411581 systemd[1]: Started sshd@21-10.200.20.43:22-10.200.16.10:44304.service - OpenSSH per-connection server daemon (10.200.16.10:44304).
Nov 23 23:25:43.827758 sshd[5806]: Accepted publickey for core from 10.200.16.10 port 44304 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:25:43.829089 sshd-session[5806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:25:43.833074 systemd-logind[1871]: New session 24 of user core.
Nov 23 23:25:43.840365 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 23 23:25:44.169881 sshd[5811]: Connection closed by 10.200.16.10 port 44304
Nov 23 23:25:44.169726 sshd-session[5806]: pam_unix(sshd:session): session closed for user core
Nov 23 23:25:44.173223 systemd[1]: sshd@21-10.200.20.43:22-10.200.16.10:44304.service: Deactivated successfully.
Nov 23 23:25:44.175612 systemd[1]: session-24.scope: Deactivated successfully.
Nov 23 23:25:44.176770 systemd-logind[1871]: Session 24 logged out. Waiting for processes to exit.
Nov 23 23:25:44.178403 systemd-logind[1871]: Removed session 24.
Nov 23 23:25:45.095476 kubelet[3455]: E1123 23:25:45.095385 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-mzswh" podUID="dc4e36a9-c245-455d-ada2-16c405b7bde8"
Nov 23 23:25:46.096848 kubelet[3455]: E1123 23:25:46.096482 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56586bc88d-xbb5n" podUID="15476924-84af-4e25-8a82-221156412f5f"
Nov 23 23:25:46.097986 kubelet[3455]: E1123 23:25:46.097692 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-cgczn" podUID="7afbc6db-85a8-4a42-8680-e685d44be238"
Nov 23 23:25:49.251351 systemd[1]: Started sshd@22-10.200.20.43:22-10.200.16.10:44318.service - OpenSSH per-connection server daemon (10.200.16.10:44318).
Nov 23 23:25:49.676263 sshd[5825]: Accepted publickey for core from 10.200.16.10 port 44318 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:25:49.677091 sshd-session[5825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:25:49.681018 systemd-logind[1871]: New session 25 of user core.
Nov 23 23:25:49.684356 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 23 23:25:50.039830 sshd[5828]: Connection closed by 10.200.16.10 port 44318
Nov 23 23:25:50.040732 sshd-session[5825]: pam_unix(sshd:session): session closed for user core
Nov 23 23:25:50.043709 systemd[1]: sshd@22-10.200.20.43:22-10.200.16.10:44318.service: Deactivated successfully.
Nov 23 23:25:50.045149 systemd[1]: session-25.scope: Deactivated successfully.
Nov 23 23:25:50.045813 systemd-logind[1871]: Session 25 logged out. Waiting for processes to exit.
Nov 23 23:25:50.047035 systemd-logind[1871]: Removed session 25.
Nov 23 23:25:51.094383 kubelet[3455]: E1123 23:25:51.094336 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-hdfkh" podUID="be1e52dc-0aab-46fe-876a-12be408713eb"
Nov 23 23:25:53.095277 kubelet[3455]: E1123 23:25:53.095099 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5d4f684bcb-7dv89" podUID="f59281aa-b935-4d43-8373-69a621420431"
Nov 23 23:25:53.097178 kubelet[3455]: E1123 23:25:53.097112 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84cffccc6c-swdgq" podUID="f26811a6-5336-4c1a-bfb3-9c8fe093c60c"
Nov 23 23:25:55.120056 systemd[1]: Started sshd@23-10.200.20.43:22-10.200.16.10:52468.service - OpenSSH per-connection server daemon (10.200.16.10:52468).
Nov 23 23:25:55.540660 sshd[5840]: Accepted publickey for core from 10.200.16.10 port 52468 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:25:55.542077 sshd-session[5840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:25:55.545484 systemd-logind[1871]: New session 26 of user core.
Nov 23 23:25:55.552386 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 23 23:25:55.943426 sshd[5843]: Connection closed by 10.200.16.10 port 52468
Nov 23 23:25:55.963537 sshd-session[5840]: pam_unix(sshd:session): session closed for user core
Nov 23 23:25:55.966372 systemd[1]: sshd@23-10.200.20.43:22-10.200.16.10:52468.service: Deactivated successfully.
Nov 23 23:25:55.968298 systemd[1]: session-26.scope: Deactivated successfully.
Nov 23 23:25:55.969358 systemd-logind[1871]: Session 26 logged out. Waiting for processes to exit.
Nov 23 23:25:55.970820 systemd-logind[1871]: Removed session 26.
Nov 23 23:25:57.094172 kubelet[3455]: E1123 23:25:57.094135 3455 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56586bc88d-xbb5n" podUID="15476924-84af-4e25-8a82-221156412f5f"
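The kubelet entries above repeat the same handful of pull failures every back-off interval. A minimal triage sketch (not part of the journal, and not a kubelet API): it scans the log text for "Error syncing pod" entries and deduplicates the failing (pod, image) pairs. The function name and regexes are illustrative and assume the escaped-quote formatting seen in these entries.

```python
import re

def summarize_pull_failures(log_text):
    """Collect unique (pod, image) pairs from kubelet 'Error syncing pod' entries."""
    failures = set()
    for line in log_text.splitlines():
        if "Error syncing pod" not in line:
            continue
        pod = re.search(r'pod="([^"]+)"', line)
        # Image references appear with escaped quotes, e.g. \\\"ghcr.io/...\\\";
        # match any number of leading backslashes before the quote.
        images = re.findall(r'image \\*"([^\\"]+)', line)
        if pod:
            for image in images:
                failures.add((pod.group(1), image))
    return sorted(failures)
```

Run over the captured journal text to get one summary line per failing pod/image pair instead of rereading each back-off entry.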