Nov 23 23:21:20.057844 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490]
Nov 23 23:21:20.057861 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Nov 23 20:53:53 -00 2025
Nov 23 23:21:20.057867 kernel: KASLR enabled
Nov 23 23:21:20.057871 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Nov 23 23:21:20.057875 kernel: printk: legacy bootconsole [pl11] enabled
Nov 23 23:21:20.057880 kernel: efi: EFI v2.7 by EDK II
Nov 23 23:21:20.057885 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db7d598
Nov 23 23:21:20.057889 kernel: random: crng init done
Nov 23 23:21:20.057893 kernel: secureboot: Secure boot disabled
Nov 23 23:21:20.057896 kernel: ACPI: Early table checksum verification disabled
Nov 23 23:21:20.057900 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL)
Nov 23 23:21:20.057904 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 23 23:21:20.057908 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 23 23:21:20.057912 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Nov 23 23:21:20.057918 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 23 23:21:20.057922 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 23 23:21:20.057926 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 23 23:21:20.057931 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 23 23:21:20.057935 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 23 23:21:20.057940 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 23 23:21:20.057944 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Nov 23 23:21:20.057957 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 23 23:21:20.057962 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Nov 23 23:21:20.057966 kernel: ACPI: Use ACPI SPCR as default console: No
Nov 23 23:21:20.057970 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Nov 23 23:21:20.057974 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug
Nov 23 23:21:20.057978 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug
Nov 23 23:21:20.057983 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Nov 23 23:21:20.057987 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Nov 23 23:21:20.057991 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Nov 23 23:21:20.057996 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Nov 23 23:21:20.058000 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Nov 23 23:21:20.058004 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Nov 23 23:21:20.058009 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Nov 23 23:21:20.058013 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Nov 23 23:21:20.058017 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Nov 23 23:21:20.058021 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff]
Nov 23 23:21:20.058025 kernel: NODE_DATA(0) allocated [mem 0x1bf7ffa00-0x1bf806fff]
Nov 23 23:21:20.058029 kernel: Zone ranges:
Nov 23 23:21:20.058034 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Nov 23 23:21:20.058040 kernel: DMA32 empty
Nov 23 23:21:20.058044 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Nov 23 23:21:20.058049 kernel: Device empty
Nov 23 23:21:20.058053 kernel: Movable zone start for each node
Nov 23 23:21:20.058057 kernel: Early memory node ranges
Nov 23 23:21:20.058062 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Nov 23 23:21:20.058067 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff]
Nov 23 23:21:20.058071 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff]
Nov 23 23:21:20.058076 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff]
Nov 23 23:21:20.058080 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff]
Nov 23 23:21:20.058084 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff]
Nov 23 23:21:20.058088 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Nov 23 23:21:20.058093 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Nov 23 23:21:20.058097 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Nov 23 23:21:20.058101 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1
Nov 23 23:21:20.058106 kernel: psci: probing for conduit method from ACPI.
Nov 23 23:21:20.058110 kernel: psci: PSCIv1.3 detected in firmware.
Nov 23 23:21:20.058114 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 23 23:21:20.058119 kernel: psci: MIGRATE_INFO_TYPE not supported.
Nov 23 23:21:20.058124 kernel: psci: SMC Calling Convention v1.4
Nov 23 23:21:20.058128 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Nov 23 23:21:20.058132 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Nov 23 23:21:20.058137 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Nov 23 23:21:20.058141 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Nov 23 23:21:20.058146 kernel: pcpu-alloc: [0] 0 [0] 1
Nov 23 23:21:20.058150 kernel: Detected PIPT I-cache on CPU0
Nov 23 23:21:20.058154 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm)
Nov 23 23:21:20.058159 kernel: CPU features: detected: GIC system register CPU interface
Nov 23 23:21:20.058163 kernel: CPU features: detected: Spectre-v4
Nov 23 23:21:20.058167 kernel: CPU features: detected: Spectre-BHB
Nov 23 23:21:20.058172 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 23 23:21:20.058177 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 23 23:21:20.058181 kernel: CPU features: detected: ARM erratum 2067961 or 2054223
Nov 23 23:21:20.058185 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 23 23:21:20.058190 kernel: alternatives: applying boot alternatives
Nov 23 23:21:20.058195 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=4db094b704dd398addf25219e01d6d8f197b31dbf6377199102cc61dad0e4bb2
Nov 23 23:21:20.058200 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 23 23:21:20.058204 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 23 23:21:20.058209 kernel: Fallback order for Node 0: 0
Nov 23 23:21:20.058213 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540
Nov 23 23:21:20.058218 kernel: Policy zone: Normal
Nov 23 23:21:20.058222 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 23 23:21:20.058227 kernel: software IO TLB: area num 2.
Nov 23 23:21:20.058231 kernel: software IO TLB: mapped [mem 0x0000000035900000-0x0000000039900000] (64MB)
Nov 23 23:21:20.058236 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 23 23:21:20.058240 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 23 23:21:20.058245 kernel: rcu: RCU event tracing is enabled.
Nov 23 23:21:20.058249 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 23 23:21:20.058254 kernel: Trampoline variant of Tasks RCU enabled.
Nov 23 23:21:20.058258 kernel: Tracing variant of Tasks RCU enabled.
Nov 23 23:21:20.058263 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 23 23:21:20.058267 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 23 23:21:20.058272 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 23 23:21:20.058277 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 23 23:21:20.058281 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 23 23:21:20.058286 kernel: GICv3: 960 SPIs implemented
Nov 23 23:21:20.058290 kernel: GICv3: 0 Extended SPIs implemented
Nov 23 23:21:20.058294 kernel: Root IRQ handler: gic_handle_irq
Nov 23 23:21:20.058298 kernel: GICv3: GICv3 features: 16 PPIs, RSS
Nov 23 23:21:20.058303 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0
Nov 23 23:21:20.058307 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Nov 23 23:21:20.058311 kernel: ITS: No ITS available, not enabling LPIs
Nov 23 23:21:20.058316 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 23 23:21:20.058321 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt).
Nov 23 23:21:20.058325 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 23 23:21:20.058330 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns
Nov 23 23:21:20.058334 kernel: Console: colour dummy device 80x25
Nov 23 23:21:20.058339 kernel: printk: legacy console [tty1] enabled
Nov 23 23:21:20.058343 kernel: ACPI: Core revision 20240827
Nov 23 23:21:20.058348 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000)
Nov 23 23:21:20.058353 kernel: pid_max: default: 32768 minimum: 301
Nov 23 23:21:20.058357 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 23 23:21:20.058362 kernel: landlock: Up and running.
Nov 23 23:21:20.058367 kernel: SELinux: Initializing.
Nov 23 23:21:20.058371 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 23 23:21:20.058376 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 23 23:21:20.058380 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1
Nov 23 23:21:20.058385 kernel: Hyper-V: Host Build 10.0.26102.1141-1-0
Nov 23 23:21:20.058393 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Nov 23 23:21:20.058398 kernel: rcu: Hierarchical SRCU implementation.
Nov 23 23:21:20.058403 kernel: rcu: Max phase no-delay instances is 400.
Nov 23 23:21:20.058408 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 23 23:21:20.058413 kernel: Remapping and enabling EFI services.
Nov 23 23:21:20.058417 kernel: smp: Bringing up secondary CPUs ...
Nov 23 23:21:20.058422 kernel: Detected PIPT I-cache on CPU1
Nov 23 23:21:20.058428 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Nov 23 23:21:20.058432 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490]
Nov 23 23:21:20.058437 kernel: smp: Brought up 1 node, 2 CPUs
Nov 23 23:21:20.058442 kernel: SMP: Total of 2 processors activated.
Nov 23 23:21:20.058446 kernel: CPU: All CPU(s) started at EL1
Nov 23 23:21:20.058452 kernel: CPU features: detected: 32-bit EL0 Support
Nov 23 23:21:20.058457 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Nov 23 23:21:20.058462 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 23 23:21:20.058467 kernel: CPU features: detected: Common not Private translations
Nov 23 23:21:20.058471 kernel: CPU features: detected: CRC32 instructions
Nov 23 23:21:20.058476 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm)
Nov 23 23:21:20.058481 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 23 23:21:20.058486 kernel: CPU features: detected: LSE atomic instructions
Nov 23 23:21:20.058491 kernel: CPU features: detected: Privileged Access Never
Nov 23 23:21:20.058496 kernel: CPU features: detected: Speculation barrier (SB)
Nov 23 23:21:20.058501 kernel: CPU features: detected: TLB range maintenance instructions
Nov 23 23:21:20.058506 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 23 23:21:20.058511 kernel: CPU features: detected: Scalable Vector Extension
Nov 23 23:21:20.058516 kernel: alternatives: applying system-wide alternatives
Nov 23 23:21:20.058520 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Nov 23 23:21:20.058525 kernel: SVE: maximum available vector length 16 bytes per vector
Nov 23 23:21:20.058530 kernel: SVE: default vector length 16 bytes per vector
Nov 23 23:21:20.058535 kernel: Memory: 3952828K/4194160K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 220144K reserved, 16384K cma-reserved)
Nov 23 23:21:20.058541 kernel: devtmpfs: initialized
Nov 23 23:21:20.058546 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 23 23:21:20.058550 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 23 23:21:20.058555 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 23 23:21:20.058560 kernel: 0 pages in range for non-PLT usage
Nov 23 23:21:20.058565 kernel: 508400 pages in range for PLT usage
Nov 23 23:21:20.058569 kernel: pinctrl core: initialized pinctrl subsystem
Nov 23 23:21:20.058574 kernel: SMBIOS 3.1.0 present.
Nov 23 23:21:20.058580 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Nov 23 23:21:20.058584 kernel: DMI: Memory slots populated: 2/2
Nov 23 23:21:20.058589 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 23 23:21:20.058594 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 23 23:21:20.058599 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 23 23:21:20.058603 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 23 23:21:20.058608 kernel: audit: initializing netlink subsys (disabled)
Nov 23 23:21:20.058613 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1
Nov 23 23:21:20.058618 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 23 23:21:20.058623 kernel: cpuidle: using governor menu
Nov 23 23:21:20.058628 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 23 23:21:20.058633 kernel: ASID allocator initialised with 32768 entries
Nov 23 23:21:20.058637 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 23 23:21:20.058642 kernel: Serial: AMBA PL011 UART driver
Nov 23 23:21:20.058647 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 23 23:21:20.058651 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 23 23:21:20.058656 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 23 23:21:20.058661 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 23 23:21:20.058666 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 23 23:21:20.058671 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 23 23:21:20.058676 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 23 23:21:20.058680 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 23 23:21:20.058685 kernel: ACPI: Added _OSI(Module Device)
Nov 23 23:21:20.058690 kernel: ACPI: Added _OSI(Processor Device)
Nov 23 23:21:20.058694 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 23 23:21:20.058699 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 23 23:21:20.058704 kernel: ACPI: Interpreter enabled
Nov 23 23:21:20.058709 kernel: ACPI: Using GIC for interrupt routing
Nov 23 23:21:20.058714 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Nov 23 23:21:20.058719 kernel: printk: legacy console [ttyAMA0] enabled
Nov 23 23:21:20.058723 kernel: printk: legacy bootconsole [pl11] disabled
Nov 23 23:21:20.058728 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Nov 23 23:21:20.058733 kernel: ACPI: CPU0 has been hot-added
Nov 23 23:21:20.058738 kernel: ACPI: CPU1 has been hot-added
Nov 23 23:21:20.058742 kernel: iommu: Default domain type: Translated
Nov 23 23:21:20.058747 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 23 23:21:20.058752 kernel: efivars: Registered efivars operations
Nov 23 23:21:20.058757 kernel: vgaarb: loaded
Nov 23 23:21:20.058762 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 23 23:21:20.058767 kernel: VFS: Disk quotas dquot_6.6.0
Nov 23 23:21:20.058771 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 23 23:21:20.058776 kernel: pnp: PnP ACPI init
Nov 23 23:21:20.058780 kernel: pnp: PnP ACPI: found 0 devices
Nov 23 23:21:20.058785 kernel: NET: Registered PF_INET protocol family
Nov 23 23:21:20.058790 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 23 23:21:20.058795 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 23 23:21:20.058800 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 23 23:21:20.058805 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 23 23:21:20.058810 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 23 23:21:20.058814 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 23 23:21:20.058819 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 23 23:21:20.058824 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 23 23:21:20.058829 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 23 23:21:20.058833 kernel: PCI: CLS 0 bytes, default 64
Nov 23 23:21:20.058838 kernel: kvm [1]: HYP mode not available
Nov 23 23:21:20.058843 kernel: Initialise system trusted keyrings
Nov 23 23:21:20.058848 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 23 23:21:20.058853 kernel: Key type asymmetric registered
Nov 23 23:21:20.058857 kernel: Asymmetric key parser 'x509' registered
Nov 23 23:21:20.058862 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 23 23:21:20.058867 kernel: io scheduler mq-deadline registered
Nov 23 23:21:20.058872 kernel: io scheduler kyber registered
Nov 23 23:21:20.058876 kernel: io scheduler bfq registered
Nov 23 23:21:20.058881 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 23 23:21:20.058886 kernel: thunder_xcv, ver 1.0
Nov 23 23:21:20.058891 kernel: thunder_bgx, ver 1.0
Nov 23 23:21:20.058896 kernel: nicpf, ver 1.0
Nov 23 23:21:20.058900 kernel: nicvf, ver 1.0
Nov 23 23:21:20.059002 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 23 23:21:20.059052 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-23T23:21:19 UTC (1763940079)
Nov 23 23:21:20.059059 kernel: efifb: probing for efifb
Nov 23 23:21:20.059065 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Nov 23 23:21:20.059070 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Nov 23 23:21:20.059075 kernel: efifb: scrolling: redraw
Nov 23 23:21:20.059079 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 23 23:21:20.059084 kernel: Console: switching to colour frame buffer device 128x48
Nov 23 23:21:20.059089 kernel: fb0: EFI VGA frame buffer device
Nov 23 23:21:20.059094 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Nov 23 23:21:20.059098 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 23 23:21:20.059103 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Nov 23 23:21:20.059109 kernel: watchdog: NMI not fully supported
Nov 23 23:21:20.059114 kernel: watchdog: Hard watchdog permanently disabled
Nov 23 23:21:20.059118 kernel: NET: Registered PF_INET6 protocol family
Nov 23 23:21:20.059123 kernel: Segment Routing with IPv6
Nov 23 23:21:20.059128 kernel: In-situ OAM (IOAM) with IPv6
Nov 23 23:21:20.059132 kernel: NET: Registered PF_PACKET protocol family
Nov 23 23:21:20.059137 kernel: Key type dns_resolver registered
Nov 23 23:21:20.059142 kernel: registered taskstats version 1
Nov 23 23:21:20.059146 kernel: Loading compiled-in X.509 certificates
Nov 23 23:21:20.059151 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 00c36da29593053a7da9cd3c5945ae69451ce339'
Nov 23 23:21:20.059157 kernel: Demotion targets for Node 0: null
Nov 23 23:21:20.059161 kernel: Key type .fscrypt registered
Nov 23 23:21:20.059166 kernel: Key type fscrypt-provisioning registered
Nov 23 23:21:20.059170 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 23 23:21:20.059175 kernel: ima: Allocated hash algorithm: sha1
Nov 23 23:21:20.059180 kernel: ima: No architecture policies found
Nov 23 23:21:20.059184 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 23 23:21:20.059189 kernel: clk: Disabling unused clocks
Nov 23 23:21:20.059194 kernel: PM: genpd: Disabling unused power domains
Nov 23 23:21:20.059199 kernel: Warning: unable to open an initial console.
Nov 23 23:21:20.059204 kernel: Freeing unused kernel memory: 39552K
Nov 23 23:21:20.059209 kernel: Run /init as init process
Nov 23 23:21:20.059214 kernel: with arguments:
Nov 23 23:21:20.059218 kernel: /init
Nov 23 23:21:20.059223 kernel: with environment:
Nov 23 23:21:20.059227 kernel: HOME=/
Nov 23 23:21:20.059232 kernel: TERM=linux
Nov 23 23:21:20.059237 systemd[1]: Successfully made /usr/ read-only.
Nov 23 23:21:20.059245 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 23 23:21:20.059250 systemd[1]: Detected virtualization microsoft.
Nov 23 23:21:20.059255 systemd[1]: Detected architecture arm64.
Nov 23 23:21:20.059260 systemd[1]: Running in initrd.
Nov 23 23:21:20.059265 systemd[1]: No hostname configured, using default hostname.
Nov 23 23:21:20.059271 systemd[1]: Hostname set to .
Nov 23 23:21:20.059276 systemd[1]: Initializing machine ID from random generator.
Nov 23 23:21:20.059281 systemd[1]: Queued start job for default target initrd.target.
Nov 23 23:21:20.059287 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 23 23:21:20.059292 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 23 23:21:20.059297 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 23 23:21:20.059303 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 23 23:21:20.059308 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 23 23:21:20.059314 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 23 23:21:20.059320 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 23 23:21:20.059325 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 23 23:21:20.059331 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 23 23:21:20.059336 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 23 23:21:20.059341 systemd[1]: Reached target paths.target - Path Units.
Nov 23 23:21:20.059346 systemd[1]: Reached target slices.target - Slice Units.
Nov 23 23:21:20.059351 systemd[1]: Reached target swap.target - Swaps.
Nov 23 23:21:20.059356 systemd[1]: Reached target timers.target - Timer Units.
Nov 23 23:21:20.059362 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 23 23:21:20.059368 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 23 23:21:20.059373 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 23 23:21:20.059378 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 23 23:21:20.059383 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 23 23:21:20.059388 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 23 23:21:20.059393 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 23 23:21:20.059398 systemd[1]: Reached target sockets.target - Socket Units.
Nov 23 23:21:20.059404 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 23 23:21:20.059409 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 23 23:21:20.059415 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 23 23:21:20.059420 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 23 23:21:20.059425 systemd[1]: Starting systemd-fsck-usr.service...
Nov 23 23:21:20.059431 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 23 23:21:20.059436 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 23 23:21:20.059450 systemd-journald[225]: Collecting audit messages is disabled.
Nov 23 23:21:20.059464 systemd-journald[225]: Journal started
Nov 23 23:21:20.059478 systemd-journald[225]: Runtime Journal (/run/log/journal/577fc5438b724cd18d812b82f9fd228a) is 8M, max 78.3M, 70.3M free.
Nov 23 23:21:20.069462 systemd-modules-load[227]: Inserted module 'overlay'
Nov 23 23:21:20.078182 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:21:20.091311 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 23 23:21:20.098316 kernel: Bridge firewalling registered
Nov 23 23:21:20.098339 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 23 23:21:20.095936 systemd-modules-load[227]: Inserted module 'br_netfilter'
Nov 23 23:21:20.106546 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 23 23:21:20.115946 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 23 23:21:20.126390 systemd[1]: Finished systemd-fsck-usr.service.
Nov 23 23:21:20.130024 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 23 23:21:20.137821 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:21:20.148988 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 23 23:21:20.168080 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 23 23:21:20.180218 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 23 23:21:20.195804 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 23 23:21:20.202972 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 23 23:21:20.221217 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 23 23:21:20.228105 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 23 23:21:20.235179 systemd-tmpfiles[257]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 23 23:21:20.238164 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 23 23:21:20.255080 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 23 23:21:20.277621 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 23 23:21:20.287677 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 23 23:21:20.305073 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 23 23:21:20.317214 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=4db094b704dd398addf25219e01d6d8f197b31dbf6377199102cc61dad0e4bb2
Nov 23 23:21:20.350628 systemd-resolved[264]: Positive Trust Anchors:
Nov 23 23:21:20.350644 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 23 23:21:20.350664 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 23 23:21:20.352537 systemd-resolved[264]: Defaulting to hostname 'linux'.
Nov 23 23:21:20.353947 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 23 23:21:20.359374 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 23 23:21:20.449968 kernel: SCSI subsystem initialized
Nov 23 23:21:20.454957 kernel: Loading iSCSI transport class v2.0-870.
Nov 23 23:21:20.462968 kernel: iscsi: registered transport (tcp)
Nov 23 23:21:20.475120 kernel: iscsi: registered transport (qla4xxx)
Nov 23 23:21:20.475131 kernel: QLogic iSCSI HBA Driver
Nov 23 23:21:20.488125 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 23 23:21:20.508006 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 23 23:21:20.514367 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 23 23:21:20.559592 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 23 23:21:20.564874 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 23 23:21:20.629965 kernel: raid6: neonx8 gen() 18543 MB/s
Nov 23 23:21:20.646957 kernel: raid6: neonx4 gen() 18556 MB/s
Nov 23 23:21:20.665958 kernel: raid6: neonx2 gen() 17059 MB/s
Nov 23 23:21:20.685958 kernel: raid6: neonx1 gen() 15133 MB/s
Nov 23 23:21:20.704958 kernel: raid6: int64x8 gen() 10562 MB/s
Nov 23 23:21:20.723971 kernel: raid6: int64x4 gen() 10611 MB/s
Nov 23 23:21:20.743962 kernel: raid6: int64x2 gen() 8970 MB/s
Nov 23 23:21:20.765151 kernel: raid6: int64x1 gen() 7012 MB/s
Nov 23 23:21:20.765160 kernel: raid6: using algorithm neonx4 gen() 18556 MB/s
Nov 23 23:21:20.787108 kernel: raid6: .... xor() 15151 MB/s, rmw enabled
Nov 23 23:21:20.787115 kernel: raid6: using neon recovery algorithm
Nov 23 23:21:20.795913 kernel: xor: measuring software checksum speed
Nov 23 23:21:20.795923 kernel: 8regs : 28649 MB/sec
Nov 23 23:21:20.798460 kernel: 32regs : 28779 MB/sec
Nov 23 23:21:20.800966 kernel: arm64_neon : 37535 MB/sec
Nov 23 23:21:20.803971 kernel: xor: using function: arm64_neon (37535 MB/sec)
Nov 23 23:21:20.841979 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 23 23:21:20.846715 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 23 23:21:20.856081 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 23 23:21:20.886982 systemd-udevd[475]: Using default interface naming scheme 'v255'.
Nov 23 23:21:20.890954 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 23 23:21:20.902703 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 23 23:21:20.928631 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation
Nov 23 23:21:20.945974 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 23 23:21:20.951627 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 23 23:21:21.001016 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 23 23:21:21.009713 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 23 23:21:21.078481 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 23:21:21.086177 kernel: hv_vmbus: Vmbus version:5.3
Nov 23 23:21:21.078568 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:21:21.118658 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 23 23:21:21.118681 kernel: hv_vmbus: registering driver hid_hyperv
Nov 23 23:21:21.118696 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 23 23:21:21.098222 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:21:21.144697 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Nov 23 23:21:21.144713 kernel: hv_vmbus: registering driver hv_netvsc
Nov 23 23:21:21.144719 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Nov 23 23:21:21.144839 kernel: hv_vmbus: registering driver hv_storvsc
Nov 23 23:21:21.102810 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:21:21.152573 kernel: hv_vmbus: registering driver hyperv_keyboard
Nov 23 23:21:21.139803 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 23 23:21:21.171733 kernel: PTP clock support registered
Nov 23 23:21:21.171749 kernel: scsi host1: storvsc_host_t
Nov 23 23:21:21.172143 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Nov 23 23:21:21.160512 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 23:21:21.188196 kernel: scsi host0: storvsc_host_t
Nov 23 23:21:21.188318 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Nov 23 23:21:21.188335 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Nov 23 23:21:21.160594 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:21:21.179203 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:21:21.215029 kernel: hv_utils: Registering HyperV Utility Driver
Nov 23 23:21:21.215059 kernel: hv_vmbus: registering driver hv_utils
Nov 23 23:21:21.219961 kernel: hv_utils: Heartbeat IC version 3.0
Nov 23 23:21:21.219987 kernel: hv_utils: Shutdown IC version 3.2
Nov 23 23:21:21.723662 kernel: hv_utils: TimeSync IC version 4.0
Nov 23 23:21:21.723692 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Nov 23 23:21:21.716227 systemd-resolved[264]: Clock change detected. Flushing caches.
Nov 23 23:21:21.730521 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Nov 23 23:21:21.730645 kernel: hv_netvsc 000d3af6-002b-000d-3af6-002b000d3af6 eth0: VF slot 1 added
Nov 23 23:21:21.737471 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 23 23:21:21.737605 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Nov 23 23:21:21.742447 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Nov 23 23:21:21.743493 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:21:21.765157 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#194 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Nov 23 23:21:21.765274 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#201 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Nov 23 23:21:21.774605 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 23 23:21:21.774627 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 23 23:21:21.779604 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Nov 23 23:21:21.779745 kernel: hv_vmbus: registering driver hv_pci
Nov 23 23:21:21.779754 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 23 23:21:21.788104 kernel: hv_pci 8bb545c2-9b62-4612-ab57-a135bcf8ecb5: PCI VMBus probing: Using version 0x10004
Nov 23 23:21:21.789311 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Nov 23 23:21:21.802309 kernel: hv_pci 8bb545c2-9b62-4612-ab57-a135bcf8ecb5: PCI host bridge to bus 9b62:00
Nov 23 23:21:21.802428 kernel: pci_bus 9b62:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Nov 23 23:21:21.802507 kernel: pci_bus 9b62:00: No busn resource found for root bus, will use [bus 00-ff]
Nov 23 23:21:21.809490 kernel: pci 9b62:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint
Nov 23 23:21:21.820553 kernel: pci 9b62:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]
Nov 23 23:21:21.825347 kernel: pci 9b62:00:02.0: enabling Extended Tags
Nov 23 23:21:21.839373 kernel: pci 9b62:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 9b62:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
Nov 23 23:21:21.848465 kernel: pci_bus 9b62:00: busn_res: [bus 00-ff] end is updated to 00
Nov 23 23:21:21.848585 kernel: pci 9b62:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned
Nov 23 23:21:21.861324 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#164 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Nov 23 23:21:21.880316 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#137 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Nov 23 23:21:21.923696 kernel: mlx5_core 9b62:00:02.0: enabling device (0000 -> 0002)
Nov 23 23:21:21.931532 kernel: mlx5_core 9b62:00:02.0: PTM is not supported by PCIe
Nov 23 23:21:21.931682 kernel: mlx5_core 9b62:00:02.0: firmware version: 16.30.5006
Nov 23 23:21:22.100311 kernel: hv_netvsc 000d3af6-002b-000d-3af6-002b000d3af6 eth0: VF registering: eth1
Nov 23 23:21:22.100482 kernel: mlx5_core 9b62:00:02.0 eth1: joined to eth0
Nov 23 23:21:22.105402 kernel: mlx5_core 9b62:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Nov 23 23:21:22.120307 kernel: mlx5_core 9b62:00:02.0 enP39778s1: renamed from eth1
Nov 23 23:21:22.283193 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Nov 23 23:21:22.366255 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Nov 23 23:21:22.377442 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Nov 23 23:21:22.397795 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Nov 23 23:21:22.402755 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Nov 23 23:21:22.412443 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 23 23:21:22.423432 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 23 23:21:22.432202 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 23 23:21:22.441520 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 23 23:21:22.455420 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 23 23:21:22.462400 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 23 23:21:22.489309 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#176 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Nov 23 23:21:22.489357 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 23 23:21:22.504556 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 23 23:21:22.511352 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#129 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Nov 23 23:21:22.518311 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 23 23:21:23.526427 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#144 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001
Nov 23 23:21:23.538315 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 23 23:21:23.540026 disk-uuid[661]: The operation has completed successfully.
Nov 23 23:21:23.615443 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 23 23:21:23.615520 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 23 23:21:23.628410 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 23 23:21:23.651545 sh[821]: Success
Nov 23 23:21:23.685171 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 23 23:21:23.685212 kernel: device-mapper: uevent: version 1.0.3
Nov 23 23:21:23.690762 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 23 23:21:23.699306 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Nov 23 23:21:23.947826 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 23 23:21:23.956289 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 23 23:21:23.967144 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 23 23:21:23.990378 kernel: BTRFS: device fsid 5fd06d80-8dd4-4ca0-aa0c-93ddab5f4498 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (839)
Nov 23 23:21:24.000916 kernel: BTRFS info (device dm-0): first mount of filesystem 5fd06d80-8dd4-4ca0-aa0c-93ddab5f4498
Nov 23 23:21:24.000954 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 23 23:21:24.265000 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 23 23:21:24.265075 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 23 23:21:24.305256 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 23 23:21:24.309739 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 23 23:21:24.317524 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 23 23:21:24.318145 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 23 23:21:24.339827 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 23 23:21:24.371334 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (877)
Nov 23 23:21:24.381983 kernel: BTRFS info (device sda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3
Nov 23 23:21:24.382013 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 23 23:21:24.408198 kernel: BTRFS info (device sda6): turning on async discard
Nov 23 23:21:24.408231 kernel: BTRFS info (device sda6): enabling free space tree
Nov 23 23:21:24.417331 kernel: BTRFS info (device sda6): last unmount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3
Nov 23 23:21:24.417748 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 23 23:21:24.425425 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 23 23:21:24.455386 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 23 23:21:24.467266 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 23 23:21:24.499971 systemd-networkd[1008]: lo: Link UP
Nov 23 23:21:24.499982 systemd-networkd[1008]: lo: Gained carrier
Nov 23 23:21:24.500689 systemd-networkd[1008]: Enumeration completed
Nov 23 23:21:24.502728 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 23 23:21:24.502955 systemd-networkd[1008]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:21:24.502958 systemd-networkd[1008]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 23 23:21:24.507714 systemd[1]: Reached target network.target - Network.
Nov 23 23:21:24.581325 kernel: mlx5_core 9b62:00:02.0 enP39778s1: Link up
Nov 23 23:21:24.616356 kernel: hv_netvsc 000d3af6-002b-000d-3af6-002b000d3af6 eth0: Data path switched to VF: enP39778s1
Nov 23 23:21:24.616100 systemd-networkd[1008]: enP39778s1: Link UP
Nov 23 23:21:24.616154 systemd-networkd[1008]: eth0: Link UP
Nov 23 23:21:24.616284 systemd-networkd[1008]: eth0: Gained carrier
Nov 23 23:21:24.616320 systemd-networkd[1008]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:21:24.635834 systemd-networkd[1008]: enP39778s1: Gained carrier
Nov 23 23:21:24.645323 systemd-networkd[1008]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16
Nov 23 23:21:25.708971 ignition[973]: Ignition 2.22.0
Nov 23 23:21:25.708986 ignition[973]: Stage: fetch-offline
Nov 23 23:21:25.713016 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 23 23:21:25.709075 ignition[973]: no configs at "/usr/lib/ignition/base.d"
Nov 23 23:21:25.721034 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 23 23:21:25.709081 ignition[973]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 23 23:21:25.709204 ignition[973]: parsed url from cmdline: ""
Nov 23 23:21:25.709208 ignition[973]: no config URL provided
Nov 23 23:21:25.709211 ignition[973]: reading system config file "/usr/lib/ignition/user.ign"
Nov 23 23:21:25.709218 ignition[973]: no config at "/usr/lib/ignition/user.ign"
Nov 23 23:21:25.709221 ignition[973]: failed to fetch config: resource requires networking
Nov 23 23:21:25.709486 ignition[973]: Ignition finished successfully
Nov 23 23:21:25.758014 ignition[1019]: Ignition 2.22.0
Nov 23 23:21:25.758018 ignition[1019]: Stage: fetch
Nov 23 23:21:25.758207 ignition[1019]: no configs at "/usr/lib/ignition/base.d"
Nov 23 23:21:25.758214 ignition[1019]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 23 23:21:25.758277 ignition[1019]: parsed url from cmdline: ""
Nov 23 23:21:25.758279 ignition[1019]: no config URL provided
Nov 23 23:21:25.758287 ignition[1019]: reading system config file "/usr/lib/ignition/user.ign"
Nov 23 23:21:25.758300 ignition[1019]: no config at "/usr/lib/ignition/user.ign"
Nov 23 23:21:25.758314 ignition[1019]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Nov 23 23:21:25.829528 ignition[1019]: GET result: OK
Nov 23 23:21:25.829600 ignition[1019]: config has been read from IMDS userdata
Nov 23 23:21:25.829621 ignition[1019]: parsing config with SHA512: ee4900a67a24514e9b0f82a1eda7ec2932b61cffe8609f6bdd03f43c0719ad7e7db228b7e94076ac63f37c365a4a9441e1799511ba41240c595bb630c9430fbe
Nov 23 23:21:25.836292 unknown[1019]: fetched base config from "system"
Nov 23 23:21:25.837541 ignition[1019]: fetch: fetch complete
Nov 23 23:21:25.836422 unknown[1019]: fetched base config from "system"
Nov 23 23:21:25.837545 ignition[1019]: fetch: fetch passed
Nov 23 23:21:25.836452 unknown[1019]: fetched user config from "azure"
Nov 23 23:21:25.837589 ignition[1019]: Ignition finished successfully
Nov 23 23:21:25.839968 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 23 23:21:25.850183 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 23 23:21:25.885791 ignition[1025]: Ignition 2.22.0
Nov 23 23:21:25.888483 ignition[1025]: Stage: kargs
Nov 23 23:21:25.888661 ignition[1025]: no configs at "/usr/lib/ignition/base.d"
Nov 23 23:21:25.892664 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 23 23:21:25.888668 ignition[1025]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 23 23:21:25.904888 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 23 23:21:25.889154 ignition[1025]: kargs: kargs passed
Nov 23 23:21:25.889191 ignition[1025]: Ignition finished successfully
Nov 23 23:21:25.937268 ignition[1031]: Ignition 2.22.0
Nov 23 23:21:25.937282 ignition[1031]: Stage: disks
Nov 23 23:21:25.941746 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 23 23:21:25.937477 ignition[1031]: no configs at "/usr/lib/ignition/base.d"
Nov 23 23:21:25.948201 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 23 23:21:25.937484 ignition[1031]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 23 23:21:25.956729 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 23 23:21:25.937993 ignition[1031]: disks: disks passed
Nov 23 23:21:25.965951 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 23 23:21:25.938030 ignition[1031]: Ignition finished successfully
Nov 23 23:21:25.974896 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 23 23:21:25.984049 systemd[1]: Reached target basic.target - Basic System.
Nov 23 23:21:25.994949 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 23 23:21:26.083713 systemd-fsck[1039]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Nov 23 23:21:26.093606 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 23 23:21:26.100756 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 23 23:21:26.346312 kernel: EXT4-fs (sda9): mounted filesystem fa3f8731-d4e3-4e51-b6db-fa404206cf07 r/w with ordered data mode. Quota mode: none.
Nov 23 23:21:26.347121 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 23 23:21:26.351122 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 23 23:21:26.374145 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 23 23:21:26.381194 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 23 23:21:26.396274 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 23 23:21:26.406807 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 23 23:21:26.406834 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 23 23:21:26.413451 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 23 23:21:26.430541 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 23 23:21:26.450465 systemd-networkd[1008]: eth0: Gained IPv6LL
Nov 23 23:21:26.460314 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1054)
Nov 23 23:21:26.470480 kernel: BTRFS info (device sda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3
Nov 23 23:21:26.470504 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 23 23:21:26.481468 kernel: BTRFS info (device sda6): turning on async discard
Nov 23 23:21:26.481501 kernel: BTRFS info (device sda6): enabling free space tree
Nov 23 23:21:26.482758 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 23 23:21:26.900664 coreos-metadata[1056]: Nov 23 23:21:26.900 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Nov 23 23:21:26.909821 coreos-metadata[1056]: Nov 23 23:21:26.909 INFO Fetch successful
Nov 23 23:21:26.914515 coreos-metadata[1056]: Nov 23 23:21:26.909 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Nov 23 23:21:26.923794 coreos-metadata[1056]: Nov 23 23:21:26.918 INFO Fetch successful
Nov 23 23:21:26.933368 coreos-metadata[1056]: Nov 23 23:21:26.933 INFO wrote hostname ci-4459.2.1-a-2a92a9cf5f to /sysroot/etc/hostname
Nov 23 23:21:26.941524 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 23 23:21:27.242022 initrd-setup-root[1086]: cut: /sysroot/etc/passwd: No such file or directory
Nov 23 23:21:27.279800 initrd-setup-root[1093]: cut: /sysroot/etc/group: No such file or directory
Nov 23 23:21:27.298310 initrd-setup-root[1100]: cut: /sysroot/etc/shadow: No such file or directory
Nov 23 23:21:27.303317 initrd-setup-root[1107]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 23 23:21:28.282672 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 23 23:21:28.288009 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 23 23:21:28.308835 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 23 23:21:28.319803 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 23 23:21:28.328955 kernel: BTRFS info (device sda6): last unmount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3
Nov 23 23:21:28.353342 ignition[1174]: INFO : Ignition 2.22.0
Nov 23 23:21:28.353342 ignition[1174]: INFO : Stage: mount
Nov 23 23:21:28.353342 ignition[1174]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 23 23:21:28.353342 ignition[1174]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 23 23:21:28.351821 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 23 23:21:28.385322 ignition[1174]: INFO : mount: mount passed
Nov 23 23:21:28.385322 ignition[1174]: INFO : Ignition finished successfully
Nov 23 23:21:28.358219 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 23 23:21:28.365377 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 23 23:21:28.396384 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 23 23:21:28.424313 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1186)
Nov 23 23:21:28.433991 kernel: BTRFS info (device sda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3
Nov 23 23:21:28.434020 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 23 23:21:28.443524 kernel: BTRFS info (device sda6): turning on async discard
Nov 23 23:21:28.443538 kernel: BTRFS info (device sda6): enabling free space tree
Nov 23 23:21:28.444853 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 23 23:21:28.473622 ignition[1204]: INFO : Ignition 2.22.0
Nov 23 23:21:28.473622 ignition[1204]: INFO : Stage: files
Nov 23 23:21:28.479408 ignition[1204]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 23 23:21:28.479408 ignition[1204]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 23 23:21:28.479408 ignition[1204]: DEBUG : files: compiled without relabeling support, skipping
Nov 23 23:21:28.507134 ignition[1204]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 23 23:21:28.507134 ignition[1204]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 23 23:21:28.579843 ignition[1204]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 23 23:21:28.585333 ignition[1204]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 23 23:21:28.585333 ignition[1204]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 23 23:21:28.580145 unknown[1204]: wrote ssh authorized keys file for user: core
Nov 23 23:21:28.626677 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 23 23:21:28.634265 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Nov 23 23:21:28.659043 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 23 23:21:28.781292 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 23 23:21:28.781292 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 23 23:21:28.781292 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 23 23:21:28.781292 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 23 23:21:28.781292 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 23 23:21:28.781292 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 23 23:21:28.781292 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 23 23:21:28.781292 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 23 23:21:28.781292 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 23 23:21:28.845032 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 23 23:21:28.845032 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 23 23:21:28.845032 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 23 23:21:28.845032 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 23 23:21:28.845032 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 23 23:21:28.845032 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Nov 23 23:21:29.329533 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 23 23:21:29.601273 ignition[1204]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 23 23:21:29.601273 ignition[1204]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 23 23:21:29.617000 ignition[1204]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 23 23:21:29.630838 ignition[1204]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 23 23:21:29.630838 ignition[1204]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 23 23:21:29.630838 ignition[1204]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 23 23:21:29.649759 ignition[1204]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 23 23:21:29.649759 ignition[1204]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 23 23:21:29.649759 ignition[1204]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 23 23:21:29.649759 ignition[1204]: INFO : files: files passed
Nov 23 23:21:29.649759 ignition[1204]: INFO : Ignition finished successfully
Nov 23 23:21:29.644367 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 23 23:21:29.655115 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 23 23:21:29.680736 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 23 23:21:29.694123 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 23 23:21:29.694476 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 23 23:21:29.717659 initrd-setup-root-after-ignition[1233]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 23 23:21:29.717659 initrd-setup-root-after-ignition[1233]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 23 23:21:29.735391 initrd-setup-root-after-ignition[1237]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 23 23:21:29.719051 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 23 23:21:29.729264 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 23 23:21:29.740226 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 23 23:21:29.777658 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 23 23:21:29.777740 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 23 23:21:29.786605 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 23 23:21:29.795404 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 23 23:21:29.803332 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 23 23:21:29.803817 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 23 23:21:29.835749 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 23 23:21:29.841894 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 23 23:21:29.870082 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 23 23:21:29.875064 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 23 23:21:29.884205 systemd[1]: Stopped target timers.target - Timer Units.
Nov 23 23:21:29.892555 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 23 23:21:29.892635 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 23 23:21:29.904417 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 23 23:21:29.913159 systemd[1]: Stopped target basic.target - Basic System.
Nov 23 23:21:29.920624 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 23 23:21:29.928779 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 23 23:21:29.937832 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 23 23:21:29.947155 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 23 23:21:29.956094 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 23 23:21:29.964260 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 23 23:21:29.973087 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 23 23:21:29.981914 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 23 23:21:29.990444 systemd[1]: Stopped target swap.target - Swaps.
Nov 23 23:21:29.997321 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 23 23:21:29.997422 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 23 23:21:30.008457 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 23 23:21:30.013082 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 23 23:21:30.022078 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 23 23:21:30.022142 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 23 23:21:30.031155 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 23 23:21:30.031233 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 23 23:21:30.043909 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 23 23:21:30.044000 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 23 23:21:30.054844 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 23 23:21:30.054916 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 23 23:21:30.063913 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 23 23:21:30.063976 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 23 23:21:30.075790 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 23 23:21:30.106435 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 23 23:21:30.123976 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 23 23:21:30.124083 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 23 23:21:30.150672 ignition[1257]: INFO : Ignition 2.22.0
Nov 23 23:21:30.150672 ignition[1257]: INFO : Stage: umount
Nov 23 23:21:30.150672 ignition[1257]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 23 23:21:30.150672 ignition[1257]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 23 23:21:30.150672 ignition[1257]: INFO : umount: umount passed
Nov 23 23:21:30.150672 ignition[1257]: INFO : Ignition finished successfully
Nov 23 23:21:30.133261 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 23 23:21:30.134253 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 23 23:21:30.151291 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 23 23:21:30.151382 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 23 23:21:30.159964 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 23 23:21:30.160146 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 23 23:21:30.167600 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 23 23:21:30.167637 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 23 23:21:30.176549 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 23 23:21:30.176579 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 23 23:21:30.180977 systemd[1]: Stopped target network.target - Network.
Nov 23 23:21:30.191846 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 23 23:21:30.191899 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 23 23:21:30.200984 systemd[1]: Stopped target paths.target - Path Units.
Nov 23 23:21:30.209942 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 23 23:21:30.213313 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 23 23:21:30.219862 systemd[1]: Stopped target slices.target - Slice Units.
Nov 23 23:21:30.224443 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 23 23:21:30.233498 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 23 23:21:30.233542 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 23 23:21:30.242004 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 23 23:21:30.242052 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 23 23:21:30.249922 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 23 23:21:30.249970 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 23 23:21:30.258307 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 23 23:21:30.258340 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 23 23:21:30.267139 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 23 23:21:30.275757 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 23 23:21:30.290091 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 23 23:21:30.294032 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 23 23:21:30.294109 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 23 23:21:30.307749 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Nov 23 23:21:30.307930 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 23 23:21:30.311873 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 23 23:21:30.321136 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Nov 23 23:21:30.484269 kernel: hv_netvsc 000d3af6-002b-000d-3af6-002b000d3af6 eth0: Data path switched from VF: enP39778s1
Nov 23 23:21:30.321328 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 23 23:21:30.321398 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 23 23:21:30.332171 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 23 23:21:30.341073 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 23 23:21:30.341110 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 23 23:21:30.354452 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 23 23:21:30.367506 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 23 23:21:30.367558 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 23 23:21:30.376021 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 23 23:21:30.376062 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 23 23:21:30.384342 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 23 23:21:30.384375 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 23 23:21:30.388983 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 23 23:21:30.389010 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 23 23:21:30.401494 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 23 23:21:30.410141 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 23 23:21:30.410193 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 23 23:21:30.427182 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 23 23:21:30.427485 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 23 23:21:30.435889 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 23 23:21:30.435925 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 23 23:21:30.443925 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 23 23:21:30.443945 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 23 23:21:30.452606 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 23 23:21:30.452637 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 23 23:21:30.465429 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 23 23:21:30.471602 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 23 23:21:30.484332 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 23 23:21:30.484396 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 23 23:21:30.494041 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 23 23:21:30.509416 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 23 23:21:30.509473 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 23 23:21:30.522989 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 23 23:21:30.523030 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 23 23:21:30.536776 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 23 23:21:30.536817 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 23 23:21:30.547478 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 23 23:21:30.547512 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 23 23:21:30.557114 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 23:21:30.557150 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:21:30.571126 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Nov 23 23:21:30.571166 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Nov 23 23:21:30.571188 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 23 23:21:30.571214 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 23 23:21:30.571472 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 23 23:21:30.571551 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 23 23:21:30.578160 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 23 23:21:30.578223 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 23 23:21:30.605382 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 23 23:21:30.607326 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 23 23:21:30.613917 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 23 23:21:30.621447 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 23 23:21:30.621495 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 23 23:21:30.631228 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 23 23:21:30.652935 systemd[1]: Switching root.
Nov 23 23:21:30.998072 systemd-journald[225]: Journal stopped
Nov 23 23:21:35.395958 systemd-journald[225]: Received SIGTERM from PID 1 (systemd).
Nov 23 23:21:35.395976 kernel: SELinux: policy capability network_peer_controls=1
Nov 23 23:21:35.395984 kernel: SELinux: policy capability open_perms=1
Nov 23 23:21:35.395990 kernel: SELinux: policy capability extended_socket_class=1
Nov 23 23:21:35.395997 kernel: SELinux: policy capability always_check_network=0
Nov 23 23:21:35.396002 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 23 23:21:35.396008 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 23 23:21:35.396013 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 23 23:21:35.396018 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 23 23:21:35.396023 kernel: SELinux: policy capability userspace_initial_context=0
Nov 23 23:21:35.396029 kernel: audit: type=1403 audit(1763940091.856:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 23 23:21:35.396035 systemd[1]: Successfully loaded SELinux policy in 167.145ms.
Nov 23 23:21:35.396042 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.283ms.
Nov 23 23:21:35.396048 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 23 23:21:35.396055 systemd[1]: Detected virtualization microsoft.
Nov 23 23:21:35.396062 systemd[1]: Detected architecture arm64.
Nov 23 23:21:35.396067 systemd[1]: Detected first boot.
Nov 23 23:21:35.396073 systemd[1]: Hostname set to .
Nov 23 23:21:35.396079 systemd[1]: Initializing machine ID from random generator.
Nov 23 23:21:35.396085 zram_generator::config[1299]: No configuration found.
Nov 23 23:21:35.396091 kernel: NET: Registered PF_VSOCK protocol family
Nov 23 23:21:35.396097 systemd[1]: Populated /etc with preset unit settings.
Nov 23 23:21:35.396103 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Nov 23 23:21:35.396110 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 23 23:21:35.396116 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 23 23:21:35.396122 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 23 23:21:35.396128 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 23 23:21:35.396134 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 23 23:21:35.396140 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 23 23:21:35.396146 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 23 23:21:35.396153 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 23 23:21:35.396159 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 23 23:21:35.396165 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 23 23:21:35.396171 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 23 23:21:35.396177 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 23 23:21:35.396183 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 23 23:21:35.396189 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 23 23:21:35.396195 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 23 23:21:35.396202 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 23 23:21:35.396208 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 23 23:21:35.396216 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Nov 23 23:21:35.396222 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 23 23:21:35.396228 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 23 23:21:35.396234 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 23 23:21:35.396240 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 23 23:21:35.396246 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 23 23:21:35.396254 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 23 23:21:35.396260 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 23 23:21:35.396266 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 23 23:21:35.396272 systemd[1]: Reached target slices.target - Slice Units.
Nov 23 23:21:35.396278 systemd[1]: Reached target swap.target - Swaps.
Nov 23 23:21:35.396284 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 23 23:21:35.396290 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 23 23:21:35.396308 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 23 23:21:35.396314 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 23 23:21:35.396321 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 23 23:21:35.396327 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 23 23:21:35.396333 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 23 23:21:35.396339 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 23 23:21:35.396346 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 23 23:21:35.396352 systemd[1]: Mounting media.mount - External Media Directory...
Nov 23 23:21:35.396358 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 23 23:21:35.396365 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 23 23:21:35.396371 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 23 23:21:35.396377 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 23 23:21:35.396383 systemd[1]: Reached target machines.target - Containers.
Nov 23 23:21:35.396389 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 23 23:21:35.396397 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 23 23:21:35.396403 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 23 23:21:35.396410 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 23 23:21:35.396416 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 23 23:21:35.396422 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 23 23:21:35.396428 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 23 23:21:35.396434 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 23 23:21:35.396440 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 23 23:21:35.396448 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 23 23:21:35.396454 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 23 23:21:35.396460 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 23 23:21:35.396466 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 23 23:21:35.396472 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 23 23:21:35.396479 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 23 23:21:35.396485 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 23 23:21:35.396491 kernel: fuse: init (API version 7.41)
Nov 23 23:21:35.396497 kernel: loop: module loaded
Nov 23 23:21:35.396503 kernel: ACPI: bus type drm_connector registered
Nov 23 23:21:35.396509 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 23 23:21:35.396515 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 23 23:21:35.396532 systemd-journald[1396]: Collecting audit messages is disabled.
Nov 23 23:21:35.396547 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 23 23:21:35.396554 systemd-journald[1396]: Journal started
Nov 23 23:21:35.396568 systemd-journald[1396]: Runtime Journal (/run/log/journal/2781e4b06a5d49c79728a68c957bc830) is 8M, max 78.3M, 70.3M free.
Nov 23 23:21:34.642565 systemd[1]: Queued start job for default target multi-user.target.
Nov 23 23:21:34.646668 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 23 23:21:34.647009 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 23 23:21:34.648439 systemd[1]: systemd-journald.service: Consumed 2.447s CPU time.
Nov 23 23:21:35.417429 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 23 23:21:35.431075 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 23 23:21:35.437888 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 23 23:21:35.437917 systemd[1]: Stopped verity-setup.service.
Nov 23 23:21:35.450914 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 23 23:21:35.451591 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 23 23:21:35.456074 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 23 23:21:35.460986 systemd[1]: Mounted media.mount - External Media Directory.
Nov 23 23:21:35.465158 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 23 23:21:35.469586 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 23 23:21:35.474273 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 23 23:21:35.478677 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 23 23:21:35.486318 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 23 23:21:35.491836 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 23 23:21:35.492020 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 23 23:21:35.496824 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 23 23:21:35.497008 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 23 23:21:35.501635 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 23 23:21:35.501803 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 23 23:21:35.506598 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 23 23:21:35.506777 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 23 23:21:35.511965 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 23 23:21:35.512144 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 23 23:21:35.516787 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 23 23:21:35.516895 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 23 23:21:35.521630 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 23 23:21:35.526947 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 23 23:21:35.533278 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 23 23:21:35.548420 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 23 23:21:35.572585 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 23 23:21:35.585364 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 23 23:21:35.590000 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 23 23:21:35.590024 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 23 23:21:35.594652 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 23 23:21:35.603925 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 23 23:21:35.608461 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 23 23:21:35.611406 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 23 23:21:35.624010 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 23 23:21:35.629436 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 23 23:21:35.631408 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 23 23:21:35.637494 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 23 23:21:35.639142 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 23 23:21:35.646363 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 23 23:21:35.654139 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 23 23:21:35.665607 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 23 23:21:35.672802 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 23 23:21:35.679762 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 23 23:21:35.684624 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 23 23:21:35.689697 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 23 23:21:35.698854 systemd-journald[1396]: Time spent on flushing to /var/log/journal/2781e4b06a5d49c79728a68c957bc830 is 11.087ms for 940 entries.
Nov 23 23:21:35.698854 systemd-journald[1396]: System Journal (/var/log/journal/2781e4b06a5d49c79728a68c957bc830) is 8M, max 2.6G, 2.6G free.
Nov 23 23:21:35.755074 kernel: loop0: detected capacity change from 0 to 207008
Nov 23 23:21:35.755122 systemd-journald[1396]: Received client request to flush runtime journal.
Nov 23 23:21:35.699564 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 23 23:21:35.709821 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 23 23:21:35.716314 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 23 23:21:35.742055 systemd-tmpfiles[1437]: ACLs are not supported, ignoring.
Nov 23 23:21:35.742062 systemd-tmpfiles[1437]: ACLs are not supported, ignoring.
Nov 23 23:21:35.744604 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 23 23:21:35.751239 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 23 23:21:35.762766 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 23 23:21:35.781472 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 23 23:21:35.781925 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 23 23:21:35.787930 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 23 23:21:35.841319 kernel: loop1: detected capacity change from 0 to 27936
Nov 23 23:21:35.891049 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 23 23:21:35.897577 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 23 23:21:35.915811 systemd-tmpfiles[1458]: ACLs are not supported, ignoring.
Nov 23 23:21:35.916023 systemd-tmpfiles[1458]: ACLs are not supported, ignoring.
Nov 23 23:21:35.918149 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 23 23:21:36.321362 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 23 23:21:36.327647 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 23 23:21:36.353493 systemd-udevd[1463]: Using default interface naming scheme 'v255'.
Nov 23 23:21:36.406311 kernel: loop2: detected capacity change from 0 to 100632
Nov 23 23:21:36.553659 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 23 23:21:36.562966 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 23 23:21:36.618253 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 23 23:21:36.645007 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Nov 23 23:21:36.679932 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 23 23:21:36.719918 kernel: mousedev: PS/2 mouse device common for all mice
Nov 23 23:21:36.719972 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#212 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Nov 23 23:21:36.773124 kernel: hv_vmbus: registering driver hv_balloon
Nov 23 23:21:36.773176 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Nov 23 23:21:36.777969 kernel: hv_balloon: Memory hot add disabled on ARM64
Nov 23 23:21:36.808214 kernel: hv_vmbus: registering driver hyperv_fb
Nov 23 23:21:36.808264 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Nov 23 23:21:36.812044 systemd-networkd[1479]: lo: Link UP
Nov 23 23:21:36.813324 systemd-networkd[1479]: lo: Gained carrier
Nov 23 23:21:36.814207 systemd-networkd[1479]: Enumeration completed
Nov 23 23:21:36.814741 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:21:36.814805 systemd-networkd[1479]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 23 23:21:36.816317 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Nov 23 23:21:36.816589 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 23 23:21:36.829079 kernel: Console: switching to colour dummy device 80x25
Nov 23 23:21:36.829128 kernel: Console: switching to colour frame buffer device 128x48
Nov 23 23:21:36.837214 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 23 23:21:36.846808 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 23 23:21:36.855311 kernel: loop3: detected capacity change from 0 to 119840
Nov 23 23:21:36.857446 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:21:36.873035 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 23:21:36.873326 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:21:36.880598 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Nov 23 23:21:36.881400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:21:36.902315 kernel: mlx5_core 9b62:00:02.0 enP39778s1: Link up
Nov 23 23:21:36.924308 kernel: hv_netvsc 000d3af6-002b-000d-3af6-002b000d3af6 eth0: Data path switched to VF: enP39778s1
Nov 23 23:21:36.925136 systemd-networkd[1479]: enP39778s1: Link UP
Nov 23 23:21:36.925376 systemd-networkd[1479]: eth0: Link UP
Nov 23 23:21:36.925431 systemd-networkd[1479]: eth0: Gained carrier
Nov 23 23:21:36.925483 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:21:36.926608 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 23 23:21:36.933523 systemd-networkd[1479]: enP39778s1: Gained carrier
Nov 23 23:21:36.950363 systemd-networkd[1479]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16
Nov 23 23:21:37.006116 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Nov 23 23:21:37.013316 kernel: MACsec IEEE 802.1AE
Nov 23 23:21:37.020235 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 23 23:21:37.067151 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 23 23:21:37.213315 kernel: loop4: detected capacity change from 0 to 207008
Nov 23 23:21:37.229355 kernel: loop5: detected capacity change from 0 to 27936
Nov 23 23:21:37.241452 kernel: loop6: detected capacity change from 0 to 100632
Nov 23 23:21:37.253338 kernel: loop7: detected capacity change from 0 to 119840
Nov 23 23:21:37.268635 (sd-merge)[1608]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Nov 23 23:21:37.268974 (sd-merge)[1608]: Merged extensions into '/usr'.
Nov 23 23:21:37.271371 systemd[1]: Reload requested from client PID 1436 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 23 23:21:37.271470 systemd[1]: Reloading...
Nov 23 23:21:37.331475 zram_generator::config[1639]: No configuration found.
Nov 23 23:21:37.490121 systemd[1]: Reloading finished in 218 ms.
Nov 23 23:21:37.510237 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:21:37.515649 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 23 23:21:37.533113 systemd[1]: Starting ensure-sysext.service...
Nov 23 23:21:37.539178 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 23 23:21:37.550370 systemd[1]: Reload requested from client PID 1695 ('systemctl') (unit ensure-sysext.service)...
Nov 23 23:21:37.550382 systemd[1]: Reloading...
Nov 23 23:21:37.601439 zram_generator::config[1723]: No configuration found.
Nov 23 23:21:37.608519 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 23 23:21:37.609082 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 23 23:21:37.609288 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 23 23:21:37.609575 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 23 23:21:37.610092 systemd-tmpfiles[1696]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 23 23:21:37.610372 systemd-tmpfiles[1696]: ACLs are not supported, ignoring.
Nov 23 23:21:37.610683 systemd-tmpfiles[1696]: ACLs are not supported, ignoring.
Nov 23 23:21:37.638238 systemd-tmpfiles[1696]: Detected autofs mount point /boot during canonicalization of boot.
Nov 23 23:21:37.638353 systemd-tmpfiles[1696]: Skipping /boot
Nov 23 23:21:37.644881 systemd-tmpfiles[1696]: Detected autofs mount point /boot during canonicalization of boot.
Nov 23 23:21:37.644893 systemd-tmpfiles[1696]: Skipping /boot
Nov 23 23:21:37.755697 systemd[1]: Reloading finished in 205 ms.
Nov 23 23:21:37.774023 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 23 23:21:37.785290 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 23 23:21:37.800463 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 23 23:21:37.806577 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 23 23:21:37.816559 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 23 23:21:37.823458 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 23 23:21:37.831817 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 23 23:21:37.833554 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 23 23:21:37.842443 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 23 23:21:37.850185 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 23 23:21:37.854714 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:21:37.854800 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:21:37.856914 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 23:21:37.858328 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 23:21:37.864181 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 23:21:37.864309 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 23:21:37.869975 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 23:21:37.870088 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 23:21:37.877512 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:21:37.880497 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 23:21:37.887876 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 23:21:37.898332 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 23:21:37.903753 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:21:37.903882 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:21:37.906355 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Nov 23 23:21:37.912121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 23:21:37.912241 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 23:21:37.917214 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 23:21:37.917332 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 23:21:37.923396 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 23:21:37.923519 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 23:21:37.930000 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 23 23:21:37.939545 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:21:37.940443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 23:21:37.945251 systemd-resolved[1788]: Positive Trust Anchors: Nov 23 23:21:37.945710 systemd-resolved[1788]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 23:21:37.945789 systemd-resolved[1788]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 23:21:37.946701 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 23:21:37.953050 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 23:21:37.959456 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Nov 23 23:21:37.963811 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:21:37.964007 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:21:37.964240 systemd[1]: Reached target time-set.target - System Time Set. Nov 23 23:21:37.969555 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 23:21:37.969773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 23:21:37.973624 systemd-resolved[1788]: Using system hostname 'ci-4459.2.1-a-2a92a9cf5f'. Nov 23 23:21:37.975063 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 23 23:21:37.979866 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 23:21:37.980074 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 23:21:37.984833 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 23:21:37.985034 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 23:21:37.990709 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 23:21:37.990968 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 23:21:37.997367 systemd[1]: Finished ensure-sysext.service. Nov 23 23:21:38.004975 systemd[1]: Reached target network.target - Network. Nov 23 23:21:38.008820 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 23 23:21:38.013479 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 23:21:38.013529 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Nov 23 23:21:38.021889 augenrules[1832]: No rules Nov 23 23:21:38.022915 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 23:21:38.023094 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 23:21:38.448890 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 23 23:21:38.454372 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 23 23:21:38.866420 systemd-networkd[1479]: eth0: Gained IPv6LL Nov 23 23:21:38.868536 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 23 23:21:38.874233 systemd[1]: Reached target network-online.target - Network is Online. Nov 23 23:21:41.070778 ldconfig[1431]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 23 23:21:41.081236 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 23 23:21:41.089773 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 23 23:21:41.101240 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 23 23:21:41.105963 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 23:21:41.110496 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 23 23:21:41.115592 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 23 23:21:41.120755 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 23 23:21:41.125288 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 23 23:21:41.130209 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Nov 23 23:21:41.135210 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 23 23:21:41.135231 systemd[1]: Reached target paths.target - Path Units. Nov 23 23:21:41.138876 systemd[1]: Reached target timers.target - Timer Units. Nov 23 23:21:41.146541 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 23 23:21:41.152001 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 23 23:21:41.157208 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 23 23:21:41.162547 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 23 23:21:41.167648 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 23 23:21:41.173341 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 23 23:21:41.177870 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 23 23:21:41.182968 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 23 23:21:41.187201 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 23:21:41.190873 systemd[1]: Reached target basic.target - Basic System. Nov 23 23:21:41.194629 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 23 23:21:41.194652 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 23 23:21:41.196342 systemd[1]: Starting chronyd.service - NTP client/server... Nov 23 23:21:41.209382 systemd[1]: Starting containerd.service - containerd container runtime... Nov 23 23:21:41.222481 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 23 23:21:41.230530 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Nov 23 23:21:41.241409 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 23 23:21:41.248281 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 23 23:21:41.254514 chronyd[1845]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Nov 23 23:21:41.254851 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 23 23:21:41.259395 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 23 23:21:41.261602 jq[1853]: false Nov 23 23:21:41.261673 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Nov 23 23:21:41.265757 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Nov 23 23:21:41.266410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:21:41.267424 KVP[1855]: KVP starting; pid is:1855 Nov 23 23:21:41.273048 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 23 23:21:41.274576 chronyd[1845]: Timezone right/UTC failed leap second check, ignoring Nov 23 23:21:41.274785 chronyd[1845]: Loaded seccomp filter (level 2) Nov 23 23:21:41.275151 KVP[1855]: KVP LIC Version: 3.1 Nov 23 23:21:41.275359 kernel: hv_utils: KVP IC version 4.0 Nov 23 23:21:41.282238 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 23 23:21:41.294371 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 23 23:21:41.299007 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Nov 23 23:21:41.306203 extend-filesystems[1854]: Found /dev/sda6 Nov 23 23:21:41.315135 extend-filesystems[1854]: Found /dev/sda9 Nov 23 23:21:41.315135 extend-filesystems[1854]: Checking size of /dev/sda9 Nov 23 23:21:41.312456 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 23 23:21:41.335373 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 23 23:21:41.342444 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 23 23:21:41.342902 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 23 23:21:41.343397 systemd[1]: Starting update-engine.service - Update Engine... Nov 23 23:21:41.349389 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 23 23:21:41.354768 systemd[1]: Started chronyd.service - NTP client/server. Nov 23 23:21:41.363364 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 23 23:21:41.363684 extend-filesystems[1854]: Old size kept for /dev/sda9 Nov 23 23:21:41.374650 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 23 23:21:41.381607 jq[1881]: true Nov 23 23:21:41.374792 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 23 23:21:41.374994 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 23 23:21:41.375119 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 23 23:21:41.390580 systemd[1]: motdgen.service: Deactivated successfully. Nov 23 23:21:41.390728 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 23 23:21:41.395687 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 23 23:21:41.403151 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Nov 23 23:21:41.404360 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 23 23:21:41.418226 update_engine[1879]: I20251123 23:21:41.418160 1879 main.cc:92] Flatcar Update Engine starting Nov 23 23:21:41.424770 (ntainerd)[1900]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 23 23:21:41.431863 jq[1899]: true Nov 23 23:21:41.439377 systemd-logind[1875]: New seat seat0. Nov 23 23:21:41.442785 systemd-logind[1875]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Nov 23 23:21:41.442928 systemd[1]: Started systemd-logind.service - User Login Management. Nov 23 23:21:41.497902 tar[1896]: linux-arm64/LICENSE Nov 23 23:21:41.498109 tar[1896]: linux-arm64/helm Nov 23 23:21:41.520091 dbus-daemon[1848]: [system] SELinux support is enabled Nov 23 23:21:41.520215 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 23 23:21:41.528771 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 23 23:21:41.528802 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 23 23:21:41.537869 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 23 23:21:41.537917 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 23 23:21:41.545213 update_engine[1879]: I20251123 23:21:41.545168 1879 update_check_scheduler.cc:74] Next update check in 10m19s Nov 23 23:21:41.550039 systemd[1]: Started update-engine.service - Update Engine. 
Nov 23 23:21:41.554177 dbus-daemon[1848]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 23 23:21:41.562148 bash[1942]: Updated "/home/core/.ssh/authorized_keys" Nov 23 23:21:41.563345 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 23 23:21:41.571729 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 23 23:21:41.582541 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 23 23:21:41.614357 coreos-metadata[1847]: Nov 23 23:21:41.614 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 23 23:21:41.626303 coreos-metadata[1847]: Nov 23 23:21:41.625 INFO Fetch successful Nov 23 23:21:41.626303 coreos-metadata[1847]: Nov 23 23:21:41.625 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Nov 23 23:21:41.629970 coreos-metadata[1847]: Nov 23 23:21:41.629 INFO Fetch successful Nov 23 23:21:41.630597 coreos-metadata[1847]: Nov 23 23:21:41.630 INFO Fetching http://168.63.129.16/machine/740f3c16-02c4-4590-b379-5a174bb3698b/bfa3fffc%2D2a1b%2D4584%2D98a9%2Df0416718b0c3.%5Fci%2D4459.2.1%2Da%2D2a92a9cf5f?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Nov 23 23:21:41.663687 coreos-metadata[1847]: Nov 23 23:21:41.663 INFO Fetch successful Nov 23 23:21:41.663906 coreos-metadata[1847]: Nov 23 23:21:41.663 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Nov 23 23:21:41.673466 coreos-metadata[1847]: Nov 23 23:21:41.673 INFO Fetch successful Nov 23 23:21:41.712392 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 23 23:21:41.719582 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Nov 23 23:21:41.878380 locksmithd[1982]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 23 23:21:42.020055 sshd_keygen[1887]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 23 23:21:42.024934 tar[1896]: linux-arm64/README.md Nov 23 23:21:42.039702 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 23 23:21:42.047362 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 23 23:21:42.054629 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 23 23:21:42.065220 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Nov 23 23:21:42.072391 systemd[1]: issuegen.service: Deactivated successfully. Nov 23 23:21:42.072550 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 23 23:21:42.081674 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 23 23:21:42.093888 containerd[1900]: time="2025-11-23T23:21:42Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 23 23:21:42.094402 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Nov 23 23:21:42.095160 containerd[1900]: time="2025-11-23T23:21:42.095135600Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 23 23:21:42.104669 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Nov 23 23:21:42.109108 containerd[1900]: time="2025-11-23T23:21:42.109061816Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.536µs" Nov 23 23:21:42.109108 containerd[1900]: time="2025-11-23T23:21:42.109084856Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 23 23:21:42.109108 containerd[1900]: time="2025-11-23T23:21:42.109097752Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 23 23:21:42.109235 containerd[1900]: time="2025-11-23T23:21:42.109216712Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 23 23:21:42.109235 containerd[1900]: time="2025-11-23T23:21:42.109232152Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 23 23:21:42.109267 containerd[1900]: time="2025-11-23T23:21:42.109249408Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 23:21:42.109312 containerd[1900]: time="2025-11-23T23:21:42.109288432Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 23:21:42.109336 containerd[1900]: time="2025-11-23T23:21:42.109310816Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 23:21:42.109486 containerd[1900]: time="2025-11-23T23:21:42.109466760Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 23 23:21:42.109486 containerd[1900]: time="2025-11-23T23:21:42.109482328Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 23:21:42.109535 containerd[1900]: time="2025-11-23T23:21:42.109490304Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 23:21:42.109535 containerd[1900]: time="2025-11-23T23:21:42.109496512Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 23 23:21:42.109564 containerd[1900]: time="2025-11-23T23:21:42.109555216Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 23 23:21:42.109768 containerd[1900]: time="2025-11-23T23:21:42.109690848Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 23:21:42.109768 containerd[1900]: time="2025-11-23T23:21:42.109713352Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 23:21:42.109768 containerd[1900]: time="2025-11-23T23:21:42.109720056Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 23 23:21:42.109768 containerd[1900]: time="2025-11-23T23:21:42.109740840Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 23 23:21:42.110317 containerd[1900]: time="2025-11-23T23:21:42.109873424Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 23 23:21:42.110317 containerd[1900]: time="2025-11-23T23:21:42.109923672Z" level=info msg="metadata content store policy set" policy=shared Nov 23 23:21:42.116240 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 23 23:21:42.123593 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 23 23:21:42.130467 systemd[1]: Reached target getty.target - Login Prompts. Nov 23 23:21:42.165069 containerd[1900]: time="2025-11-23T23:21:42.164985432Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 23 23:21:42.165069 containerd[1900]: time="2025-11-23T23:21:42.165035352Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 23 23:21:42.165069 containerd[1900]: time="2025-11-23T23:21:42.165045952Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 23 23:21:42.165069 containerd[1900]: time="2025-11-23T23:21:42.165053688Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 23 23:21:42.165069 containerd[1900]: time="2025-11-23T23:21:42.165062888Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 23 23:21:42.165069 containerd[1900]: time="2025-11-23T23:21:42.165070496Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 23 23:21:42.165069 containerd[1900]: time="2025-11-23T23:21:42.165078976Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 23 23:21:42.165207 containerd[1900]: time="2025-11-23T23:21:42.165086496Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 23 23:21:42.165207 containerd[1900]: time="2025-11-23T23:21:42.165093688Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 23 23:21:42.165207 containerd[1900]: time="2025-11-23T23:21:42.165099880Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 23 23:21:42.165207 containerd[1900]: time="2025-11-23T23:21:42.165105184Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 23 23:21:42.165207 containerd[1900]: time="2025-11-23T23:21:42.165112696Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 23 23:21:42.165267 containerd[1900]: time="2025-11-23T23:21:42.165217624Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 23 23:21:42.165267 containerd[1900]: time="2025-11-23T23:21:42.165235408Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 23 23:21:42.165267 containerd[1900]: time="2025-11-23T23:21:42.165244744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 23 23:21:42.165267 containerd[1900]: time="2025-11-23T23:21:42.165255088Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 23 23:21:42.165267 containerd[1900]: time="2025-11-23T23:21:42.165262328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 23 23:21:42.165344 containerd[1900]: time="2025-11-23T23:21:42.165269136Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 23 23:21:42.165344 containerd[1900]: time="2025-11-23T23:21:42.165276632Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 23 23:21:42.165344 containerd[1900]: time="2025-11-23T23:21:42.165282856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 23 23:21:42.165344 containerd[1900]: time="2025-11-23T23:21:42.165290064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 23 23:21:42.165399 containerd[1900]: time="2025-11-23T23:21:42.165344536Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 23 23:21:42.165399 containerd[1900]: time="2025-11-23T23:21:42.165357696Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 23 23:21:42.165425 containerd[1900]: time="2025-11-23T23:21:42.165404280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 23 23:21:42.165425 containerd[1900]: time="2025-11-23T23:21:42.165415320Z" level=info msg="Start snapshots syncer" Nov 23 23:21:42.165449 containerd[1900]: time="2025-11-23T23:21:42.165430688Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 23 23:21:42.166012 containerd[1900]: time="2025-11-23T23:21:42.165620040Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 23 23:21:42.166012 containerd[1900]: time="2025-11-23T23:21:42.165663368Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 23 23:21:42.166662 containerd[1900]: time="2025-11-23T23:21:42.165694256Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 23 23:21:42.166662 containerd[1900]: time="2025-11-23T23:21:42.165793944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 23 23:21:42.166662 containerd[1900]: time="2025-11-23T23:21:42.165808968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 23 23:21:42.166662 containerd[1900]: time="2025-11-23T23:21:42.165815952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 23 23:21:42.166662 containerd[1900]: time="2025-11-23T23:21:42.165829824Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 23 23:21:42.166662 containerd[1900]: time="2025-11-23T23:21:42.165837520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 23 23:21:42.166662 containerd[1900]: time="2025-11-23T23:21:42.165844400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 23 23:21:42.166662 containerd[1900]: time="2025-11-23T23:21:42.165860552Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 23 23:21:42.166662 containerd[1900]: time="2025-11-23T23:21:42.165879496Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 23 23:21:42.166662 containerd[1900]: time="2025-11-23T23:21:42.165888144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 23 23:21:42.166662 containerd[1900]: time="2025-11-23T23:21:42.165895104Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 23 23:21:42.166662 containerd[1900]: time="2025-11-23T23:21:42.165914592Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 23:21:42.166662 containerd[1900]: time="2025-11-23T23:21:42.165924592Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 23:21:42.166662 containerd[1900]: time="2025-11-23T23:21:42.165938272Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 23:21:42.166844 containerd[1900]: time="2025-11-23T23:21:42.165945592Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 23:21:42.166844 containerd[1900]: time="2025-11-23T23:21:42.165951000Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 23 23:21:42.166844 containerd[1900]: time="2025-11-23T23:21:42.165958904Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 23 23:21:42.166844 containerd[1900]: time="2025-11-23T23:21:42.165965984Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 23 23:21:42.166844 containerd[1900]: time="2025-11-23T23:21:42.165977824Z" level=info msg="runtime interface created" Nov 23 23:21:42.166844 containerd[1900]: time="2025-11-23T23:21:42.165983040Z" level=info msg="created NRI interface" Nov 23 23:21:42.166844 containerd[1900]: time="2025-11-23T23:21:42.165988440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 23 23:21:42.166844 containerd[1900]: time="2025-11-23T23:21:42.165995816Z" level=info msg="Connect containerd service" Nov 23 23:21:42.166844 containerd[1900]: time="2025-11-23T23:21:42.166016584Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 23 23:21:42.166844 containerd[1900]: time="2025-11-23T23:21:42.166617240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 23:21:42.247022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:21:42.251994 (kubelet)[2046]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:21:42.451375 containerd[1900]: time="2025-11-23T23:21:42.451084752Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 23 23:21:42.451375 containerd[1900]: time="2025-11-23T23:21:42.451135408Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 23 23:21:42.451375 containerd[1900]: time="2025-11-23T23:21:42.451153144Z" level=info msg="Start subscribing containerd event" Nov 23 23:21:42.451375 containerd[1900]: time="2025-11-23T23:21:42.451191216Z" level=info msg="Start recovering state" Nov 23 23:21:42.451375 containerd[1900]: time="2025-11-23T23:21:42.451254096Z" level=info msg="Start event monitor" Nov 23 23:21:42.451375 containerd[1900]: time="2025-11-23T23:21:42.451263216Z" level=info msg="Start cni network conf syncer for default" Nov 23 23:21:42.451375 containerd[1900]: time="2025-11-23T23:21:42.451268600Z" level=info msg="Start streaming server" Nov 23 23:21:42.451375 containerd[1900]: time="2025-11-23T23:21:42.451274456Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 23 23:21:42.451375 containerd[1900]: time="2025-11-23T23:21:42.451279432Z" level=info msg="runtime interface starting up..." Nov 23 23:21:42.451375 containerd[1900]: time="2025-11-23T23:21:42.451282760Z" level=info msg="starting plugins..." Nov 23 23:21:42.453346 containerd[1900]: time="2025-11-23T23:21:42.452970256Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 23 23:21:42.453346 containerd[1900]: time="2025-11-23T23:21:42.453117384Z" level=info msg="containerd successfully booted in 0.359493s" Nov 23 23:21:42.453262 systemd[1]: Started containerd.service - containerd container runtime. Nov 23 23:21:42.459526 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 23 23:21:42.465417 systemd[1]: Startup finished in 1.625s (kernel) + 11.581s (initrd) + 10.775s (userspace) = 23.982s.
Nov 23 23:21:42.619275 kubelet[2046]: E1123 23:21:42.619215 2046 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 23 23:21:42.620899 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 23:21:42.621002 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 23 23:21:42.621435 systemd[1]: kubelet.service: Consumed 539ms CPU time, 256.4M memory peak.
Nov 23 23:21:42.769613 login[2032]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Nov 23 23:21:42.771115 login[2033]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:21:42.780794 systemd-logind[1875]: New session 1 of user core.
Nov 23 23:21:42.782178 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 23 23:21:42.783045 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 23 23:21:42.799100 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 23 23:21:42.801048 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 23 23:21:42.808782 (systemd)[2063]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 23 23:21:42.810331 systemd-logind[1875]: New session c1 of user core.
Nov 23 23:21:42.929589 systemd[2063]: Queued start job for default target default.target.
Nov 23 23:21:42.936945 systemd[2063]: Created slice app.slice - User Application Slice.
Nov 23 23:21:42.937061 systemd[2063]: Reached target paths.target - Paths.
Nov 23 23:21:42.937095 systemd[2063]: Reached target timers.target - Timers.
Nov 23 23:21:42.938047 systemd[2063]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 23 23:21:42.944418 systemd[2063]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 23 23:21:42.944461 systemd[2063]: Reached target sockets.target - Sockets.
Nov 23 23:21:42.944489 systemd[2063]: Reached target basic.target - Basic System.
Nov 23 23:21:42.944509 systemd[2063]: Reached target default.target - Main User Target.
Nov 23 23:21:42.944527 systemd[2063]: Startup finished in 130ms.
Nov 23 23:21:42.944637 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 23 23:21:42.950433 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 23 23:21:43.691051 waagent[2028]: 2025-11-23T23:21:43.690982Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Nov 23 23:21:43.695386 waagent[2028]: 2025-11-23T23:21:43.695350Z INFO Daemon Daemon OS: flatcar 4459.2.1
Nov 23 23:21:43.698719 waagent[2028]: 2025-11-23T23:21:43.698691Z INFO Daemon Daemon Python: 3.11.13
Nov 23 23:21:43.701941 waagent[2028]: 2025-11-23T23:21:43.701892Z INFO Daemon Daemon Run daemon
Nov 23 23:21:43.704894 waagent[2028]: 2025-11-23T23:21:43.704865Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.1'
Nov 23 23:21:43.711871 waagent[2028]: 2025-11-23T23:21:43.711833Z INFO Daemon Daemon Using waagent for provisioning
Nov 23 23:21:43.715667 waagent[2028]: 2025-11-23T23:21:43.715637Z INFO Daemon Daemon Activate resource disk
Nov 23 23:21:43.718996 waagent[2028]: 2025-11-23T23:21:43.718969Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Nov 23 23:21:43.726975 waagent[2028]: 2025-11-23T23:21:43.726944Z INFO Daemon Daemon Found device: None
Nov 23 23:21:43.730516 waagent[2028]: 2025-11-23T23:21:43.730484Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Nov 23 23:21:43.736736 waagent[2028]: 2025-11-23T23:21:43.736709Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Nov 23 23:21:43.745283 waagent[2028]: 2025-11-23T23:21:43.745248Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Nov 23 23:21:43.749456 waagent[2028]: 2025-11-23T23:21:43.749427Z INFO Daemon Daemon Running default provisioning handler
Nov 23 23:21:43.757676 waagent[2028]: 2025-11-23T23:21:43.757643Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Nov 23 23:21:43.767522 waagent[2028]: 2025-11-23T23:21:43.767487Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Nov 23 23:21:43.769926 login[2032]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:21:43.774587 waagent[2028]: 2025-11-23T23:21:43.774438Z INFO Daemon Daemon cloud-init is enabled: False
Nov 23 23:21:43.778339 waagent[2028]: 2025-11-23T23:21:43.778282Z INFO Daemon Daemon Copying ovf-env.xml
Nov 23 23:21:43.781575 systemd-logind[1875]: New session 2 of user core.
Nov 23 23:21:43.785406 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 23 23:21:43.871322 waagent[2028]: 2025-11-23T23:21:43.869938Z INFO Daemon Daemon Successfully mounted dvd
Nov 23 23:21:43.894725 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Nov 23 23:21:43.896034 waagent[2028]: 2025-11-23T23:21:43.895992Z INFO Daemon Daemon Detect protocol endpoint
Nov 23 23:21:43.899584 waagent[2028]: 2025-11-23T23:21:43.899553Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Nov 23 23:21:43.903736 waagent[2028]: 2025-11-23T23:21:43.903710Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Nov 23 23:21:43.908533 waagent[2028]: 2025-11-23T23:21:43.908507Z INFO Daemon Daemon Test for route to 168.63.129.16
Nov 23 23:21:43.912697 waagent[2028]: 2025-11-23T23:21:43.912670Z INFO Daemon Daemon Route to 168.63.129.16 exists
Nov 23 23:21:43.916488 waagent[2028]: 2025-11-23T23:21:43.916465Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Nov 23 23:21:43.959383 waagent[2028]: 2025-11-23T23:21:43.959326Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Nov 23 23:21:43.964082 waagent[2028]: 2025-11-23T23:21:43.964062Z INFO Daemon Daemon Wire protocol version:2012-11-30
Nov 23 23:21:43.967898 waagent[2028]: 2025-11-23T23:21:43.967874Z INFO Daemon Daemon Server preferred version:2015-04-05
Nov 23 23:21:44.048613 waagent[2028]: 2025-11-23T23:21:44.048554Z INFO Daemon Daemon Initializing goal state during protocol detection
Nov 23 23:21:44.053396 waagent[2028]: 2025-11-23T23:21:44.053364Z INFO Daemon Daemon Forcing an update of the goal state.
Nov 23 23:21:44.060753 waagent[2028]: 2025-11-23T23:21:44.060718Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Nov 23 23:21:44.076474 waagent[2028]: 2025-11-23T23:21:44.076444Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177
Nov 23 23:21:44.080748 waagent[2028]: 2025-11-23T23:21:44.080718Z INFO Daemon
Nov 23 23:21:44.082977 waagent[2028]: 2025-11-23T23:21:44.082950Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: de3754be-5559-4b24-a274-487a263e286a eTag: 6679468027091410933 source: Fabric]
Nov 23 23:21:44.091878 waagent[2028]: 2025-11-23T23:21:44.091848Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Nov 23 23:21:44.096605 waagent[2028]: 2025-11-23T23:21:44.096576Z INFO Daemon
Nov 23 23:21:44.098615 waagent[2028]: 2025-11-23T23:21:44.098589Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Nov 23 23:21:44.109411 waagent[2028]: 2025-11-23T23:21:44.109384Z INFO Daemon Daemon Downloading artifacts profile blob
Nov 23 23:21:44.161992 waagent[2028]: 2025-11-23T23:21:44.161947Z INFO Daemon Downloaded certificate {'thumbprint': '0A947E6EEEC2022CEC7AE9EAA33762831CBC33D2', 'hasPrivateKey': True}
Nov 23 23:21:44.169355 waagent[2028]: 2025-11-23T23:21:44.169322Z INFO Daemon Fetch goal state completed
Nov 23 23:21:44.177954 waagent[2028]: 2025-11-23T23:21:44.177927Z INFO Daemon Daemon Starting provisioning
Nov 23 23:21:44.182035 waagent[2028]: 2025-11-23T23:21:44.182004Z INFO Daemon Daemon Handle ovf-env.xml.
Nov 23 23:21:44.185412 waagent[2028]: 2025-11-23T23:21:44.185389Z INFO Daemon Daemon Set hostname [ci-4459.2.1-a-2a92a9cf5f]
Nov 23 23:21:44.191345 waagent[2028]: 2025-11-23T23:21:44.191311Z INFO Daemon Daemon Publish hostname [ci-4459.2.1-a-2a92a9cf5f]
Nov 23 23:21:44.195945 waagent[2028]: 2025-11-23T23:21:44.195902Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Nov 23 23:21:44.204196 waagent[2028]: 2025-11-23T23:21:44.200713Z INFO Daemon Daemon Primary interface is [eth0]
Nov 23 23:21:44.209824 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:21:44.209829 systemd-networkd[1479]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 23 23:21:44.209867 systemd-networkd[1479]: eth0: DHCP lease lost
Nov 23 23:21:44.210731 waagent[2028]: 2025-11-23T23:21:44.210689Z INFO Daemon Daemon Create user account if not exists
Nov 23 23:21:44.214678 waagent[2028]: 2025-11-23T23:21:44.214647Z INFO Daemon Daemon User core already exists, skip useradd
Nov 23 23:21:44.218778 waagent[2028]: 2025-11-23T23:21:44.218746Z INFO Daemon Daemon Configure sudoer
Nov 23 23:21:44.225456 waagent[2028]: 2025-11-23T23:21:44.225389Z INFO Daemon Daemon Configure sshd
Nov 23 23:21:44.231332 systemd-networkd[1479]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16
Nov 23 23:21:44.231679 waagent[2028]: 2025-11-23T23:21:44.231619Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Nov 23 23:21:44.241056 waagent[2028]: 2025-11-23T23:21:44.241024Z INFO Daemon Daemon Deploy ssh public key.
Nov 23 23:21:45.310030 waagent[2028]: 2025-11-23T23:21:45.309970Z INFO Daemon Daemon Provisioning complete
Nov 23 23:21:45.322736 waagent[2028]: 2025-11-23T23:21:45.322701Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Nov 23 23:21:45.327333 waagent[2028]: 2025-11-23T23:21:45.327300Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Nov 23 23:21:45.334046 waagent[2028]: 2025-11-23T23:21:45.334018Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
Nov 23 23:21:45.431986 waagent[2113]: 2025-11-23T23:21:45.431931Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
Nov 23 23:21:45.432209 waagent[2113]: 2025-11-23T23:21:45.432028Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.1
Nov 23 23:21:45.432209 waagent[2113]: 2025-11-23T23:21:45.432068Z INFO ExtHandler ExtHandler Python: 3.11.13
Nov 23 23:21:45.432209 waagent[2113]: 2025-11-23T23:21:45.432103Z INFO ExtHandler ExtHandler CPU Arch: aarch64
Nov 23 23:21:45.454717 waagent[2113]: 2025-11-23T23:21:45.454672Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
Nov 23 23:21:45.454832 waagent[2113]: 2025-11-23T23:21:45.454805Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 23 23:21:45.454869 waagent[2113]: 2025-11-23T23:21:45.454852Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 23 23:21:45.459880 waagent[2113]: 2025-11-23T23:21:45.459836Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Nov 23 23:21:45.464225 waagent[2113]: 2025-11-23T23:21:45.464194Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177
Nov 23 23:21:45.464577 waagent[2113]: 2025-11-23T23:21:45.464546Z INFO ExtHandler
Nov 23 23:21:45.464632 waagent[2113]: 2025-11-23T23:21:45.464612Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1c26eecb-e414-44d7-bd41-4d9892860e43 eTag: 6679468027091410933 source: Fabric]
Nov 23 23:21:45.464844 waagent[2113]: 2025-11-23T23:21:45.464818Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Nov 23 23:21:45.465213 waagent[2113]: 2025-11-23T23:21:45.465184Z INFO ExtHandler
Nov 23 23:21:45.465249 waagent[2113]: 2025-11-23T23:21:45.465233Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Nov 23 23:21:45.468219 waagent[2113]: 2025-11-23T23:21:45.468194Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Nov 23 23:21:45.515230 waagent[2113]: 2025-11-23T23:21:45.515177Z INFO ExtHandler Downloaded certificate {'thumbprint': '0A947E6EEEC2022CEC7AE9EAA33762831CBC33D2', 'hasPrivateKey': True}
Nov 23 23:21:45.515578 waagent[2113]: 2025-11-23T23:21:45.515545Z INFO ExtHandler Fetch goal state completed
Nov 23 23:21:45.527362 waagent[2113]: 2025-11-23T23:21:45.527313Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025)
Nov 23 23:21:45.535793 waagent[2113]: 2025-11-23T23:21:45.535749Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2113
Nov 23 23:21:45.535895 waagent[2113]: 2025-11-23T23:21:45.535867Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Nov 23 23:21:45.536121 waagent[2113]: 2025-11-23T23:21:45.536095Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
Nov 23 23:21:45.537175 waagent[2113]: 2025-11-23T23:21:45.537142Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.1', '', 'Flatcar Container Linux by Kinvolk']
Nov 23 23:21:45.537510 waagent[2113]: 2025-11-23T23:21:45.537480Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Nov 23 23:21:45.537640 waagent[2113]: 2025-11-23T23:21:45.537615Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Nov 23 23:21:45.538062 waagent[2113]: 2025-11-23T23:21:45.538029Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Nov 23 23:21:45.573915 waagent[2113]: 2025-11-23T23:21:45.573845Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Nov 23 23:21:45.574024 waagent[2113]: 2025-11-23T23:21:45.573996Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Nov 23 23:21:45.578198 waagent[2113]: 2025-11-23T23:21:45.578176Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Nov 23 23:21:45.582645 systemd[1]: Reload requested from client PID 2128 ('systemctl') (unit waagent.service)...
Nov 23 23:21:45.582658 systemd[1]: Reloading...
Nov 23 23:21:45.654325 zram_generator::config[2163]: No configuration found.
Nov 23 23:21:45.799763 systemd[1]: Reloading finished in 216 ms.
Nov 23 23:21:45.815804 waagent[2113]: 2025-11-23T23:21:45.815534Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Nov 23 23:21:45.815804 waagent[2113]: 2025-11-23T23:21:45.815663Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Nov 23 23:21:46.450220 waagent[2113]: 2025-11-23T23:21:46.449464Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Nov 23 23:21:46.450220 waagent[2113]: 2025-11-23T23:21:46.449768Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Nov 23 23:21:46.450552 waagent[2113]: 2025-11-23T23:21:46.450417Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 23 23:21:46.450552 waagent[2113]: 2025-11-23T23:21:46.450479Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 23 23:21:46.450658 waagent[2113]: 2025-11-23T23:21:46.450624Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Nov 23 23:21:46.450744 waagent[2113]: 2025-11-23T23:21:46.450701Z INFO ExtHandler ExtHandler Starting env monitor service.
Nov 23 23:21:46.450847 waagent[2113]: 2025-11-23T23:21:46.450819Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Nov 23 23:21:46.450847 waagent[2113]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Nov 23 23:21:46.450847 waagent[2113]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Nov 23 23:21:46.450847 waagent[2113]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Nov 23 23:21:46.450847 waagent[2113]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Nov 23 23:21:46.450847 waagent[2113]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Nov 23 23:21:46.450847 waagent[2113]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Nov 23 23:21:46.451271 waagent[2113]: 2025-11-23T23:21:46.451241Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Nov 23 23:21:46.451441 waagent[2113]: 2025-11-23T23:21:46.451416Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 23 23:21:46.451668 waagent[2113]: 2025-11-23T23:21:46.451644Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 23 23:21:46.451777 waagent[2113]: 2025-11-23T23:21:46.451740Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Nov 23 23:21:46.451936 waagent[2113]: 2025-11-23T23:21:46.451908Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Nov 23 23:21:46.452243 waagent[2113]: 2025-11-23T23:21:46.452184Z INFO EnvHandler ExtHandler Configure routes
Nov 23 23:21:46.452343 waagent[2113]: 2025-11-23T23:21:46.452287Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Nov 23 23:21:46.452438 waagent[2113]: 2025-11-23T23:21:46.452412Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Nov 23 23:21:46.452538 waagent[2113]: 2025-11-23T23:21:46.452500Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Nov 23 23:21:46.452699 waagent[2113]: 2025-11-23T23:21:46.452675Z INFO EnvHandler ExtHandler Gateway:None
Nov 23 23:21:46.453039 waagent[2113]: 2025-11-23T23:21:46.453013Z INFO EnvHandler ExtHandler Routes:None
Nov 23 23:21:46.459946 waagent[2113]: 2025-11-23T23:21:46.458761Z INFO ExtHandler ExtHandler
Nov 23 23:21:46.459946 waagent[2113]: 2025-11-23T23:21:46.458818Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 47958392-43ae-4ba3-b457-185c2ca8e9d1 correlation ecde4ed6-1609-44ae-a2dd-2384baaa6b7b created: 2025-11-23T23:20:50.161522Z]
Nov 23 23:21:46.459946 waagent[2113]: 2025-11-23T23:21:46.459046Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Nov 23 23:21:46.459946 waagent[2113]: 2025-11-23T23:21:46.459457Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Nov 23 23:21:46.480667 waagent[2113]: 2025-11-23T23:21:46.480634Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Nov 23 23:21:46.480667 waagent[2113]: Try `iptables -h' or 'iptables --help' for more information.)
Nov 23 23:21:46.481020 waagent[2113]: 2025-11-23T23:21:46.480993Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 97E74429-42EB-4FFB-94D2-C4F829556173;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Nov 23 23:21:46.494055 waagent[2113]: 2025-11-23T23:21:46.494015Z INFO MonitorHandler ExtHandler Network interfaces:
Nov 23 23:21:46.494055 waagent[2113]: Executing ['ip', '-a', '-o', 'link']:
Nov 23 23:21:46.494055 waagent[2113]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Nov 23 23:21:46.494055 waagent[2113]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f6:00:2b brd ff:ff:ff:ff:ff:ff
Nov 23 23:21:46.494055 waagent[2113]: 3: enP39778s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f6:00:2b brd ff:ff:ff:ff:ff:ff\ altname enP39778p0s2
Nov 23 23:21:46.494055 waagent[2113]: Executing ['ip', '-4', '-a', '-o', 'address']:
Nov 23 23:21:46.494055 waagent[2113]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Nov 23 23:21:46.494055 waagent[2113]: 2: eth0 inet 10.200.20.35/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Nov 23 23:21:46.494055 waagent[2113]: Executing ['ip', '-6', '-a', '-o', 'address']:
Nov 23 23:21:46.494055 waagent[2113]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Nov 23 23:21:46.494055 waagent[2113]: 2: eth0 inet6 fe80::20d:3aff:fef6:2b/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Nov 23 23:21:46.566352 waagent[2113]: 2025-11-23T23:21:46.566218Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Nov 23 23:21:46.566352 waagent[2113]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 23 23:21:46.566352 waagent[2113]: pkts bytes target prot opt in out source destination
Nov 23 23:21:46.566352 waagent[2113]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Nov 23 23:21:46.566352 waagent[2113]: pkts bytes target prot opt in out source destination
Nov 23 23:21:46.566352 waagent[2113]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 23 23:21:46.566352 waagent[2113]: pkts bytes target prot opt in out source destination
Nov 23 23:21:46.566352 waagent[2113]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Nov 23 23:21:46.566352 waagent[2113]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Nov 23 23:21:46.566352 waagent[2113]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Nov 23 23:21:46.568482 waagent[2113]: 2025-11-23T23:21:46.568441Z INFO EnvHandler ExtHandler Current Firewall rules:
Nov 23 23:21:46.568482 waagent[2113]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 23 23:21:46.568482 waagent[2113]: pkts bytes target prot opt in out source destination
Nov 23 23:21:46.568482 waagent[2113]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Nov 23 23:21:46.568482 waagent[2113]: pkts bytes target prot opt in out source destination
Nov 23 23:21:46.568482 waagent[2113]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 23 23:21:46.568482 waagent[2113]: pkts bytes target prot opt in out source destination
Nov 23 23:21:46.568482 waagent[2113]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Nov 23 23:21:46.568482 waagent[2113]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Nov 23 23:21:46.568482 waagent[2113]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Nov 23 23:21:46.568661 waagent[2113]: 2025-11-23T23:21:46.568636Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Nov 23 23:21:52.645853 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 23 23:21:52.647079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:21:52.740405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:21:52.751492 (kubelet)[2262]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:21:52.873860 kubelet[2262]: E1123 23:21:52.873810 2262 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:21:52.876713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:21:52.876816 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:21:52.877263 systemd[1]: kubelet.service: Consumed 106ms CPU time, 105.6M memory peak. Nov 23 23:22:02.896088 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 23 23:22:02.897361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:22:03.247813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 23 23:22:03.252608 (kubelet)[2277]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:22:03.278775 kubelet[2277]: E1123 23:22:03.278741 2277 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:22:03.280570 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:22:03.280671 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:22:03.280917 systemd[1]: kubelet.service: Consumed 100ms CPU time, 107.2M memory peak. Nov 23 23:22:05.074168 chronyd[1845]: Selected source PHC0 Nov 23 23:22:06.429413 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 23 23:22:06.430644 systemd[1]: Started sshd@0-10.200.20.35:22-10.200.16.10:53434.service - OpenSSH per-connection server daemon (10.200.16.10:53434). Nov 23 23:22:07.006218 sshd[2284]: Accepted publickey for core from 10.200.16.10 port 53434 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:22:07.007244 sshd-session[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:22:07.010615 systemd-logind[1875]: New session 3 of user core. Nov 23 23:22:07.021411 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 23 23:22:07.419550 systemd[1]: Started sshd@1-10.200.20.35:22-10.200.16.10:53444.service - OpenSSH per-connection server daemon (10.200.16.10:53444). 
Nov 23 23:22:07.874345 sshd[2290]: Accepted publickey for core from 10.200.16.10 port 53444 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:22:07.875395 sshd-session[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:22:07.878718 systemd-logind[1875]: New session 4 of user core. Nov 23 23:22:07.888404 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 23 23:22:08.196334 sshd[2293]: Connection closed by 10.200.16.10 port 53444 Nov 23 23:22:08.196733 sshd-session[2290]: pam_unix(sshd:session): session closed for user core Nov 23 23:22:08.199459 systemd-logind[1875]: Session 4 logged out. Waiting for processes to exit. Nov 23 23:22:08.200054 systemd[1]: sshd@1-10.200.20.35:22-10.200.16.10:53444.service: Deactivated successfully. Nov 23 23:22:08.202540 systemd[1]: session-4.scope: Deactivated successfully. Nov 23 23:22:08.203518 systemd-logind[1875]: Removed session 4. Nov 23 23:22:08.272435 systemd[1]: Started sshd@2-10.200.20.35:22-10.200.16.10:53446.service - OpenSSH per-connection server daemon (10.200.16.10:53446). Nov 23 23:22:08.699560 sshd[2299]: Accepted publickey for core from 10.200.16.10 port 53446 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:22:08.700573 sshd-session[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:22:08.704278 systemd-logind[1875]: New session 5 of user core. Nov 23 23:22:08.710395 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 23 23:22:09.010609 sshd[2302]: Connection closed by 10.200.16.10 port 53446 Nov 23 23:22:09.010983 sshd-session[2299]: pam_unix(sshd:session): session closed for user core Nov 23 23:22:09.013543 systemd-logind[1875]: Session 5 logged out. Waiting for processes to exit. Nov 23 23:22:09.013628 systemd[1]: sshd@2-10.200.20.35:22-10.200.16.10:53446.service: Deactivated successfully. 
Nov 23 23:22:09.014849 systemd[1]: session-5.scope: Deactivated successfully.
Nov 23 23:22:09.016719 systemd-logind[1875]: Removed session 5.
Nov 23 23:22:09.099411 systemd[1]: Started sshd@3-10.200.20.35:22-10.200.16.10:53458.service - OpenSSH per-connection server daemon (10.200.16.10:53458).
Nov 23 23:22:09.559466 sshd[2308]: Accepted publickey for core from 10.200.16.10 port 53458 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:22:09.561482 sshd-session[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:22:09.565107 systemd-logind[1875]: New session 6 of user core.
Nov 23 23:22:09.571400 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 23 23:22:09.886711 sshd[2311]: Connection closed by 10.200.16.10 port 53458
Nov 23 23:22:09.887102 sshd-session[2308]: pam_unix(sshd:session): session closed for user core
Nov 23 23:22:09.889686 systemd[1]: sshd@3-10.200.20.35:22-10.200.16.10:53458.service: Deactivated successfully.
Nov 23 23:22:09.890911 systemd[1]: session-6.scope: Deactivated successfully.
Nov 23 23:22:09.891582 systemd-logind[1875]: Session 6 logged out. Waiting for processes to exit.
Nov 23 23:22:09.892753 systemd-logind[1875]: Removed session 6.
Nov 23 23:22:09.970310 systemd[1]: Started sshd@4-10.200.20.35:22-10.200.16.10:40938.service - OpenSSH per-connection server daemon (10.200.16.10:40938).
Nov 23 23:22:10.416412 sshd[2317]: Accepted publickey for core from 10.200.16.10 port 40938 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:22:10.417439 sshd-session[2317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:22:10.421170 systemd-logind[1875]: New session 7 of user core.
Nov 23 23:22:10.427405 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 23 23:22:10.796140 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 23 23:22:10.796390 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 23 23:22:10.825865 sudo[2321]: pam_unix(sudo:session): session closed for user root
Nov 23 23:22:10.896712 sshd[2320]: Connection closed by 10.200.16.10 port 40938
Nov 23 23:22:10.897227 sshd-session[2317]: pam_unix(sshd:session): session closed for user core
Nov 23 23:22:10.900367 systemd[1]: sshd@4-10.200.20.35:22-10.200.16.10:40938.service: Deactivated successfully.
Nov 23 23:22:10.901579 systemd[1]: session-7.scope: Deactivated successfully.
Nov 23 23:22:10.902088 systemd-logind[1875]: Session 7 logged out. Waiting for processes to exit.
Nov 23 23:22:10.903109 systemd-logind[1875]: Removed session 7.
Nov 23 23:22:10.986623 systemd[1]: Started sshd@5-10.200.20.35:22-10.200.16.10:40952.service - OpenSSH per-connection server daemon (10.200.16.10:40952).
Nov 23 23:22:11.448876 sshd[2327]: Accepted publickey for core from 10.200.16.10 port 40952 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:22:11.449909 sshd-session[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:22:11.453434 systemd-logind[1875]: New session 8 of user core.
Nov 23 23:22:11.460506 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 23 23:22:11.708679 sudo[2332]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 23 23:22:11.708882 sudo[2332]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 23 23:22:11.714717 sudo[2332]: pam_unix(sudo:session): session closed for user root
Nov 23 23:22:11.717937 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 23 23:22:11.718113 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 23 23:22:11.725167 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 23 23:22:11.754515 augenrules[2354]: No rules
Nov 23 23:22:11.755491 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 23 23:22:11.755752 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 23 23:22:11.757085 sudo[2331]: pam_unix(sudo:session): session closed for user root
Nov 23 23:22:11.823904 sshd[2330]: Connection closed by 10.200.16.10 port 40952
Nov 23 23:22:11.824521 sshd-session[2327]: pam_unix(sshd:session): session closed for user core
Nov 23 23:22:11.828044 systemd[1]: sshd@5-10.200.20.35:22-10.200.16.10:40952.service: Deactivated successfully.
Nov 23 23:22:11.829207 systemd[1]: session-8.scope: Deactivated successfully.
Nov 23 23:22:11.830815 systemd-logind[1875]: Session 8 logged out. Waiting for processes to exit.
Nov 23 23:22:11.831664 systemd-logind[1875]: Removed session 8.
Nov 23 23:22:11.908072 systemd[1]: Started sshd@6-10.200.20.35:22-10.200.16.10:40960.service - OpenSSH per-connection server daemon (10.200.16.10:40960).
Nov 23 23:22:12.371093 sshd[2363]: Accepted publickey for core from 10.200.16.10 port 40960 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:22:12.372103 sshd-session[2363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:22:12.375488 systemd-logind[1875]: New session 9 of user core.
Nov 23 23:22:12.384412 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 23 23:22:12.631892 sudo[2367]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 23 23:22:12.632105 sudo[2367]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 23 23:22:13.395846 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 23 23:22:13.397018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 23:22:13.747147 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:22:13.749609 (kubelet)[2392]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 23 23:22:13.777466 kubelet[2392]: E1123 23:22:13.777418 2392 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 23 23:22:13.779253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 23:22:13.779461 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 23 23:22:13.779865 systemd[1]: kubelet.service: Consumed 102ms CPU time, 105.2M memory peak.
Nov 23 23:22:14.351505 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 23 23:22:14.359525 (dockerd)[2399]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 23 23:22:15.118109 dockerd[2399]: time="2025-11-23T23:22:15.118060310Z" level=info msg="Starting up"
Nov 23 23:22:15.118740 dockerd[2399]: time="2025-11-23T23:22:15.118701936Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 23 23:22:15.126219 dockerd[2399]: time="2025-11-23T23:22:15.126193795Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 23 23:22:15.257712 dockerd[2399]: time="2025-11-23T23:22:15.257683817Z" level=info msg="Loading containers: start."
Nov 23 23:22:15.314309 kernel: Initializing XFRM netlink socket
Nov 23 23:22:15.625094 systemd-networkd[1479]: docker0: Link UP
Nov 23 23:22:15.644601 dockerd[2399]: time="2025-11-23T23:22:15.644530988Z" level=info msg="Loading containers: done."
Nov 23 23:22:15.653641 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2194222894-merged.mount: Deactivated successfully.
Nov 23 23:22:15.664193 dockerd[2399]: time="2025-11-23T23:22:15.664123708Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 23 23:22:15.664326 dockerd[2399]: time="2025-11-23T23:22:15.664181790Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 23 23:22:15.664449 dockerd[2399]: time="2025-11-23T23:22:15.664435285Z" level=info msg="Initializing buildkit"
Nov 23 23:22:15.713076 dockerd[2399]: time="2025-11-23T23:22:15.713017821Z" level=info msg="Completed buildkit initialization"
Nov 23 23:22:15.718582 dockerd[2399]: time="2025-11-23T23:22:15.718553736Z" level=info msg="Daemon has completed initialization"
Nov 23 23:22:15.718786 dockerd[2399]: time="2025-11-23T23:22:15.718638579Z" level=info msg="API listen on /run/docker.sock"
Nov 23 23:22:15.718936 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 23 23:22:16.575285 containerd[1900]: time="2025-11-23T23:22:16.575192466Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\""
Nov 23 23:22:17.433058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1691657383.mount: Deactivated successfully.
Nov 23 23:22:18.837571 containerd[1900]: time="2025-11-23T23:22:18.837519922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:18.840011 containerd[1900]: time="2025-11-23T23:22:18.839991095Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=26431959"
Nov 23 23:22:18.843784 containerd[1900]: time="2025-11-23T23:22:18.843746841Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:18.847657 containerd[1900]: time="2025-11-23T23:22:18.847625286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:18.848365 containerd[1900]: time="2025-11-23T23:22:18.848339642Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 2.273093295s"
Nov 23 23:22:18.848421 containerd[1900]: time="2025-11-23T23:22:18.848369723Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\""
Nov 23 23:22:18.849060 containerd[1900]: time="2025-11-23T23:22:18.848935683Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\""
Nov 23 23:22:20.356994 containerd[1900]: time="2025-11-23T23:22:20.356943974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:20.359895 containerd[1900]: time="2025-11-23T23:22:20.359873160Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22618955"
Nov 23 23:22:20.363273 containerd[1900]: time="2025-11-23T23:22:20.363239423Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:20.367719 containerd[1900]: time="2025-11-23T23:22:20.367685668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:20.368920 containerd[1900]: time="2025-11-23T23:22:20.368881654Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.519813983s"
Nov 23 23:22:20.369019 containerd[1900]: time="2025-11-23T23:22:20.368907007Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\""
Nov 23 23:22:20.369416 containerd[1900]: time="2025-11-23T23:22:20.369393308Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\""
Nov 23 23:22:21.904301 containerd[1900]: time="2025-11-23T23:22:21.904224372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:21.907136 containerd[1900]: time="2025-11-23T23:22:21.907112592Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17618436"
Nov 23 23:22:21.910386 containerd[1900]: time="2025-11-23T23:22:21.910349590Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:21.914632 containerd[1900]: time="2025-11-23T23:22:21.914584524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:21.915271 containerd[1900]: time="2025-11-23T23:22:21.915139279Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.545718545s"
Nov 23 23:22:21.915271 containerd[1900]: time="2025-11-23T23:22:21.915164152Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\""
Nov 23 23:22:21.915561 containerd[1900]: time="2025-11-23T23:22:21.915529746Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\""
Nov 23 23:22:23.002958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3806583772.mount: Deactivated successfully.
Nov 23 23:22:23.286690 containerd[1900]: time="2025-11-23T23:22:23.286575264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:23.290066 containerd[1900]: time="2025-11-23T23:22:23.290041305Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27561799"
Nov 23 23:22:23.293512 containerd[1900]: time="2025-11-23T23:22:23.293488128Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:23.298128 containerd[1900]: time="2025-11-23T23:22:23.298103585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:23.298483 containerd[1900]: time="2025-11-23T23:22:23.298356189Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.382801874s"
Nov 23 23:22:23.298483 containerd[1900]: time="2025-11-23T23:22:23.298377718Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\""
Nov 23 23:22:23.298763 containerd[1900]: time="2025-11-23T23:22:23.298742144Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Nov 23 23:22:23.895820 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Nov 23 23:22:23.897597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 23:22:23.942838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1246224624.mount: Deactivated successfully.
Nov 23 23:22:24.474381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:22:24.477205 (kubelet)[2693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 23 23:22:24.503098 kubelet[2693]: E1123 23:22:24.503065 2693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 23 23:22:24.505249 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 23:22:24.505437 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 23 23:22:24.505803 systemd[1]: kubelet.service: Consumed 102ms CPU time, 106.9M memory peak.
Nov 23 23:22:24.938171 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Nov 23 23:22:26.083842 containerd[1900]: time="2025-11-23T23:22:26.083790920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:26.086552 containerd[1900]: time="2025-11-23T23:22:26.086529930Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622"
Nov 23 23:22:26.089703 containerd[1900]: time="2025-11-23T23:22:26.089678385Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:26.095413 containerd[1900]: time="2025-11-23T23:22:26.095370907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:26.095815 containerd[1900]: time="2025-11-23T23:22:26.095665764Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.796896515s"
Nov 23 23:22:26.095815 containerd[1900]: time="2025-11-23T23:22:26.095692517Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Nov 23 23:22:26.096276 containerd[1900]: time="2025-11-23T23:22:26.096250246Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 23 23:22:26.494940 update_engine[1879]: I20251123 23:22:26.494491 1879 update_attempter.cc:509] Updating boot flags...
Nov 23 23:22:26.719181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount259663793.mount: Deactivated successfully.
Nov 23 23:22:26.739288 containerd[1900]: time="2025-11-23T23:22:26.738866467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 23 23:22:26.741424 containerd[1900]: time="2025-11-23T23:22:26.741407768Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Nov 23 23:22:26.744194 containerd[1900]: time="2025-11-23T23:22:26.744175539Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 23 23:22:26.747898 containerd[1900]: time="2025-11-23T23:22:26.747832600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 23 23:22:26.748224 containerd[1900]: time="2025-11-23T23:22:26.748091024Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 651.819042ms"
Nov 23 23:22:26.748224 containerd[1900]: time="2025-11-23T23:22:26.748120257Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Nov 23 23:22:26.748582 containerd[1900]: time="2025-11-23T23:22:26.748559454Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Nov 23 23:22:27.435798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount56077691.mount: Deactivated successfully.
Nov 23 23:22:30.402942 containerd[1900]: time="2025-11-23T23:22:30.402420206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:30.957397 containerd[1900]: time="2025-11-23T23:22:30.957134260Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165"
Nov 23 23:22:30.960531 containerd[1900]: time="2025-11-23T23:22:30.960505621Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:30.964245 containerd[1900]: time="2025-11-23T23:22:30.964215433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:30.965177 containerd[1900]: time="2025-11-23T23:22:30.964899768Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 4.216312776s"
Nov 23 23:22:30.965177 containerd[1900]: time="2025-11-23T23:22:30.964922632Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Nov 23 23:22:33.586696 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:22:33.586810 systemd[1]: kubelet.service: Consumed 102ms CPU time, 106.9M memory peak.
Nov 23 23:22:33.589247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 23:22:33.609246 systemd[1]: Reload requested from client PID 2896 ('systemctl') (unit session-9.scope)...
Nov 23 23:22:33.609360 systemd[1]: Reloading...
Nov 23 23:22:33.699326 zram_generator::config[2952]: No configuration found.
Nov 23 23:22:33.841802 systemd[1]: Reloading finished in 232 ms.
Nov 23 23:22:33.889629 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 23 23:22:33.889807 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 23 23:22:33.890346 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:22:33.890378 systemd[1]: kubelet.service: Consumed 71ms CPU time, 95M memory peak.
Nov 23 23:22:33.893466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 23:22:34.139238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:22:34.144506 (kubelet)[3010]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 23 23:22:34.169929 kubelet[3010]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 23 23:22:34.169929 kubelet[3010]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 23 23:22:34.169929 kubelet[3010]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 23 23:22:34.170152 kubelet[3010]: I1123 23:22:34.169970 3010 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 23 23:22:34.485044 kubelet[3010]: I1123 23:22:34.485007 3010 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 23 23:22:34.485044 kubelet[3010]: I1123 23:22:34.485037 3010 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 23 23:22:34.485253 kubelet[3010]: I1123 23:22:34.485234 3010 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 23 23:22:34.500045 kubelet[3010]: E1123 23:22:34.499711 3010 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Nov 23 23:22:34.500411 kubelet[3010]: I1123 23:22:34.500385 3010 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 23 23:22:34.505997 kubelet[3010]: I1123 23:22:34.505982 3010 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 23 23:22:34.509229 kubelet[3010]: I1123 23:22:34.509213 3010 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 23 23:22:34.510607 kubelet[3010]: I1123 23:22:34.510578 3010 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 23 23:22:34.510795 kubelet[3010]: I1123 23:22:34.510673 3010 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.1-a-2a92a9cf5f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 23 23:22:34.510925 kubelet[3010]: I1123 23:22:34.510914 3010 topology_manager.go:138] "Creating topology manager with none policy"
Nov 23 23:22:34.511204 kubelet[3010]: I1123 23:22:34.510966 3010 container_manager_linux.go:304] "Creating device plugin manager"
Nov 23 23:22:34.511204 kubelet[3010]: I1123 23:22:34.511069 3010 state_mem.go:36] "Initialized new in-memory state store"
Nov 23 23:22:34.513562 kubelet[3010]: I1123 23:22:34.513547 3010 kubelet.go:446] "Attempting to sync node with API server"
Nov 23 23:22:34.513731 kubelet[3010]: I1123 23:22:34.513717 3010 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 23 23:22:34.513800 kubelet[3010]: I1123 23:22:34.513793 3010 kubelet.go:352] "Adding apiserver pod source"
Nov 23 23:22:34.513848 kubelet[3010]: I1123 23:22:34.513840 3010 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 23 23:22:34.516434 kubelet[3010]: W1123 23:22:34.515199 3010 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.1-a-2a92a9cf5f&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
Nov 23 23:22:34.516510 kubelet[3010]: E1123 23:22:34.516444 3010 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.1-a-2a92a9cf5f&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Nov 23 23:22:34.517501 kubelet[3010]: I1123 23:22:34.516660 3010 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Nov 23 23:22:34.517501 kubelet[3010]: I1123 23:22:34.516926 3010 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 23 23:22:34.517501 kubelet[3010]: W1123 23:22:34.516963 3010 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 23 23:22:34.517501 kubelet[3010]: I1123 23:22:34.517355 3010 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 23 23:22:34.517501 kubelet[3010]: I1123 23:22:34.517378 3010 server.go:1287] "Started kubelet"
Nov 23 23:22:34.521224 kubelet[3010]: W1123 23:22:34.521185 3010 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
Nov 23 23:22:34.521224 kubelet[3010]: E1123 23:22:34.521218 3010 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Nov 23 23:22:34.521350 kubelet[3010]: E1123 23:22:34.521261 3010 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.35:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.35:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.1-a-2a92a9cf5f.187ac6398b0e1bd0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.1-a-2a92a9cf5f,UID:ci-4459.2.1-a-2a92a9cf5f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.1-a-2a92a9cf5f,},FirstTimestamp:2025-11-23 23:22:34.517363664 +0000 UTC m=+0.370551289,LastTimestamp:2025-11-23 23:22:34.517363664 +0000 UTC m=+0.370551289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.1-a-2a92a9cf5f,}"
Nov 23 23:22:34.521920 kubelet[3010]: I1123 23:22:34.521903 3010 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 23 23:22:34.522883 kubelet[3010]: E1123 23:22:34.522866 3010 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 23 23:22:34.523910 kubelet[3010]: I1123 23:22:34.523693 3010 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 23 23:22:34.524211 kubelet[3010]: I1123 23:22:34.524191 3010 server.go:479] "Adding debug handlers to kubelet server"
Nov 23 23:22:34.525236 kubelet[3010]: I1123 23:22:34.525191 3010 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 23 23:22:34.525389 kubelet[3010]: I1123 23:22:34.525372 3010 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 23 23:22:34.525572 kubelet[3010]: I1123 23:22:34.525553 3010 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 23 23:22:34.526057 kubelet[3010]: I1123 23:22:34.525914 3010 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 23 23:22:34.526057 kubelet[3010]: I1123 23:22:34.525965 3010 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 23 23:22:34.526057 kubelet[3010]: I1123 23:22:34.526016 3010 reconciler.go:26] "Reconciler: start to sync state"
Nov 23 23:22:34.526397 kubelet[3010]: W1123 23:22:34.526250 3010 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused
Nov 23 23:22:34.526397 kubelet[3010]: E1123 23:22:34.526277 3010 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError"
Nov 23 23:22:34.526595 kubelet[3010]: I1123 23:22:34.526576 3010 factory.go:221] Registration of the systemd container factory successfully
Nov 23 23:22:34.526667 kubelet[3010]: I1123 23:22:34.526639 3010 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 23 23:22:34.528314 kubelet[3010]: E1123 23:22:34.528274 3010 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.1-a-2a92a9cf5f\" not found"
Nov 23 23:22:34.528447 kubelet[3010]: E1123 23:22:34.528426 3010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.1-a-2a92a9cf5f?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="200ms"
Nov 23 23:22:34.528520 kubelet[3010]: I1123 23:22:34.528505 3010 factory.go:221] Registration of the containerd container factory successfully
Nov 23 23:22:34.549526 kubelet[3010]: I1123 23:22:34.549509 3010 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 23 23:22:34.549526 kubelet[3010]: I1123 23:22:34.549521 3010 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 23 23:22:34.549606 kubelet[3010]: I1123 23:22:34.549559 3010 state_mem.go:36] "Initialized new in-memory state store"
Nov 23 23:22:34.628959 kubelet[3010]: E1123 23:22:34.628932 3010 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.1-a-2a92a9cf5f\" not found"
Nov 23 23:22:34.656707 kubelet[3010]: I1123 23:22:34.656687 3010 policy_none.go:49] "None policy: Start"
Nov 23 23:22:34.656707 kubelet[3010]: I1123
23:22:34.656704 3010 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 23 23:22:34.656707 kubelet[3010]: I1123 23:22:34.656713 3010 state_mem.go:35] "Initializing new in-memory state store" Nov 23 23:22:34.702354 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 23 23:22:34.711883 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 23 23:22:34.722906 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 23 23:22:34.727212 kubelet[3010]: I1123 23:22:34.725661 3010 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 23 23:22:34.727212 kubelet[3010]: I1123 23:22:34.725794 3010 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 23:22:34.727212 kubelet[3010]: I1123 23:22:34.725802 3010 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 23:22:34.727212 kubelet[3010]: I1123 23:22:34.726217 3010 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 23:22:34.728098 kubelet[3010]: E1123 23:22:34.728079 3010 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 23 23:22:34.728150 kubelet[3010]: E1123 23:22:34.728117 3010 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.1-a-2a92a9cf5f\" not found" Nov 23 23:22:34.728233 kubelet[3010]: I1123 23:22:34.728215 3010 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 23 23:22:34.729978 kubelet[3010]: I1123 23:22:34.729961 3010 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 23 23:22:34.730056 kubelet[3010]: I1123 23:22:34.730048 3010 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 23 23:22:34.730101 kubelet[3010]: E1123 23:22:34.730073 3010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.1-a-2a92a9cf5f?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="400ms" Nov 23 23:22:34.730149 kubelet[3010]: I1123 23:22:34.730139 3010 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 23 23:22:34.730185 kubelet[3010]: I1123 23:22:34.730178 3010 kubelet.go:2382] "Starting kubelet main sync loop" Nov 23 23:22:34.730269 kubelet[3010]: E1123 23:22:34.730260 3010 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Nov 23 23:22:34.732169 kubelet[3010]: W1123 23:22:34.732004 3010 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Nov 23 23:22:34.732285 kubelet[3010]: E1123 23:22:34.732263 3010 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:22:34.827765 kubelet[3010]: I1123 23:22:34.827689 3010 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:34.828014 kubelet[3010]: E1123 23:22:34.827987 3010 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:34.839159 systemd[1]: Created slice kubepods-burstable-pod15d03361ddf486fdd6106926b91d4859.slice - libcontainer container kubepods-burstable-pod15d03361ddf486fdd6106926b91d4859.slice. Nov 23 23:22:34.860315 kubelet[3010]: E1123 23:22:34.860162 3010 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-2a92a9cf5f\" not found" node="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:34.862744 systemd[1]: Created slice kubepods-burstable-podca09665864d0edfa206e690674137e87.slice - libcontainer container kubepods-burstable-podca09665864d0edfa206e690674137e87.slice. Nov 23 23:22:34.877176 kubelet[3010]: E1123 23:22:34.877155 3010 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-2a92a9cf5f\" not found" node="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:34.878502 systemd[1]: Created slice kubepods-burstable-pod5aab1c92f8d415212d585a1b7ca79b7d.slice - libcontainer container kubepods-burstable-pod5aab1c92f8d415212d585a1b7ca79b7d.slice. 
Nov 23 23:22:34.879868 kubelet[3010]: E1123 23:22:34.879760 3010 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-2a92a9cf5f\" not found" node="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:34.927128 kubelet[3010]: I1123 23:22:34.927107 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/15d03361ddf486fdd6106926b91d4859-kubeconfig\") pod \"kube-scheduler-ci-4459.2.1-a-2a92a9cf5f\" (UID: \"15d03361ddf486fdd6106926b91d4859\") " pod="kube-system/kube-scheduler-ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:34.927360 kubelet[3010]: I1123 23:22:34.927220 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca09665864d0edfa206e690674137e87-ca-certs\") pod \"kube-apiserver-ci-4459.2.1-a-2a92a9cf5f\" (UID: \"ca09665864d0edfa206e690674137e87\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:34.927360 kubelet[3010]: I1123 23:22:34.927241 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca09665864d0edfa206e690674137e87-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.1-a-2a92a9cf5f\" (UID: \"ca09665864d0edfa206e690674137e87\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:34.927360 kubelet[3010]: I1123 23:22:34.927251 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5aab1c92f8d415212d585a1b7ca79b7d-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f\" (UID: \"5aab1c92f8d415212d585a1b7ca79b7d\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:34.927360 kubelet[3010]: I1123 23:22:34.927262 
3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5aab1c92f8d415212d585a1b7ca79b7d-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f\" (UID: \"5aab1c92f8d415212d585a1b7ca79b7d\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:34.927360 kubelet[3010]: I1123 23:22:34.927272 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5aab1c92f8d415212d585a1b7ca79b7d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f\" (UID: \"5aab1c92f8d415212d585a1b7ca79b7d\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:34.927478 kubelet[3010]: I1123 23:22:34.927281 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca09665864d0edfa206e690674137e87-k8s-certs\") pod \"kube-apiserver-ci-4459.2.1-a-2a92a9cf5f\" (UID: \"ca09665864d0edfa206e690674137e87\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:34.927478 kubelet[3010]: I1123 23:22:34.927289 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5aab1c92f8d415212d585a1b7ca79b7d-ca-certs\") pod \"kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f\" (UID: \"5aab1c92f8d415212d585a1b7ca79b7d\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:34.927478 kubelet[3010]: I1123 23:22:34.927321 3010 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5aab1c92f8d415212d585a1b7ca79b7d-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f\" 
(UID: \"5aab1c92f8d415212d585a1b7ca79b7d\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:35.029477 kubelet[3010]: I1123 23:22:35.029423 3010 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:35.029938 kubelet[3010]: E1123 23:22:35.029913 3010 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:35.130704 kubelet[3010]: E1123 23:22:35.130606 3010 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.1-a-2a92a9cf5f?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="800ms" Nov 23 23:22:35.161482 containerd[1900]: time="2025-11-23T23:22:35.161443738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.1-a-2a92a9cf5f,Uid:15d03361ddf486fdd6106926b91d4859,Namespace:kube-system,Attempt:0,}" Nov 23 23:22:35.178030 containerd[1900]: time="2025-11-23T23:22:35.177889191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.1-a-2a92a9cf5f,Uid:ca09665864d0edfa206e690674137e87,Namespace:kube-system,Attempt:0,}" Nov 23 23:22:35.180561 containerd[1900]: time="2025-11-23T23:22:35.180539799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f,Uid:5aab1c92f8d415212d585a1b7ca79b7d,Namespace:kube-system,Attempt:0,}" Nov 23 23:22:35.216646 containerd[1900]: time="2025-11-23T23:22:35.216612523Z" level=info msg="connecting to shim 26ef28c618bd776012c0cacea7c7b94c8e31eff99d10a73f9e7c639323fa472d" address="unix:///run/containerd/s/5686a6ff486a6d3a8bffd739686f4e533ad705b8638819b06266b90de1619509" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:22:35.238402 
systemd[1]: Started cri-containerd-26ef28c618bd776012c0cacea7c7b94c8e31eff99d10a73f9e7c639323fa472d.scope - libcontainer container 26ef28c618bd776012c0cacea7c7b94c8e31eff99d10a73f9e7c639323fa472d. Nov 23 23:22:35.269276 containerd[1900]: time="2025-11-23T23:22:35.269229271Z" level=info msg="connecting to shim 7c6acd22720010f2c1234f17ee28e2b3887f12a5a11058fc945e9c43ca3d328c" address="unix:///run/containerd/s/7954158ff57d3c3333d3c5371aa85afdb72f7ed86d0282f90550f7e174eded62" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:22:35.276436 containerd[1900]: time="2025-11-23T23:22:35.276391502Z" level=info msg="connecting to shim d790d5364f23e4679b20ccde23717da0066507a865785fb62e2b328468d9253c" address="unix:///run/containerd/s/8cce8adf982f0686f72649f70b197e88f7510457f84f83ffc42a2e2d59179511" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:22:35.292610 containerd[1900]: time="2025-11-23T23:22:35.292544633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.1-a-2a92a9cf5f,Uid:15d03361ddf486fdd6106926b91d4859,Namespace:kube-system,Attempt:0,} returns sandbox id \"26ef28c618bd776012c0cacea7c7b94c8e31eff99d10a73f9e7c639323fa472d\"" Nov 23 23:22:35.295487 containerd[1900]: time="2025-11-23T23:22:35.295463106Z" level=info msg="CreateContainer within sandbox \"26ef28c618bd776012c0cacea7c7b94c8e31eff99d10a73f9e7c639323fa472d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 23 23:22:35.296423 systemd[1]: Started cri-containerd-7c6acd22720010f2c1234f17ee28e2b3887f12a5a11058fc945e9c43ca3d328c.scope - libcontainer container 7c6acd22720010f2c1234f17ee28e2b3887f12a5a11058fc945e9c43ca3d328c. Nov 23 23:22:35.297720 systemd[1]: Started cri-containerd-d790d5364f23e4679b20ccde23717da0066507a865785fb62e2b328468d9253c.scope - libcontainer container d790d5364f23e4679b20ccde23717da0066507a865785fb62e2b328468d9253c. 
Nov 23 23:22:35.314480 containerd[1900]: time="2025-11-23T23:22:35.314455356Z" level=info msg="Container 0e6b2491f264f4697ee5a1ef4d890a56a64b333b8a7e6986b7bb12b6366b9e62: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:22:35.331057 containerd[1900]: time="2025-11-23T23:22:35.331026749Z" level=info msg="CreateContainer within sandbox \"26ef28c618bd776012c0cacea7c7b94c8e31eff99d10a73f9e7c639323fa472d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0e6b2491f264f4697ee5a1ef4d890a56a64b333b8a7e6986b7bb12b6366b9e62\"" Nov 23 23:22:35.331620 containerd[1900]: time="2025-11-23T23:22:35.331589295Z" level=info msg="StartContainer for \"0e6b2491f264f4697ee5a1ef4d890a56a64b333b8a7e6986b7bb12b6366b9e62\"" Nov 23 23:22:35.332481 containerd[1900]: time="2025-11-23T23:22:35.332444964Z" level=info msg="connecting to shim 0e6b2491f264f4697ee5a1ef4d890a56a64b333b8a7e6986b7bb12b6366b9e62" address="unix:///run/containerd/s/5686a6ff486a6d3a8bffd739686f4e533ad705b8638819b06266b90de1619509" protocol=ttrpc version=3 Nov 23 23:22:35.334227 containerd[1900]: time="2025-11-23T23:22:35.334050617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.1-a-2a92a9cf5f,Uid:ca09665864d0edfa206e690674137e87,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c6acd22720010f2c1234f17ee28e2b3887f12a5a11058fc945e9c43ca3d328c\"" Nov 23 23:22:35.337673 containerd[1900]: time="2025-11-23T23:22:35.337336951Z" level=info msg="CreateContainer within sandbox \"7c6acd22720010f2c1234f17ee28e2b3887f12a5a11058fc945e9c43ca3d328c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 23 23:22:35.346411 systemd[1]: Started cri-containerd-0e6b2491f264f4697ee5a1ef4d890a56a64b333b8a7e6986b7bb12b6366b9e62.scope - libcontainer container 0e6b2491f264f4697ee5a1ef4d890a56a64b333b8a7e6986b7bb12b6366b9e62. 
Nov 23 23:22:35.353913 containerd[1900]: time="2025-11-23T23:22:35.353888135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f,Uid:5aab1c92f8d415212d585a1b7ca79b7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d790d5364f23e4679b20ccde23717da0066507a865785fb62e2b328468d9253c\"" Nov 23 23:22:35.356675 containerd[1900]: time="2025-11-23T23:22:35.356621074Z" level=info msg="Container 186554700190d2f56e11097e51295346f7ffe6dec84063665b8365bf8129a557: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:22:35.358342 containerd[1900]: time="2025-11-23T23:22:35.358271241Z" level=info msg="CreateContainer within sandbox \"d790d5364f23e4679b20ccde23717da0066507a865785fb62e2b328468d9253c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 23 23:22:35.380410 containerd[1900]: time="2025-11-23T23:22:35.380386091Z" level=info msg="StartContainer for \"0e6b2491f264f4697ee5a1ef4d890a56a64b333b8a7e6986b7bb12b6366b9e62\" returns successfully" Nov 23 23:22:35.380773 containerd[1900]: time="2025-11-23T23:22:35.380699621Z" level=info msg="CreateContainer within sandbox \"7c6acd22720010f2c1234f17ee28e2b3887f12a5a11058fc945e9c43ca3d328c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"186554700190d2f56e11097e51295346f7ffe6dec84063665b8365bf8129a557\"" Nov 23 23:22:35.381328 containerd[1900]: time="2025-11-23T23:22:35.381227679Z" level=info msg="StartContainer for \"186554700190d2f56e11097e51295346f7ffe6dec84063665b8365bf8129a557\"" Nov 23 23:22:35.382957 containerd[1900]: time="2025-11-23T23:22:35.382699144Z" level=info msg="connecting to shim 186554700190d2f56e11097e51295346f7ffe6dec84063665b8365bf8129a557" address="unix:///run/containerd/s/7954158ff57d3c3333d3c5371aa85afdb72f7ed86d0282f90550f7e174eded62" protocol=ttrpc version=3 Nov 23 23:22:35.391084 containerd[1900]: time="2025-11-23T23:22:35.390495740Z" level=info msg="Container 
e81e2b38a94169b29f6e1e3e607e679af7206b1cd117d775f353db7cc89ae737: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:22:35.402445 kubelet[3010]: W1123 23:22:35.402402 3010 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Nov 23 23:22:35.404596 kubelet[3010]: E1123 23:22:35.402453 3010 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:22:35.405405 systemd[1]: Started cri-containerd-186554700190d2f56e11097e51295346f7ffe6dec84063665b8365bf8129a557.scope - libcontainer container 186554700190d2f56e11097e51295346f7ffe6dec84063665b8365bf8129a557. 
Nov 23 23:22:35.407506 containerd[1900]: time="2025-11-23T23:22:35.407482723Z" level=info msg="CreateContainer within sandbox \"d790d5364f23e4679b20ccde23717da0066507a865785fb62e2b328468d9253c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e81e2b38a94169b29f6e1e3e607e679af7206b1cd117d775f353db7cc89ae737\"" Nov 23 23:22:35.408214 containerd[1900]: time="2025-11-23T23:22:35.408163306Z" level=info msg="StartContainer for \"e81e2b38a94169b29f6e1e3e607e679af7206b1cd117d775f353db7cc89ae737\"" Nov 23 23:22:35.409312 containerd[1900]: time="2025-11-23T23:22:35.409137642Z" level=info msg="connecting to shim e81e2b38a94169b29f6e1e3e607e679af7206b1cd117d775f353db7cc89ae737" address="unix:///run/containerd/s/8cce8adf982f0686f72649f70b197e88f7510457f84f83ffc42a2e2d59179511" protocol=ttrpc version=3 Nov 23 23:22:35.423382 systemd[1]: Started cri-containerd-e81e2b38a94169b29f6e1e3e607e679af7206b1cd117d775f353db7cc89ae737.scope - libcontainer container e81e2b38a94169b29f6e1e3e607e679af7206b1cd117d775f353db7cc89ae737. 
Nov 23 23:22:35.428228 kubelet[3010]: W1123 23:22:35.428184 3010 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Nov 23 23:22:35.428228 kubelet[3010]: E1123 23:22:35.428229 3010 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.35:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:22:35.433257 kubelet[3010]: I1123 23:22:35.433184 3010 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:35.436440 kubelet[3010]: E1123 23:22:35.436248 3010 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:35.465464 containerd[1900]: time="2025-11-23T23:22:35.465344237Z" level=info msg="StartContainer for \"186554700190d2f56e11097e51295346f7ffe6dec84063665b8365bf8129a557\" returns successfully" Nov 23 23:22:35.472873 containerd[1900]: time="2025-11-23T23:22:35.472846711Z" level=info msg="StartContainer for \"e81e2b38a94169b29f6e1e3e607e679af7206b1cd117d775f353db7cc89ae737\" returns successfully" Nov 23 23:22:35.738375 kubelet[3010]: E1123 23:22:35.738080 3010 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-2a92a9cf5f\" not found" node="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:35.742514 kubelet[3010]: E1123 23:22:35.742496 3010 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-2a92a9cf5f\" not found" 
node="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:35.743814 kubelet[3010]: E1123 23:22:35.743798 3010 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-2a92a9cf5f\" not found" node="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:36.238442 kubelet[3010]: I1123 23:22:36.238412 3010 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:36.745314 kubelet[3010]: E1123 23:22:36.745277 3010 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-2a92a9cf5f\" not found" node="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:36.746646 kubelet[3010]: E1123 23:22:36.746428 3010 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.1-a-2a92a9cf5f\" not found" node="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:36.894122 kubelet[3010]: E1123 23:22:36.894095 3010 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.1-a-2a92a9cf5f\" not found" node="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:36.915498 kubelet[3010]: I1123 23:22:36.915469 3010 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:36.930023 kubelet[3010]: I1123 23:22:36.929890 3010 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:36.950982 kubelet[3010]: E1123 23:22:36.950958 3010 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.1-a-2a92a9cf5f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:36.950982 kubelet[3010]: I1123 23:22:36.950979 3010 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:36.953175 
kubelet[3010]: E1123 23:22:36.953107 3010 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4459.2.1-a-2a92a9cf5f.187ac6398b0e1bd0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.1-a-2a92a9cf5f,UID:ci-4459.2.1-a-2a92a9cf5f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.1-a-2a92a9cf5f,},FirstTimestamp:2025-11-23 23:22:34.517363664 +0000 UTC m=+0.370551289,LastTimestamp:2025-11-23 23:22:34.517363664 +0000 UTC m=+0.370551289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.1-a-2a92a9cf5f,}" Nov 23 23:22:36.956774 kubelet[3010]: E1123 23:22:36.956749 3010 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.1-a-2a92a9cf5f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:36.956774 kubelet[3010]: I1123 23:22:36.956769 3010 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:36.959595 kubelet[3010]: E1123 23:22:36.959573 3010 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:37.089185 kubelet[3010]: I1123 23:22:37.089082 3010 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:22:37.092545 kubelet[3010]: E1123 23:22:37.092519 3010 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:37.521914 kubelet[3010]: I1123 23:22:37.521869 3010 apiserver.go:52] "Watching apiserver"
Nov 23 23:22:37.526752 kubelet[3010]: I1123 23:22:37.526729 3010 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 23 23:22:38.923107 kubelet[3010]: I1123 23:22:38.923065 3010 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:38.931476 kubelet[3010]: W1123 23:22:38.931300 3010 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 23 23:22:39.195884 systemd[1]: Reload requested from client PID 3278 ('systemctl') (unit session-9.scope)...
Nov 23 23:22:39.196188 systemd[1]: Reloading...
Nov 23 23:22:39.271320 zram_generator::config[3325]: No configuration found.
Nov 23 23:22:39.425628 systemd[1]: Reloading finished in 229 ms.
Nov 23 23:22:39.446989 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 23:22:39.461897 systemd[1]: kubelet.service: Deactivated successfully.
Nov 23 23:22:39.462213 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:22:39.462342 systemd[1]: kubelet.service: Consumed 605ms CPU time, 127.2M memory peak.
Nov 23 23:22:39.463671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 23 23:22:39.564817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 23 23:22:39.571580 (kubelet)[3389]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 23 23:22:39.602435 kubelet[3389]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 23 23:22:39.602645 kubelet[3389]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 23 23:22:39.602685 kubelet[3389]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 23 23:22:39.602809 kubelet[3389]: I1123 23:22:39.602784 3389 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 23 23:22:39.607520 kubelet[3389]: I1123 23:22:39.607499 3389 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 23 23:22:39.608323 kubelet[3389]: I1123 23:22:39.607596 3389 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 23 23:22:39.608323 kubelet[3389]: I1123 23:22:39.607773 3389 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 23 23:22:39.608793 kubelet[3389]: I1123 23:22:39.608773 3389 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 23 23:22:39.610346 kubelet[3389]: I1123 23:22:39.610326 3389 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 23 23:22:39.613199 kubelet[3389]: I1123 23:22:39.613177 3389 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 23 23:22:39.615853 kubelet[3389]: I1123 23:22:39.615832 3389 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 23 23:22:39.616012 kubelet[3389]: I1123 23:22:39.615990 3389 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 23 23:22:39.616121 kubelet[3389]: I1123 23:22:39.616010 3389 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.1-a-2a92a9cf5f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 23 23:22:39.616184 kubelet[3389]: I1123 23:22:39.616125 3389 topology_manager.go:138] "Creating topology manager with none policy"
Nov 23 23:22:39.616184 kubelet[3389]: I1123 23:22:39.616131 3389 container_manager_linux.go:304] "Creating device plugin manager"
Nov 23 23:22:39.616184 kubelet[3389]: I1123 23:22:39.616162 3389 state_mem.go:36] "Initialized new in-memory state store"
Nov 23 23:22:39.616264 kubelet[3389]: I1123 23:22:39.616251 3389 kubelet.go:446] "Attempting to sync node with API server"
Nov 23 23:22:39.616289 kubelet[3389]: I1123 23:22:39.616267 3389 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 23 23:22:39.616289 kubelet[3389]: I1123 23:22:39.616282 3389 kubelet.go:352] "Adding apiserver pod source"
Nov 23 23:22:39.616289 kubelet[3389]: I1123 23:22:39.616289 3389 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 23 23:22:39.618823 kubelet[3389]: I1123 23:22:39.618725 3389 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Nov 23 23:22:39.619172 kubelet[3389]: I1123 23:22:39.619160 3389 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 23 23:22:39.620026 kubelet[3389]: I1123 23:22:39.620013 3389 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 23 23:22:39.620115 kubelet[3389]: I1123 23:22:39.620107 3389 server.go:1287] "Started kubelet"
Nov 23 23:22:39.623029 kubelet[3389]: I1123 23:22:39.622926 3389 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 23 23:22:39.624371 kubelet[3389]: I1123 23:22:39.624345 3389 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 23 23:22:39.631656 kubelet[3389]: I1123 23:22:39.631642 3389 server.go:479] "Adding debug handlers to kubelet server"
Nov 23 23:22:39.632192 kubelet[3389]: I1123 23:22:39.624653 3389 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 23 23:22:39.632382 kubelet[3389]: I1123 23:22:39.624459 3389 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 23 23:22:39.632594 kubelet[3389]: I1123 23:22:39.632580 3389 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 23 23:22:39.632679 kubelet[3389]: E1123 23:22:39.625407 3389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.2.1-a-2a92a9cf5f\" not found"
Nov 23 23:22:39.632724 kubelet[3389]: I1123 23:22:39.628799 3389 factory.go:221] Registration of the systemd container factory successfully
Nov 23 23:22:39.632829 kubelet[3389]: I1123 23:22:39.632814 3389 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 23 23:22:39.633126 kubelet[3389]: I1123 23:22:39.625332 3389 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 23 23:22:39.633354 kubelet[3389]: I1123 23:22:39.625339 3389 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 23 23:22:39.633417 kubelet[3389]: I1123 23:22:39.631019 3389 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 23 23:22:39.634168 kubelet[3389]: I1123 23:22:39.634153 3389 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 23 23:22:39.634251 kubelet[3389]: I1123 23:22:39.634242 3389 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 23 23:22:39.634351 kubelet[3389]: I1123 23:22:39.634291 3389 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 23 23:22:39.634406 kubelet[3389]: I1123 23:22:39.634398 3389 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 23 23:22:39.634472 kubelet[3389]: E1123 23:22:39.634461 3389 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 23 23:22:39.634756 kubelet[3389]: I1123 23:22:39.634742 3389 reconciler.go:26] "Reconciler: start to sync state"
Nov 23 23:22:39.640315 kubelet[3389]: I1123 23:22:39.640287 3389 factory.go:221] Registration of the containerd container factory successfully
Nov 23 23:22:39.648745 kubelet[3389]: E1123 23:22:39.648728 3389 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 23 23:22:39.689744 kubelet[3389]: I1123 23:22:39.689722 3389 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 23 23:22:39.689744 kubelet[3389]: I1123 23:22:39.689739 3389 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 23 23:22:39.689854 kubelet[3389]: I1123 23:22:39.689756 3389 state_mem.go:36] "Initialized new in-memory state store"
Nov 23 23:22:39.689880 kubelet[3389]: I1123 23:22:39.689869 3389 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 23 23:22:39.689897 kubelet[3389]: I1123 23:22:39.689877 3389 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 23 23:22:39.689897 kubelet[3389]: I1123 23:22:39.689889 3389 policy_none.go:49] "None policy: Start"
Nov 23 23:22:39.689897 kubelet[3389]: I1123 23:22:39.689896 3389 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 23 23:22:39.689937 kubelet[3389]: I1123 23:22:39.689903 3389 state_mem.go:35] "Initializing new in-memory state store"
Nov 23 23:22:39.689992 kubelet[3389]: I1123 23:22:39.689966 3389 state_mem.go:75] "Updated machine memory state"
Nov 23 23:22:39.692948 kubelet[3389]: I1123 23:22:39.692929 3389 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 23 23:22:39.693069 kubelet[3389]: I1123 23:22:39.693054 3389 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 23 23:22:39.693100 kubelet[3389]: I1123 23:22:39.693067 3389 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 23 23:22:39.694677 kubelet[3389]: I1123 23:22:39.694642 3389 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 23 23:22:39.695674 kubelet[3389]: E1123 23:22:39.695659 3389 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 23 23:22:39.735757 kubelet[3389]: I1123 23:22:39.735627 3389 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:39.735757 kubelet[3389]: I1123 23:22:39.735651 3389 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:39.736773 kubelet[3389]: I1123 23:22:39.736752 3389 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:39.742966 kubelet[3389]: W1123 23:22:39.742941 3389 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 23 23:22:39.746766 kubelet[3389]: W1123 23:22:39.746750 3389 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 23 23:22:39.747367 kubelet[3389]: W1123 23:22:39.747346 3389 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 23 23:22:39.747429 kubelet[3389]: E1123 23:22:39.747383 3389 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.1-a-2a92a9cf5f\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:39.795569 kubelet[3389]: I1123 23:22:39.795545 3389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:39.808686 kubelet[3389]: I1123 23:22:39.808470 3389 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:39.808686 kubelet[3389]: I1123 23:22:39.808529 3389 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:39.835676 kubelet[3389]: I1123 23:22:39.835646 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5aab1c92f8d415212d585a1b7ca79b7d-ca-certs\") pod \"kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f\" (UID: \"5aab1c92f8d415212d585a1b7ca79b7d\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:39.835888 kubelet[3389]: I1123 23:22:39.835834 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5aab1c92f8d415212d585a1b7ca79b7d-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f\" (UID: \"5aab1c92f8d415212d585a1b7ca79b7d\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:39.835888 kubelet[3389]: I1123 23:22:39.835854 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/15d03361ddf486fdd6106926b91d4859-kubeconfig\") pod \"kube-scheduler-ci-4459.2.1-a-2a92a9cf5f\" (UID: \"15d03361ddf486fdd6106926b91d4859\") " pod="kube-system/kube-scheduler-ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:39.835888 kubelet[3389]: I1123 23:22:39.835865 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca09665864d0edfa206e690674137e87-ca-certs\") pod \"kube-apiserver-ci-4459.2.1-a-2a92a9cf5f\" (UID: \"ca09665864d0edfa206e690674137e87\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:39.836015 kubelet[3389]: I1123 23:22:39.835998 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca09665864d0edfa206e690674137e87-k8s-certs\") pod \"kube-apiserver-ci-4459.2.1-a-2a92a9cf5f\" (UID: \"ca09665864d0edfa206e690674137e87\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:39.836136 kubelet[3389]: I1123 23:22:39.836097 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca09665864d0edfa206e690674137e87-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.1-a-2a92a9cf5f\" (UID: \"ca09665864d0edfa206e690674137e87\") " pod="kube-system/kube-apiserver-ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:39.836136 kubelet[3389]: I1123 23:22:39.836112 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5aab1c92f8d415212d585a1b7ca79b7d-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f\" (UID: \"5aab1c92f8d415212d585a1b7ca79b7d\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:39.836136 kubelet[3389]: I1123 23:22:39.836121 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5aab1c92f8d415212d585a1b7ca79b7d-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f\" (UID: \"5aab1c92f8d415212d585a1b7ca79b7d\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:39.836246 kubelet[3389]: I1123 23:22:39.836234 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5aab1c92f8d415212d585a1b7ca79b7d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f\" (UID: \"5aab1c92f8d415212d585a1b7ca79b7d\") " pod="kube-system/kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:40.617611 kubelet[3389]: I1123 23:22:40.617574 3389 apiserver.go:52] "Watching apiserver"
Nov 23 23:22:40.634739 kubelet[3389]: I1123 23:22:40.633719 3389 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 23 23:22:40.679327 kubelet[3389]: I1123 23:22:40.679148 3389 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:40.679572 kubelet[3389]: I1123 23:22:40.679560 3389 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:40.692550 kubelet[3389]: W1123 23:22:40.692355 3389 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 23 23:22:40.692550 kubelet[3389]: W1123 23:22:40.692368 3389 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 23 23:22:40.692550 kubelet[3389]: E1123 23:22:40.692399 3389 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.1-a-2a92a9cf5f\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:40.692550 kubelet[3389]: E1123 23:22:40.692407 3389 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.1-a-2a92a9cf5f\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.1-a-2a92a9cf5f"
Nov 23 23:22:40.695244 kubelet[3389]: I1123 23:22:40.695176 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.1-a-2a92a9cf5f" podStartSLOduration=2.695168173 podStartE2EDuration="2.695168173s" podCreationTimestamp="2025-11-23 23:22:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:22:40.694725751 +0000 UTC m=+1.118741626" watchObservedRunningTime="2025-11-23 23:22:40.695168173 +0000 UTC m=+1.119184048"
Nov 23 23:22:40.715921 kubelet[3389]: I1123 23:22:40.715793 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.1-a-2a92a9cf5f" podStartSLOduration=1.715778792 podStartE2EDuration="1.715778792s" podCreationTimestamp="2025-11-23 23:22:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:22:40.703945075 +0000 UTC m=+1.127960950" watchObservedRunningTime="2025-11-23 23:22:40.715778792 +0000 UTC m=+1.139794667"
Nov 23 23:22:40.728916 kubelet[3389]: I1123 23:22:40.728834 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.1-a-2a92a9cf5f" podStartSLOduration=1.728824258 podStartE2EDuration="1.728824258s" podCreationTimestamp="2025-11-23 23:22:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:22:40.716315392 +0000 UTC m=+1.140331267" watchObservedRunningTime="2025-11-23 23:22:40.728824258 +0000 UTC m=+1.152840133"
Nov 23 23:22:45.856193 kubelet[3389]: I1123 23:22:45.856052 3389 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 23 23:22:45.856625 kubelet[3389]: I1123 23:22:45.856478 3389 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 23 23:22:45.856655 containerd[1900]: time="2025-11-23T23:22:45.856339269Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 23 23:22:46.744924 systemd[1]: Created slice kubepods-besteffort-podd5feeabc_879d_4cdf_8586_d2b145338047.slice - libcontainer container kubepods-besteffort-podd5feeabc_879d_4cdf_8586_d2b145338047.slice.
Nov 23 23:22:46.778998 kubelet[3389]: I1123 23:22:46.778972 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5feeabc-879d-4cdf-8586-d2b145338047-xtables-lock\") pod \"kube-proxy-hl9kt\" (UID: \"d5feeabc-879d-4cdf-8586-d2b145338047\") " pod="kube-system/kube-proxy-hl9kt"
Nov 23 23:22:46.779202 kubelet[3389]: I1123 23:22:46.779189 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5feeabc-879d-4cdf-8586-d2b145338047-lib-modules\") pod \"kube-proxy-hl9kt\" (UID: \"d5feeabc-879d-4cdf-8586-d2b145338047\") " pod="kube-system/kube-proxy-hl9kt"
Nov 23 23:22:46.779354 kubelet[3389]: I1123 23:22:46.779341 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d5feeabc-879d-4cdf-8586-d2b145338047-kube-proxy\") pod \"kube-proxy-hl9kt\" (UID: \"d5feeabc-879d-4cdf-8586-d2b145338047\") " pod="kube-system/kube-proxy-hl9kt"
Nov 23 23:22:46.779850 kubelet[3389]: I1123 23:22:46.779833 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88bfs\" (UniqueName: \"kubernetes.io/projected/d5feeabc-879d-4cdf-8586-d2b145338047-kube-api-access-88bfs\") pod \"kube-proxy-hl9kt\" (UID: \"d5feeabc-879d-4cdf-8586-d2b145338047\") " pod="kube-system/kube-proxy-hl9kt"
Nov 23 23:22:46.965484 systemd[1]: Created slice kubepods-besteffort-podbe1a5596_e233_461e_a805_cdbe0dae48b5.slice - libcontainer container kubepods-besteffort-podbe1a5596_e233_461e_a805_cdbe0dae48b5.slice.
Nov 23 23:22:46.982064 kubelet[3389]: I1123 23:22:46.982036 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsdtr\" (UniqueName: \"kubernetes.io/projected/be1a5596-e233-461e-a805-cdbe0dae48b5-kube-api-access-bsdtr\") pod \"tigera-operator-7dcd859c48-5ktnt\" (UID: \"be1a5596-e233-461e-a805-cdbe0dae48b5\") " pod="tigera-operator/tigera-operator-7dcd859c48-5ktnt"
Nov 23 23:22:46.982064 kubelet[3389]: I1123 23:22:46.982066 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/be1a5596-e233-461e-a805-cdbe0dae48b5-var-lib-calico\") pod \"tigera-operator-7dcd859c48-5ktnt\" (UID: \"be1a5596-e233-461e-a805-cdbe0dae48b5\") " pod="tigera-operator/tigera-operator-7dcd859c48-5ktnt"
Nov 23 23:22:47.053711 containerd[1900]: time="2025-11-23T23:22:47.053618256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hl9kt,Uid:d5feeabc-879d-4cdf-8586-d2b145338047,Namespace:kube-system,Attempt:0,}"
Nov 23 23:22:47.092958 containerd[1900]: time="2025-11-23T23:22:47.092921093Z" level=info msg="connecting to shim 1f807e666521632ab7c09ed61869571ed29a3be14f7dc4c2a7dab28d778b018d" address="unix:///run/containerd/s/e5710071ecbcc1ae4d3c046ec5fb52b929181a71e61e97b5bf2ea34bbf033f23" namespace=k8s.io protocol=ttrpc version=3
Nov 23 23:22:47.117431 systemd[1]: Started cri-containerd-1f807e666521632ab7c09ed61869571ed29a3be14f7dc4c2a7dab28d778b018d.scope - libcontainer container 1f807e666521632ab7c09ed61869571ed29a3be14f7dc4c2a7dab28d778b018d.
Nov 23 23:22:47.137903 containerd[1900]: time="2025-11-23T23:22:47.137863319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hl9kt,Uid:d5feeabc-879d-4cdf-8586-d2b145338047,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f807e666521632ab7c09ed61869571ed29a3be14f7dc4c2a7dab28d778b018d\""
Nov 23 23:22:47.142946 containerd[1900]: time="2025-11-23T23:22:47.142911977Z" level=info msg="CreateContainer within sandbox \"1f807e666521632ab7c09ed61869571ed29a3be14f7dc4c2a7dab28d778b018d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 23 23:22:47.166586 containerd[1900]: time="2025-11-23T23:22:47.164866682Z" level=info msg="Container 6576b42be8e93e89e0b42d698abfb5d0af7d307994be063cbf4bd99b28ae0c9a: CDI devices from CRI Config.CDIDevices: []"
Nov 23 23:22:47.183393 containerd[1900]: time="2025-11-23T23:22:47.183359547Z" level=info msg="CreateContainer within sandbox \"1f807e666521632ab7c09ed61869571ed29a3be14f7dc4c2a7dab28d778b018d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6576b42be8e93e89e0b42d698abfb5d0af7d307994be063cbf4bd99b28ae0c9a\""
Nov 23 23:22:47.183833 containerd[1900]: time="2025-11-23T23:22:47.183817114Z" level=info msg="StartContainer for \"6576b42be8e93e89e0b42d698abfb5d0af7d307994be063cbf4bd99b28ae0c9a\""
Nov 23 23:22:47.185085 containerd[1900]: time="2025-11-23T23:22:47.185007232Z" level=info msg="connecting to shim 6576b42be8e93e89e0b42d698abfb5d0af7d307994be063cbf4bd99b28ae0c9a" address="unix:///run/containerd/s/e5710071ecbcc1ae4d3c046ec5fb52b929181a71e61e97b5bf2ea34bbf033f23" protocol=ttrpc version=3
Nov 23 23:22:47.198425 systemd[1]: Started cri-containerd-6576b42be8e93e89e0b42d698abfb5d0af7d307994be063cbf4bd99b28ae0c9a.scope - libcontainer container 6576b42be8e93e89e0b42d698abfb5d0af7d307994be063cbf4bd99b28ae0c9a.
Nov 23 23:22:47.265563 containerd[1900]: time="2025-11-23T23:22:47.265534152Z" level=info msg="StartContainer for \"6576b42be8e93e89e0b42d698abfb5d0af7d307994be063cbf4bd99b28ae0c9a\" returns successfully"
Nov 23 23:22:47.270487 containerd[1900]: time="2025-11-23T23:22:47.270385476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-5ktnt,Uid:be1a5596-e233-461e-a805-cdbe0dae48b5,Namespace:tigera-operator,Attempt:0,}"
Nov 23 23:22:47.309219 containerd[1900]: time="2025-11-23T23:22:47.308919472Z" level=info msg="connecting to shim 9ed29217a02a4695d5b0a5e43b12eedc8f2ac1f4a30b30689f3983a9e4d1af5e" address="unix:///run/containerd/s/dbea22744e3f368d0faf339317831b534ff1a3f9de0b84f6f3a856fc722fbc68" namespace=k8s.io protocol=ttrpc version=3
Nov 23 23:22:47.327447 systemd[1]: Started cri-containerd-9ed29217a02a4695d5b0a5e43b12eedc8f2ac1f4a30b30689f3983a9e4d1af5e.scope - libcontainer container 9ed29217a02a4695d5b0a5e43b12eedc8f2ac1f4a30b30689f3983a9e4d1af5e.
Nov 23 23:22:47.359410 containerd[1900]: time="2025-11-23T23:22:47.359331162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-5ktnt,Uid:be1a5596-e233-461e-a805-cdbe0dae48b5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9ed29217a02a4695d5b0a5e43b12eedc8f2ac1f4a30b30689f3983a9e4d1af5e\""
Nov 23 23:22:47.361265 containerd[1900]: time="2025-11-23T23:22:47.361023552Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 23 23:22:47.717376 kubelet[3389]: I1123 23:22:47.717260 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hl9kt" podStartSLOduration=1.717245503 podStartE2EDuration="1.717245503s" podCreationTimestamp="2025-11-23 23:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:22:47.717052609 +0000 UTC m=+8.141068492" watchObservedRunningTime="2025-11-23 23:22:47.717245503 +0000 UTC m=+8.141261370"
Nov 23 23:22:48.848902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2085601514.mount: Deactivated successfully.
Nov 23 23:22:49.385193 containerd[1900]: time="2025-11-23T23:22:49.385153536Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:49.388759 containerd[1900]: time="2025-11-23T23:22:49.388728691Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004"
Nov 23 23:22:49.394395 containerd[1900]: time="2025-11-23T23:22:49.394358912Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:49.400496 containerd[1900]: time="2025-11-23T23:22:49.400440435Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:22:49.400714 containerd[1900]: time="2025-11-23T23:22:49.400690467Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.039641538s"
Nov 23 23:22:49.400714 containerd[1900]: time="2025-11-23T23:22:49.400714372Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\""
Nov 23 23:22:49.404079 containerd[1900]: time="2025-11-23T23:22:49.403348008Z" level=info msg="CreateContainer within sandbox \"9ed29217a02a4695d5b0a5e43b12eedc8f2ac1f4a30b30689f3983a9e4d1af5e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 23 23:22:49.421496 containerd[1900]: time="2025-11-23T23:22:49.421475022Z" level=info msg="Container 2930391e6114f97a9d10b4999304db2c0e97c397e0a124a3ccff3b36a40354c2: CDI devices from CRI Config.CDIDevices: []"
Nov 23 23:22:49.424358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2675494044.mount: Deactivated successfully.
Nov 23 23:22:49.435863 containerd[1900]: time="2025-11-23T23:22:49.435836099Z" level=info msg="CreateContainer within sandbox \"9ed29217a02a4695d5b0a5e43b12eedc8f2ac1f4a30b30689f3983a9e4d1af5e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2930391e6114f97a9d10b4999304db2c0e97c397e0a124a3ccff3b36a40354c2\""
Nov 23 23:22:49.436718 containerd[1900]: time="2025-11-23T23:22:49.436333779Z" level=info msg="StartContainer for \"2930391e6114f97a9d10b4999304db2c0e97c397e0a124a3ccff3b36a40354c2\""
Nov 23 23:22:49.437098 containerd[1900]: time="2025-11-23T23:22:49.437070786Z" level=info msg="connecting to shim 2930391e6114f97a9d10b4999304db2c0e97c397e0a124a3ccff3b36a40354c2" address="unix:///run/containerd/s/dbea22744e3f368d0faf339317831b534ff1a3f9de0b84f6f3a856fc722fbc68" protocol=ttrpc version=3
Nov 23 23:22:49.452419 systemd[1]: Started cri-containerd-2930391e6114f97a9d10b4999304db2c0e97c397e0a124a3ccff3b36a40354c2.scope - libcontainer container 2930391e6114f97a9d10b4999304db2c0e97c397e0a124a3ccff3b36a40354c2.
Nov 23 23:22:49.475877 containerd[1900]: time="2025-11-23T23:22:49.475841414Z" level=info msg="StartContainer for \"2930391e6114f97a9d10b4999304db2c0e97c397e0a124a3ccff3b36a40354c2\" returns successfully"
Nov 23 23:22:51.715571 kubelet[3389]: I1123 23:22:51.715522 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-5ktnt" podStartSLOduration=3.674551182 podStartE2EDuration="5.715509514s" podCreationTimestamp="2025-11-23 23:22:46 +0000 UTC" firstStartedPulling="2025-11-23 23:22:47.360287601 +0000 UTC m=+7.784303468" lastFinishedPulling="2025-11-23 23:22:49.401245933 +0000 UTC m=+9.825261800" observedRunningTime="2025-11-23 23:22:49.70781537 +0000 UTC m=+10.131831245" watchObservedRunningTime="2025-11-23 23:22:51.715509514 +0000 UTC m=+12.139525381"
Nov 23 23:22:54.416648 sudo[2367]: pam_unix(sudo:session): session closed for user root
Nov 23 23:22:54.491365 sshd[2366]: Connection closed by 10.200.16.10 port 40960
Nov 23 23:22:54.495444 sshd-session[2363]: pam_unix(sshd:session): session closed for user core
Nov 23 23:22:54.497860 systemd[1]: sshd@6-10.200.20.35:22-10.200.16.10:40960.service: Deactivated successfully.
Nov 23 23:22:54.500015 systemd[1]: session-9.scope: Deactivated successfully.
Nov 23 23:22:54.501563 systemd[1]: session-9.scope: Consumed 3.144s CPU time, 221.1M memory peak.
Nov 23 23:22:54.503426 systemd-logind[1875]: Session 9 logged out. Waiting for processes to exit.
Nov 23 23:22:54.506328 systemd-logind[1875]: Removed session 9.
Nov 23 23:23:00.704957 systemd[1]: Created slice kubepods-besteffort-pod47d06d7d_6625_4e5c_a944_7bfd040d1061.slice - libcontainer container kubepods-besteffort-pod47d06d7d_6625_4e5c_a944_7bfd040d1061.slice.
Nov 23 23:23:00.764811 kubelet[3389]: I1123 23:23:00.764763 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47d06d7d-6625-4e5c-a944-7bfd040d1061-tigera-ca-bundle\") pod \"calico-typha-57cb98656-9nv9g\" (UID: \"47d06d7d-6625-4e5c-a944-7bfd040d1061\") " pod="calico-system/calico-typha-57cb98656-9nv9g"
Nov 23 23:23:00.765341 kubelet[3389]: I1123 23:23:00.764960 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/47d06d7d-6625-4e5c-a944-7bfd040d1061-typha-certs\") pod \"calico-typha-57cb98656-9nv9g\" (UID: \"47d06d7d-6625-4e5c-a944-7bfd040d1061\") " pod="calico-system/calico-typha-57cb98656-9nv9g"
Nov 23 23:23:00.765341 kubelet[3389]: I1123 23:23:00.764977 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9l59\" (UniqueName: \"kubernetes.io/projected/47d06d7d-6625-4e5c-a944-7bfd040d1061-kube-api-access-k9l59\") pod \"calico-typha-57cb98656-9nv9g\" (UID: \"47d06d7d-6625-4e5c-a944-7bfd040d1061\") " pod="calico-system/calico-typha-57cb98656-9nv9g"
Nov 23 23:23:00.948214 systemd[1]: Created slice kubepods-besteffort-podb7260e66_02c6_4ba5_9e9f_8184e7b83e36.slice - libcontainer container kubepods-besteffort-podb7260e66_02c6_4ba5_9e9f_8184e7b83e36.slice.
Nov 23 23:23:00.965869 kubelet[3389]: I1123 23:23:00.965786 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b7260e66-02c6-4ba5-9e9f-8184e7b83e36-cni-log-dir\") pod \"calico-node-lrk2z\" (UID: \"b7260e66-02c6-4ba5-9e9f-8184e7b83e36\") " pod="calico-system/calico-node-lrk2z"
Nov 23 23:23:00.966527 kubelet[3389]: I1123 23:23:00.966218 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b7260e66-02c6-4ba5-9e9f-8184e7b83e36-flexvol-driver-host\") pod \"calico-node-lrk2z\" (UID: \"b7260e66-02c6-4ba5-9e9f-8184e7b83e36\") " pod="calico-system/calico-node-lrk2z"
Nov 23 23:23:00.966527 kubelet[3389]: I1123 23:23:00.966245 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b7260e66-02c6-4ba5-9e9f-8184e7b83e36-node-certs\") pod \"calico-node-lrk2z\" (UID: \"b7260e66-02c6-4ba5-9e9f-8184e7b83e36\") " pod="calico-system/calico-node-lrk2z"
Nov 23 23:23:00.966527 kubelet[3389]: I1123 23:23:00.966400 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7260e66-02c6-4ba5-9e9f-8184e7b83e36-tigera-ca-bundle\") pod \"calico-node-lrk2z\" (UID: \"b7260e66-02c6-4ba5-9e9f-8184e7b83e36\") " pod="calico-system/calico-node-lrk2z"
Nov 23 23:23:00.966527 kubelet[3389]: I1123 23:23:00.966418 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b7260e66-02c6-4ba5-9e9f-8184e7b83e36-var-run-calico\") pod \"calico-node-lrk2z\" (UID: \"b7260e66-02c6-4ba5-9e9f-8184e7b83e36\") " pod="calico-system/calico-node-lrk2z"
Nov 23 23:23:00.966813 kubelet[3389]: I1123 23:23:00.966432 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b7260e66-02c6-4ba5-9e9f-8184e7b83e36-cni-bin-dir\") pod \"calico-node-lrk2z\" (UID: \"b7260e66-02c6-4ba5-9e9f-8184e7b83e36\") " pod="calico-system/calico-node-lrk2z"
Nov 23 23:23:00.966813 kubelet[3389]: I1123 23:23:00.966769 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b7260e66-02c6-4ba5-9e9f-8184e7b83e36-cni-net-dir\") pod \"calico-node-lrk2z\" (UID: \"b7260e66-02c6-4ba5-9e9f-8184e7b83e36\") " pod="calico-system/calico-node-lrk2z"
Nov 23 23:23:00.966813 kubelet[3389]: I1123 23:23:00.966796 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b7260e66-02c6-4ba5-9e9f-8184e7b83e36-policysync\") pod \"calico-node-lrk2z\" (UID: \"b7260e66-02c6-4ba5-9e9f-8184e7b83e36\") " pod="calico-system/calico-node-lrk2z"
Nov 23 23:23:00.967133 kubelet[3389]: I1123 23:23:00.967000 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7260e66-02c6-4ba5-9e9f-8184e7b83e36-xtables-lock\") pod \"calico-node-lrk2z\" (UID: \"b7260e66-02c6-4ba5-9e9f-8184e7b83e36\") " pod="calico-system/calico-node-lrk2z"
Nov 23 23:23:00.967133 kubelet[3389]: I1123 23:23:00.967022 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67m8m\" (UniqueName: \"kubernetes.io/projected/b7260e66-02c6-4ba5-9e9f-8184e7b83e36-kube-api-access-67m8m\") pod \"calico-node-lrk2z\" (UID: \"b7260e66-02c6-4ba5-9e9f-8184e7b83e36\") " pod="calico-system/calico-node-lrk2z"
Nov 23 23:23:00.967133 kubelet[3389]: I1123 23:23:00.967034 3389 reconciler_common.go:251]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b7260e66-02c6-4ba5-9e9f-8184e7b83e36-var-lib-calico\") pod \"calico-node-lrk2z\" (UID: \"b7260e66-02c6-4ba5-9e9f-8184e7b83e36\") " pod="calico-system/calico-node-lrk2z" Nov 23 23:23:00.967356 kubelet[3389]: I1123 23:23:00.967045 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7260e66-02c6-4ba5-9e9f-8184e7b83e36-lib-modules\") pod \"calico-node-lrk2z\" (UID: \"b7260e66-02c6-4ba5-9e9f-8184e7b83e36\") " pod="calico-system/calico-node-lrk2z" Nov 23 23:23:01.010289 containerd[1900]: time="2025-11-23T23:23:01.010251531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57cb98656-9nv9g,Uid:47d06d7d-6625-4e5c-a944-7bfd040d1061,Namespace:calico-system,Attempt:0,}" Nov 23 23:23:01.062808 containerd[1900]: time="2025-11-23T23:23:01.062771924Z" level=info msg="connecting to shim 492624059d9105d67e82c249ec8717b637b9afe3f6451becc312d3bc9324f6c0" address="unix:///run/containerd/s/c5351ce2567d0ec4ca7b0214c4d2103431b74329aab3a193f8d800c9999e5887" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:01.073302 kubelet[3389]: E1123 23:23:01.073220 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.073302 kubelet[3389]: W1123 23:23:01.073239 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.073302 kubelet[3389]: E1123 23:23:01.073263 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:01.080474 kubelet[3389]: E1123 23:23:01.080458 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.080693 kubelet[3389]: W1123 23:23:01.080544 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.080693 kubelet[3389]: E1123 23:23:01.080563 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:01.082027 kubelet[3389]: E1123 23:23:01.081902 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.082027 kubelet[3389]: W1123 23:23:01.081915 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.082027 kubelet[3389]: E1123 23:23:01.081926 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:01.095439 systemd[1]: Started cri-containerd-492624059d9105d67e82c249ec8717b637b9afe3f6451becc312d3bc9324f6c0.scope - libcontainer container 492624059d9105d67e82c249ec8717b637b9afe3f6451becc312d3bc9324f6c0. 
Nov 23 23:23:01.130411 kubelet[3389]: E1123 23:23:01.129024 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f" Nov 23 23:23:01.148262 containerd[1900]: time="2025-11-23T23:23:01.148226372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57cb98656-9nv9g,Uid:47d06d7d-6625-4e5c-a944-7bfd040d1061,Namespace:calico-system,Attempt:0,} returns sandbox id \"492624059d9105d67e82c249ec8717b637b9afe3f6451becc312d3bc9324f6c0\"" Nov 23 23:23:01.152086 containerd[1900]: time="2025-11-23T23:23:01.152069891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 23 23:23:01.153333 kubelet[3389]: E1123 23:23:01.153285 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.153484 kubelet[3389]: W1123 23:23:01.153416 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.153484 kubelet[3389]: E1123 23:23:01.153434 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:01.154152 kubelet[3389]: E1123 23:23:01.154002 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.154250 kubelet[3389]: W1123 23:23:01.154025 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.154535 kubelet[3389]: E1123 23:23:01.154523 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:01.154986 kubelet[3389]: E1123 23:23:01.154929 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.154986 kubelet[3389]: W1123 23:23:01.154941 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.154986 kubelet[3389]: E1123 23:23:01.154950 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:01.155263 kubelet[3389]: E1123 23:23:01.155219 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.155263 kubelet[3389]: W1123 23:23:01.155230 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.155263 kubelet[3389]: E1123 23:23:01.155239 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:01.155775 kubelet[3389]: E1123 23:23:01.155762 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.156131 kubelet[3389]: W1123 23:23:01.155821 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.156131 kubelet[3389]: E1123 23:23:01.155834 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:01.156375 kubelet[3389]: E1123 23:23:01.156359 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.156375 kubelet[3389]: W1123 23:23:01.156372 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.156727 kubelet[3389]: E1123 23:23:01.156383 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:01.156816 kubelet[3389]: E1123 23:23:01.156801 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.156816 kubelet[3389]: W1123 23:23:01.156814 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.156865 kubelet[3389]: E1123 23:23:01.156828 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:01.156989 kubelet[3389]: E1123 23:23:01.156965 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.156989 kubelet[3389]: W1123 23:23:01.156988 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.157037 kubelet[3389]: E1123 23:23:01.156998 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:01.157169 kubelet[3389]: E1123 23:23:01.157156 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.157169 kubelet[3389]: W1123 23:23:01.157166 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.157222 kubelet[3389]: E1123 23:23:01.157174 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:01.157322 kubelet[3389]: E1123 23:23:01.157310 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.157357 kubelet[3389]: W1123 23:23:01.157328 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.157357 kubelet[3389]: E1123 23:23:01.157337 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:01.157459 kubelet[3389]: E1123 23:23:01.157447 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.157459 kubelet[3389]: W1123 23:23:01.157457 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.157506 kubelet[3389]: E1123 23:23:01.157464 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:01.157571 kubelet[3389]: E1123 23:23:01.157560 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.157571 kubelet[3389]: W1123 23:23:01.157567 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.157605 kubelet[3389]: E1123 23:23:01.157583 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:01.157689 kubelet[3389]: E1123 23:23:01.157678 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.157689 kubelet[3389]: W1123 23:23:01.157686 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.157731 kubelet[3389]: E1123 23:23:01.157691 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:01.157849 kubelet[3389]: E1123 23:23:01.157837 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.157849 kubelet[3389]: W1123 23:23:01.157846 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.157893 kubelet[3389]: E1123 23:23:01.157852 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:01.158004 kubelet[3389]: E1123 23:23:01.157993 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.158004 kubelet[3389]: W1123 23:23:01.158001 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.158054 kubelet[3389]: E1123 23:23:01.158006 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:01.158112 kubelet[3389]: E1123 23:23:01.158102 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.158112 kubelet[3389]: W1123 23:23:01.158109 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.158152 kubelet[3389]: E1123 23:23:01.158116 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:01.158245 kubelet[3389]: E1123 23:23:01.158233 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.158245 kubelet[3389]: W1123 23:23:01.158242 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.158349 kubelet[3389]: E1123 23:23:01.158249 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:01.158369 kubelet[3389]: E1123 23:23:01.158363 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.158385 kubelet[3389]: W1123 23:23:01.158369 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.158385 kubelet[3389]: E1123 23:23:01.158376 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:01.158502 kubelet[3389]: E1123 23:23:01.158490 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.158502 kubelet[3389]: W1123 23:23:01.158498 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.158502 kubelet[3389]: E1123 23:23:01.158503 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:01.158596 kubelet[3389]: E1123 23:23:01.158584 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.158596 kubelet[3389]: W1123 23:23:01.158592 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.158627 kubelet[3389]: E1123 23:23:01.158596 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:01.168956 kubelet[3389]: E1123 23:23:01.168919 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.168956 kubelet[3389]: W1123 23:23:01.168931 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.168956 kubelet[3389]: E1123 23:23:01.168941 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:01.169093 kubelet[3389]: I1123 23:23:01.169076 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/55b22252-8f3d-48bd-88cb-ddab5e9d791f-varrun\") pod \"csi-node-driver-7tjz6\" (UID: \"55b22252-8f3d-48bd-88cb-ddab5e9d791f\") " pod="calico-system/csi-node-driver-7tjz6" Nov 23 23:23:01.169267 kubelet[3389]: E1123 23:23:01.169246 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.169267 kubelet[3389]: W1123 23:23:01.169262 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.169342 kubelet[3389]: E1123 23:23:01.169275 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:01.169411 kubelet[3389]: E1123 23:23:01.169399 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.169411 kubelet[3389]: W1123 23:23:01.169407 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.169463 kubelet[3389]: E1123 23:23:01.169417 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:01.169587 kubelet[3389]: E1123 23:23:01.169574 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.169587 kubelet[3389]: W1123 23:23:01.169583 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.169629 kubelet[3389]: E1123 23:23:01.169590 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:01.169629 kubelet[3389]: I1123 23:23:01.169608 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfz2v\" (UniqueName: \"kubernetes.io/projected/55b22252-8f3d-48bd-88cb-ddab5e9d791f-kube-api-access-sfz2v\") pod \"csi-node-driver-7tjz6\" (UID: \"55b22252-8f3d-48bd-88cb-ddab5e9d791f\") " pod="calico-system/csi-node-driver-7tjz6" Nov 23 23:23:01.169737 kubelet[3389]: E1123 23:23:01.169725 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.169737 kubelet[3389]: W1123 23:23:01.169734 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.169786 kubelet[3389]: E1123 23:23:01.169747 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:01.169786 kubelet[3389]: I1123 23:23:01.169758 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/55b22252-8f3d-48bd-88cb-ddab5e9d791f-kubelet-dir\") pod \"csi-node-driver-7tjz6\" (UID: \"55b22252-8f3d-48bd-88cb-ddab5e9d791f\") " pod="calico-system/csi-node-driver-7tjz6" Nov 23 23:23:01.169884 kubelet[3389]: E1123 23:23:01.169872 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.169884 kubelet[3389]: W1123 23:23:01.169880 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.169937 kubelet[3389]: E1123 23:23:01.169894 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:01.169937 kubelet[3389]: I1123 23:23:01.169904 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/55b22252-8f3d-48bd-88cb-ddab5e9d791f-registration-dir\") pod \"csi-node-driver-7tjz6\" (UID: \"55b22252-8f3d-48bd-88cb-ddab5e9d791f\") " pod="calico-system/csi-node-driver-7tjz6" Nov 23 23:23:01.170017 kubelet[3389]: E1123 23:23:01.170006 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.170046 kubelet[3389]: W1123 23:23:01.170015 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.170046 kubelet[3389]: E1123 23:23:01.170032 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:01.170046 kubelet[3389]: I1123 23:23:01.170042 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/55b22252-8f3d-48bd-88cb-ddab5e9d791f-socket-dir\") pod \"csi-node-driver-7tjz6\" (UID: \"55b22252-8f3d-48bd-88cb-ddab5e9d791f\") " pod="calico-system/csi-node-driver-7tjz6" Nov 23 23:23:01.170158 kubelet[3389]: E1123 23:23:01.170146 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.170158 kubelet[3389]: W1123 23:23:01.170155 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.170205 kubelet[3389]: E1123 23:23:01.170171 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:01.170273 kubelet[3389]: E1123 23:23:01.170261 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:01.170273 kubelet[3389]: W1123 23:23:01.170269 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:01.170273 kubelet[3389]: E1123 23:23:01.170276 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 23 23:23:01.170502 kubelet[3389]: E1123 23:23:01.170489 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.170577 kubelet[3389]: W1123 23:23:01.170548 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.170638 kubelet[3389]: E1123 23:23:01.170628 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.170900 kubelet[3389]: E1123 23:23:01.170807 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.170900 kubelet[3389]: W1123 23:23:01.170818 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.170900 kubelet[3389]: E1123 23:23:01.170832 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.171049 kubelet[3389]: E1123 23:23:01.171038 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.171090 kubelet[3389]: W1123 23:23:01.171082 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.171134 kubelet[3389]: E1123 23:23:01.171126 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.171309 kubelet[3389]: E1123 23:23:01.171277 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.171309 kubelet[3389]: W1123 23:23:01.171289 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.171381 kubelet[3389]: E1123 23:23:01.171322 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.171436 kubelet[3389]: E1123 23:23:01.171421 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.171436 kubelet[3389]: W1123 23:23:01.171430 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.171436 kubelet[3389]: E1123 23:23:01.171436 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.171604 kubelet[3389]: E1123 23:23:01.171591 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.171604 kubelet[3389]: W1123 23:23:01.171600 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.171654 kubelet[3389]: E1123 23:23:01.171606 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.255250 containerd[1900]: time="2025-11-23T23:23:01.254628175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lrk2z,Uid:b7260e66-02c6-4ba5-9e9f-8184e7b83e36,Namespace:calico-system,Attempt:0,}"
Nov 23 23:23:01.271575 kubelet[3389]: E1123 23:23:01.271475 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.271575 kubelet[3389]: W1123 23:23:01.271494 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.271575 kubelet[3389]: E1123 23:23:01.271511 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.271952 kubelet[3389]: E1123 23:23:01.271885 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.271952 kubelet[3389]: W1123 23:23:01.271896 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.271952 kubelet[3389]: E1123 23:23:01.271911 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.272240 kubelet[3389]: E1123 23:23:01.272229 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.272366 kubelet[3389]: W1123 23:23:01.272281 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.272482 kubelet[3389]: E1123 23:23:01.272411 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.272482 kubelet[3389]: E1123 23:23:01.272477 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.272560 kubelet[3389]: W1123 23:23:01.272488 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.272560 kubelet[3389]: E1123 23:23:01.272504 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.272683 kubelet[3389]: E1123 23:23:01.272614 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.272683 kubelet[3389]: W1123 23:23:01.272620 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.272683 kubelet[3389]: E1123 23:23:01.272629 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.272990 kubelet[3389]: E1123 23:23:01.272928 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.272990 kubelet[3389]: W1123 23:23:01.272940 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.272990 kubelet[3389]: E1123 23:23:01.272955 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.273265 kubelet[3389]: E1123 23:23:01.273177 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.273265 kubelet[3389]: W1123 23:23:01.273186 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.273265 kubelet[3389]: E1123 23:23:01.273200 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.273503 kubelet[3389]: E1123 23:23:01.273468 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.273503 kubelet[3389]: W1123 23:23:01.273479 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.273614 kubelet[3389]: E1123 23:23:01.273555 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.273636 kubelet[3389]: E1123 23:23:01.273618 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.273636 kubelet[3389]: W1123 23:23:01.273628 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.273786 kubelet[3389]: E1123 23:23:01.273637 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.273977 kubelet[3389]: E1123 23:23:01.273948 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.273977 kubelet[3389]: W1123 23:23:01.273960 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.274106 kubelet[3389]: E1123 23:23:01.274052 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.274277 kubelet[3389]: E1123 23:23:01.274255 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.274277 kubelet[3389]: W1123 23:23:01.274265 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.275008 kubelet[3389]: E1123 23:23:01.274331 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.275008 kubelet[3389]: E1123 23:23:01.274492 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.275008 kubelet[3389]: W1123 23:23:01.274499 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.275008 kubelet[3389]: E1123 23:23:01.274578 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.275008 kubelet[3389]: E1123 23:23:01.274613 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.275008 kubelet[3389]: W1123 23:23:01.274617 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.275008 kubelet[3389]: E1123 23:23:01.274636 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.275008 kubelet[3389]: E1123 23:23:01.274718 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.275008 kubelet[3389]: W1123 23:23:01.274723 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.275008 kubelet[3389]: E1123 23:23:01.274730 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.275145 kubelet[3389]: E1123 23:23:01.274830 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.275145 kubelet[3389]: W1123 23:23:01.274837 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.275339 kubelet[3389]: E1123 23:23:01.275188 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.275339 kubelet[3389]: E1123 23:23:01.275235 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.275339 kubelet[3389]: W1123 23:23:01.275240 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.275339 kubelet[3389]: E1123 23:23:01.275264 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.275549 kubelet[3389]: E1123 23:23:01.275406 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.275549 kubelet[3389]: W1123 23:23:01.275414 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.275549 kubelet[3389]: E1123 23:23:01.275428 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.275797 kubelet[3389]: E1123 23:23:01.275713 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.275797 kubelet[3389]: W1123 23:23:01.275725 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.275797 kubelet[3389]: E1123 23:23:01.275738 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.276009 kubelet[3389]: E1123 23:23:01.276000 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.276148 kubelet[3389]: W1123 23:23:01.276067 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.276148 kubelet[3389]: E1123 23:23:01.276091 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.276395 kubelet[3389]: E1123 23:23:01.276383 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.276585 kubelet[3389]: W1123 23:23:01.276519 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.276710 kubelet[3389]: E1123 23:23:01.276703 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.276802 kubelet[3389]: W1123 23:23:01.276750 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.277353 kubelet[3389]: E1123 23:23:01.277245 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.277353 kubelet[3389]: W1123 23:23:01.277256 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.277353 kubelet[3389]: E1123 23:23:01.277266 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.277654 kubelet[3389]: E1123 23:23:01.277555 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.277654 kubelet[3389]: W1123 23:23:01.277567 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.277654 kubelet[3389]: E1123 23:23:01.277577 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.277971 kubelet[3389]: E1123 23:23:01.277960 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.278090 kubelet[3389]: W1123 23:23:01.278054 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.278090 kubelet[3389]: E1123 23:23:01.278071 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.278261 kubelet[3389]: E1123 23:23:01.277960 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.278323 kubelet[3389]: E1123 23:23:01.277966 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.279156 kubelet[3389]: E1123 23:23:01.279118 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.279156 kubelet[3389]: W1123 23:23:01.279129 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.279156 kubelet[3389]: E1123 23:23:01.279138 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.289408 kubelet[3389]: E1123 23:23:01.289354 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 23 23:23:01.289408 kubelet[3389]: W1123 23:23:01.289406 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 23 23:23:01.289482 kubelet[3389]: E1123 23:23:01.289420 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 23 23:23:01.301990 containerd[1900]: time="2025-11-23T23:23:01.301650702Z" level=info msg="connecting to shim 79d4c3639f09830fa009882c9bac7279166d8576b6c5f689c4eba6782f9b187c" address="unix:///run/containerd/s/610e43939edb5eda5a92e74c31243a103047b30d7079d82aa18fe9b224933277" namespace=k8s.io protocol=ttrpc version=3
Nov 23 23:23:01.320428 systemd[1]: Started cri-containerd-79d4c3639f09830fa009882c9bac7279166d8576b6c5f689c4eba6782f9b187c.scope - libcontainer container 79d4c3639f09830fa009882c9bac7279166d8576b6c5f689c4eba6782f9b187c.
Nov 23 23:23:01.340068 containerd[1900]: time="2025-11-23T23:23:01.340034393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lrk2z,Uid:b7260e66-02c6-4ba5-9e9f-8184e7b83e36,Namespace:calico-system,Attempt:0,} returns sandbox id \"79d4c3639f09830fa009882c9bac7279166d8576b6c5f689c4eba6782f9b187c\""
Nov 23 23:23:02.414940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3090689892.mount: Deactivated successfully.
Nov 23 23:23:02.636022 kubelet[3389]: E1123 23:23:02.635553 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f"
Nov 23 23:23:03.625238 containerd[1900]: time="2025-11-23T23:23:03.625194141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:23:03.628341 containerd[1900]: time="2025-11-23T23:23:03.628318861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Nov 23 23:23:03.631703 containerd[1900]: time="2025-11-23T23:23:03.631679733Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:23:03.638561 containerd[1900]: time="2025-11-23T23:23:03.638533705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 23 23:23:03.639804 containerd[1900]: time="2025-11-23T23:23:03.639775896Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.487571288s"
Nov 23 23:23:03.639804 containerd[1900]: time="2025-11-23T23:23:03.639802200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Nov 23 23:23:03.642055 containerd[1900]: time="2025-11-23T23:23:03.642031269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 23 23:23:03.658422 containerd[1900]: time="2025-11-23T23:23:03.658389672Z" level=info msg="CreateContainer within sandbox \"492624059d9105d67e82c249ec8717b637b9afe3f6451becc312d3bc9324f6c0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 23 23:23:03.680469 containerd[1900]: time="2025-11-23T23:23:03.680436425Z" level=info msg="Container 75c1e664db3fa4f23a5bf02af2e1f8d5cae2f8969edffab003dc56f7c65f92c6: CDI devices from CRI Config.CDIDevices: []"
Nov 23 23:23:03.700991 containerd[1900]: time="2025-11-23T23:23:03.700901578Z" level=info msg="CreateContainer within sandbox \"492624059d9105d67e82c249ec8717b637b9afe3f6451becc312d3bc9324f6c0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"75c1e664db3fa4f23a5bf02af2e1f8d5cae2f8969edffab003dc56f7c65f92c6\""
Nov 23 23:23:03.702178 containerd[1900]: time="2025-11-23T23:23:03.702155985Z" level=info msg="StartContainer for \"75c1e664db3fa4f23a5bf02af2e1f8d5cae2f8969edffab003dc56f7c65f92c6\""
Nov 23 23:23:03.703165 containerd[1900]: time="2025-11-23T23:23:03.703126207Z" level=info msg="connecting to shim 75c1e664db3fa4f23a5bf02af2e1f8d5cae2f8969edffab003dc56f7c65f92c6" address="unix:///run/containerd/s/c5351ce2567d0ec4ca7b0214c4d2103431b74329aab3a193f8d800c9999e5887" protocol=ttrpc version=3
Nov 23 23:23:03.720434 systemd[1]: Started cri-containerd-75c1e664db3fa4f23a5bf02af2e1f8d5cae2f8969edffab003dc56f7c65f92c6.scope - libcontainer container 75c1e664db3fa4f23a5bf02af2e1f8d5cae2f8969edffab003dc56f7c65f92c6.
Nov 23 23:23:03.758279 containerd[1900]: time="2025-11-23T23:23:03.758244056Z" level=info msg="StartContainer for \"75c1e664db3fa4f23a5bf02af2e1f8d5cae2f8969edffab003dc56f7c65f92c6\" returns successfully" Nov 23 23:23:04.635549 kubelet[3389]: E1123 23:23:04.635436 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f" Nov 23 23:23:04.737725 kubelet[3389]: I1123 23:23:04.737629 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-57cb98656-9nv9g" podStartSLOduration=2.246811044 podStartE2EDuration="4.737615017s" podCreationTimestamp="2025-11-23 23:23:00 +0000 UTC" firstStartedPulling="2025-11-23 23:23:01.150748546 +0000 UTC m=+21.574764413" lastFinishedPulling="2025-11-23 23:23:03.641552519 +0000 UTC m=+24.065568386" observedRunningTime="2025-11-23 23:23:04.736186741 +0000 UTC m=+25.160202616" watchObservedRunningTime="2025-11-23 23:23:04.737615017 +0000 UTC m=+25.161630884" Nov 23 23:23:04.779602 kubelet[3389]: E1123 23:23:04.779578 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:04.779602 kubelet[3389]: W1123 23:23:04.779597 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:04.779723 kubelet[3389]: E1123 23:23:04.779612 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:04.779742 kubelet[3389]: E1123 23:23:04.779729 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:04.779800 kubelet[3389]: W1123 23:23:04.779736 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:04.779820 kubelet[3389]: E1123 23:23:04.779796 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:04.779956 kubelet[3389]: E1123 23:23:04.779942 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:04.779956 kubelet[3389]: W1123 23:23:04.779951 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:04.780006 kubelet[3389]: E1123 23:23:04.779958 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:04.780091 kubelet[3389]: E1123 23:23:04.780078 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:04.780091 kubelet[3389]: W1123 23:23:04.780087 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:04.780119 kubelet[3389]: E1123 23:23:04.780094 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:04.780246 kubelet[3389]: E1123 23:23:04.780232 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:04.780246 kubelet[3389]: W1123 23:23:04.780241 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:04.780308 kubelet[3389]: E1123 23:23:04.780248 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:04.780384 kubelet[3389]: E1123 23:23:04.780370 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:04.780384 kubelet[3389]: W1123 23:23:04.780379 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:04.780435 kubelet[3389]: E1123 23:23:04.780386 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:04.780510 kubelet[3389]: E1123 23:23:04.780497 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:04.780510 kubelet[3389]: W1123 23:23:04.780506 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:04.780553 kubelet[3389]: E1123 23:23:04.780512 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:04.780618 kubelet[3389]: E1123 23:23:04.780606 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:04.780637 kubelet[3389]: W1123 23:23:04.780622 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:04.780637 kubelet[3389]: E1123 23:23:04.780628 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:23:04.780732 kubelet[3389]: E1123 23:23:04.780718 3389 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:23:04.780732 kubelet[3389]: W1123 23:23:04.780727 3389 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:23:04.780807 kubelet[3389]: E1123 23:23:04.780733 3389 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:23:05.262330 containerd[1900]: time="2025-11-23T23:23:05.262153668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:05.265310 containerd[1900]: time="2025-11-23T23:23:05.265174329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 23 23:23:05.268758 containerd[1900]: time="2025-11-23T23:23:05.268685782Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:05.272594 containerd[1900]: time="2025-11-23T23:23:05.272563798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:05.272833 containerd[1900]: time="2025-11-23T23:23:05.272803045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.630746063s" Nov 23 23:23:05.272833 containerd[1900]: time="2025-11-23T23:23:05.272829102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 23 23:23:05.275744 containerd[1900]: time="2025-11-23T23:23:05.275717047Z" level=info msg="CreateContainer within sandbox \"79d4c3639f09830fa009882c9bac7279166d8576b6c5f689c4eba6782f9b187c\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 23 23:23:05.296969 containerd[1900]: time="2025-11-23T23:23:05.296336517Z" level=info msg="Container 3cad3085ba549b33bd40c7dd81e7f081bf28ac1690e9b7f397a9c5295613f5f3: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:23:05.299053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount916308749.mount: Deactivated successfully. Nov 23 23:23:05.316395 containerd[1900]: time="2025-11-23T23:23:05.316323448Z" level=info msg="CreateContainer within sandbox \"79d4c3639f09830fa009882c9bac7279166d8576b6c5f689c4eba6782f9b187c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3cad3085ba549b33bd40c7dd81e7f081bf28ac1690e9b7f397a9c5295613f5f3\"" Nov 23 23:23:05.317203 containerd[1900]: time="2025-11-23T23:23:05.317179538Z" level=info msg="StartContainer for \"3cad3085ba549b33bd40c7dd81e7f081bf28ac1690e9b7f397a9c5295613f5f3\"" Nov 23 23:23:05.318289 containerd[1900]: time="2025-11-23T23:23:05.318265884Z" level=info msg="connecting to shim 3cad3085ba549b33bd40c7dd81e7f081bf28ac1690e9b7f397a9c5295613f5f3" address="unix:///run/containerd/s/610e43939edb5eda5a92e74c31243a103047b30d7079d82aa18fe9b224933277" protocol=ttrpc version=3 Nov 23 23:23:05.338530 systemd[1]: Started cri-containerd-3cad3085ba549b33bd40c7dd81e7f081bf28ac1690e9b7f397a9c5295613f5f3.scope - libcontainer container 3cad3085ba549b33bd40c7dd81e7f081bf28ac1690e9b7f397a9c5295613f5f3. Nov 23 23:23:05.395116 containerd[1900]: time="2025-11-23T23:23:05.395056883Z" level=info msg="StartContainer for \"3cad3085ba549b33bd40c7dd81e7f081bf28ac1690e9b7f397a9c5295613f5f3\" returns successfully" Nov 23 23:23:05.402047 systemd[1]: cri-containerd-3cad3085ba549b33bd40c7dd81e7f081bf28ac1690e9b7f397a9c5295613f5f3.scope: Deactivated successfully. 
Nov 23 23:23:05.410797 containerd[1900]: time="2025-11-23T23:23:05.410751865Z" level=info msg="received container exit event container_id:\"3cad3085ba549b33bd40c7dd81e7f081bf28ac1690e9b7f397a9c5295613f5f3\" id:\"3cad3085ba549b33bd40c7dd81e7f081bf28ac1690e9b7f397a9c5295613f5f3\" pid:4062 exited_at:{seconds:1763940185 nanos:410040139}" Nov 23 23:23:05.433096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cad3085ba549b33bd40c7dd81e7f081bf28ac1690e9b7f397a9c5295613f5f3-rootfs.mount: Deactivated successfully. Nov 23 23:23:06.635324 kubelet[3389]: E1123 23:23:06.635251 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f" Nov 23 23:23:06.732745 containerd[1900]: time="2025-11-23T23:23:06.732691215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 23 23:23:08.635032 kubelet[3389]: E1123 23:23:08.634950 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f" Nov 23 23:23:09.995756 containerd[1900]: time="2025-11-23T23:23:09.995716162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:09.998837 containerd[1900]: time="2025-11-23T23:23:09.998812419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 23 23:23:10.001742 containerd[1900]: time="2025-11-23T23:23:10.001717030Z" level=info msg="ImageCreate event 
name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:10.006318 containerd[1900]: time="2025-11-23T23:23:10.006286332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:10.006977 containerd[1900]: time="2025-11-23T23:23:10.006952129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.274201513s" Nov 23 23:23:10.007014 containerd[1900]: time="2025-11-23T23:23:10.006977954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 23 23:23:10.009796 containerd[1900]: time="2025-11-23T23:23:10.009772729Z" level=info msg="CreateContainer within sandbox \"79d4c3639f09830fa009882c9bac7279166d8576b6c5f689c4eba6782f9b187c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 23 23:23:10.031315 containerd[1900]: time="2025-11-23T23:23:10.031282010Z" level=info msg="Container 50a73c0ef03c6c5aa29eddd34b2464620ad7a99aa3a642c4c9fcf33d5b84bbd1: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:23:10.033759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1196409635.mount: Deactivated successfully. 
Nov 23 23:23:10.054535 containerd[1900]: time="2025-11-23T23:23:10.054507112Z" level=info msg="CreateContainer within sandbox \"79d4c3639f09830fa009882c9bac7279166d8576b6c5f689c4eba6782f9b187c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"50a73c0ef03c6c5aa29eddd34b2464620ad7a99aa3a642c4c9fcf33d5b84bbd1\"" Nov 23 23:23:10.055462 containerd[1900]: time="2025-11-23T23:23:10.055418869Z" level=info msg="StartContainer for \"50a73c0ef03c6c5aa29eddd34b2464620ad7a99aa3a642c4c9fcf33d5b84bbd1\"" Nov 23 23:23:10.056434 containerd[1900]: time="2025-11-23T23:23:10.056410276Z" level=info msg="connecting to shim 50a73c0ef03c6c5aa29eddd34b2464620ad7a99aa3a642c4c9fcf33d5b84bbd1" address="unix:///run/containerd/s/610e43939edb5eda5a92e74c31243a103047b30d7079d82aa18fe9b224933277" protocol=ttrpc version=3 Nov 23 23:23:10.076417 systemd[1]: Started cri-containerd-50a73c0ef03c6c5aa29eddd34b2464620ad7a99aa3a642c4c9fcf33d5b84bbd1.scope - libcontainer container 50a73c0ef03c6c5aa29eddd34b2464620ad7a99aa3a642c4c9fcf33d5b84bbd1. 
Nov 23 23:23:10.136641 containerd[1900]: time="2025-11-23T23:23:10.136613943Z" level=info msg="StartContainer for \"50a73c0ef03c6c5aa29eddd34b2464620ad7a99aa3a642c4c9fcf33d5b84bbd1\" returns successfully" Nov 23 23:23:10.635580 kubelet[3389]: E1123 23:23:10.635533 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f" Nov 23 23:23:11.251990 containerd[1900]: time="2025-11-23T23:23:11.251950220Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 23:23:11.253966 systemd[1]: cri-containerd-50a73c0ef03c6c5aa29eddd34b2464620ad7a99aa3a642c4c9fcf33d5b84bbd1.scope: Deactivated successfully. Nov 23 23:23:11.254675 systemd[1]: cri-containerd-50a73c0ef03c6c5aa29eddd34b2464620ad7a99aa3a642c4c9fcf33d5b84bbd1.scope: Consumed 313ms CPU time, 191.2M memory peak, 165.9M written to disk. Nov 23 23:23:11.256277 containerd[1900]: time="2025-11-23T23:23:11.256235162Z" level=info msg="received container exit event container_id:\"50a73c0ef03c6c5aa29eddd34b2464620ad7a99aa3a642c4c9fcf33d5b84bbd1\" id:\"50a73c0ef03c6c5aa29eddd34b2464620ad7a99aa3a642c4c9fcf33d5b84bbd1\" pid:4126 exited_at:{seconds:1763940191 nanos:256064861}" Nov 23 23:23:11.271108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50a73c0ef03c6c5aa29eddd34b2464620ad7a99aa3a642c4c9fcf33d5b84bbd1-rootfs.mount: Deactivated successfully. 
Nov 23 23:23:11.322859 kubelet[3389]: I1123 23:23:11.322827 3389 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 23 23:23:11.659610 kubelet[3389]: W1123 23:23:11.369943 3389 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4459.2.1-a-2a92a9cf5f" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4459.2.1-a-2a92a9cf5f' and this object Nov 23 23:23:11.659610 kubelet[3389]: E1123 23:23:11.369977 3389 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4459.2.1-a-2a92a9cf5f\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459.2.1-a-2a92a9cf5f' and this object" logger="UnhandledError" Nov 23 23:23:11.659610 kubelet[3389]: W1123 23:23:11.370231 3389 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4459.2.1-a-2a92a9cf5f" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4459.2.1-a-2a92a9cf5f' and this object Nov 23 23:23:11.659610 kubelet[3389]: E1123 23:23:11.370249 3389 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4459.2.1-a-2a92a9cf5f\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459.2.1-a-2a92a9cf5f' and this object" logger="UnhandledError" Nov 23 23:23:11.659610 
kubelet[3389]: W1123 23:23:11.370486 3389 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4459.2.1-a-2a92a9cf5f" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459.2.1-a-2a92a9cf5f' and this object Nov 23 23:23:11.359394 systemd[1]: Created slice kubepods-besteffort-poda24d7295_0bba_4952_852f_37a344f80dea.slice - libcontainer container kubepods-besteffort-poda24d7295_0bba_4952_852f_37a344f80dea.slice. Nov 23 23:23:11.660637 kubelet[3389]: E1123 23:23:11.370512 3389 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4459.2.1-a-2a92a9cf5f\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459.2.1-a-2a92a9cf5f' and this object" logger="UnhandledError" Nov 23 23:23:11.660637 kubelet[3389]: I1123 23:23:11.439250 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af6e60cf-c463-457e-be42-88e0f43ba038-config\") pod \"goldmane-666569f655-tnnrs\" (UID: \"af6e60cf-c463-457e-be42-88e0f43ba038\") " pod="calico-system/goldmane-666569f655-tnnrs" Nov 23 23:23:11.660637 kubelet[3389]: I1123 23:23:11.439274 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af6e60cf-c463-457e-be42-88e0f43ba038-goldmane-ca-bundle\") pod \"goldmane-666569f655-tnnrs\" (UID: \"af6e60cf-c463-457e-be42-88e0f43ba038\") " pod="calico-system/goldmane-666569f655-tnnrs" Nov 23 23:23:11.660637 kubelet[3389]: I1123 23:23:11.439290 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c83ee34d-1893-4fb7-89f8-9378dfe640fb-config-volume\") pod \"coredns-668d6bf9bc-6mrr8\" (UID: \"c83ee34d-1893-4fb7-89f8-9378dfe640fb\") " pod="kube-system/coredns-668d6bf9bc-6mrr8" Nov 23 23:23:11.660637 kubelet[3389]: I1123 23:23:11.439347 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prb76\" (UniqueName: \"kubernetes.io/projected/121b3bf3-7703-467a-986e-3619eec56340-kube-api-access-prb76\") pod \"calico-kube-controllers-69658b8f65-kmrst\" (UID: \"121b3bf3-7703-467a-986e-3619eec56340\") " pod="calico-system/calico-kube-controllers-69658b8f65-kmrst" Nov 23 23:23:11.367987 systemd[1]: Created slice kubepods-besteffort-pod967a6d88_39fe_4a70_87f6_08a7898b61d2.slice - libcontainer container kubepods-besteffort-pod967a6d88_39fe_4a70_87f6_08a7898b61d2.slice. Nov 23 23:23:11.660804 kubelet[3389]: I1123 23:23:11.439359 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/af6e60cf-c463-457e-be42-88e0f43ba038-goldmane-key-pair\") pod \"goldmane-666569f655-tnnrs\" (UID: \"af6e60cf-c463-457e-be42-88e0f43ba038\") " pod="calico-system/goldmane-666569f655-tnnrs" Nov 23 23:23:11.660804 kubelet[3389]: I1123 23:23:11.439369 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a24d7295-0bba-4952-852f-37a344f80dea-calico-apiserver-certs\") pod \"calico-apiserver-5f55996b-nzxhv\" (UID: \"a24d7295-0bba-4952-852f-37a344f80dea\") " pod="calico-apiserver/calico-apiserver-5f55996b-nzxhv" Nov 23 23:23:11.660804 kubelet[3389]: I1123 23:23:11.439409 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkzcw\" (UniqueName: 
\"kubernetes.io/projected/c83ee34d-1893-4fb7-89f8-9378dfe640fb-kube-api-access-kkzcw\") pod \"coredns-668d6bf9bc-6mrr8\" (UID: \"c83ee34d-1893-4fb7-89f8-9378dfe640fb\") " pod="kube-system/coredns-668d6bf9bc-6mrr8" Nov 23 23:23:11.660804 kubelet[3389]: I1123 23:23:11.439419 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxg8k\" (UniqueName: \"kubernetes.io/projected/af6e60cf-c463-457e-be42-88e0f43ba038-kube-api-access-xxg8k\") pod \"goldmane-666569f655-tnnrs\" (UID: \"af6e60cf-c463-457e-be42-88e0f43ba038\") " pod="calico-system/goldmane-666569f655-tnnrs" Nov 23 23:23:11.660804 kubelet[3389]: I1123 23:23:11.439431 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqlfq\" (UniqueName: \"kubernetes.io/projected/a2c796bc-ea26-4f69-bc19-822da4c56dfe-kube-api-access-gqlfq\") pod \"calico-apiserver-5f55996b-dw8vv\" (UID: \"a2c796bc-ea26-4f69-bc19-822da4c56dfe\") " pod="calico-apiserver/calico-apiserver-5f55996b-dw8vv" Nov 23 23:23:11.377253 systemd[1]: Created slice kubepods-burstable-pod35c155aa_09f5_410a_80b4_7376a7871f0d.slice - libcontainer container kubepods-burstable-pod35c155aa_09f5_410a_80b4_7376a7871f0d.slice. 
Nov 23 23:23:11.660931 kubelet[3389]: I1123 23:23:11.439442 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/862cceb0-6c7b-4371-aa49-25852268d3a1-calico-apiserver-certs\") pod \"calico-apiserver-6d6799dd75-7vqw6\" (UID: \"862cceb0-6c7b-4371-aa49-25852268d3a1\") " pod="calico-apiserver/calico-apiserver-6d6799dd75-7vqw6" Nov 23 23:23:11.660931 kubelet[3389]: I1123 23:23:11.439453 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nmjz\" (UniqueName: \"kubernetes.io/projected/862cceb0-6c7b-4371-aa49-25852268d3a1-kube-api-access-6nmjz\") pod \"calico-apiserver-6d6799dd75-7vqw6\" (UID: \"862cceb0-6c7b-4371-aa49-25852268d3a1\") " pod="calico-apiserver/calico-apiserver-6d6799dd75-7vqw6" Nov 23 23:23:11.660931 kubelet[3389]: I1123 23:23:11.439492 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35c155aa-09f5-410a-80b4-7376a7871f0d-config-volume\") pod \"coredns-668d6bf9bc-msbl7\" (UID: \"35c155aa-09f5-410a-80b4-7376a7871f0d\") " pod="kube-system/coredns-668d6bf9bc-msbl7" Nov 23 23:23:11.660931 kubelet[3389]: I1123 23:23:11.439503 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2hkj\" (UniqueName: \"kubernetes.io/projected/a24d7295-0bba-4952-852f-37a344f80dea-kube-api-access-h2hkj\") pod \"calico-apiserver-5f55996b-nzxhv\" (UID: \"a24d7295-0bba-4952-852f-37a344f80dea\") " pod="calico-apiserver/calico-apiserver-5f55996b-nzxhv" Nov 23 23:23:11.660931 kubelet[3389]: I1123 23:23:11.439518 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzw79\" (UniqueName: \"kubernetes.io/projected/35c155aa-09f5-410a-80b4-7376a7871f0d-kube-api-access-fzw79\") pod 
\"coredns-668d6bf9bc-msbl7\" (UID: \"35c155aa-09f5-410a-80b4-7376a7871f0d\") " pod="kube-system/coredns-668d6bf9bc-msbl7" Nov 23 23:23:11.384849 systemd[1]: Created slice kubepods-besteffort-pod121b3bf3_7703_467a_986e_3619eec56340.slice - libcontainer container kubepods-besteffort-pod121b3bf3_7703_467a_986e_3619eec56340.slice. Nov 23 23:23:11.661060 kubelet[3389]: I1123 23:23:11.439529 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/121b3bf3-7703-467a-986e-3619eec56340-tigera-ca-bundle\") pod \"calico-kube-controllers-69658b8f65-kmrst\" (UID: \"121b3bf3-7703-467a-986e-3619eec56340\") " pod="calico-system/calico-kube-controllers-69658b8f65-kmrst" Nov 23 23:23:11.661060 kubelet[3389]: I1123 23:23:11.439560 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a2c796bc-ea26-4f69-bc19-822da4c56dfe-calico-apiserver-certs\") pod \"calico-apiserver-5f55996b-dw8vv\" (UID: \"a2c796bc-ea26-4f69-bc19-822da4c56dfe\") " pod="calico-apiserver/calico-apiserver-5f55996b-dw8vv" Nov 23 23:23:11.661060 kubelet[3389]: I1123 23:23:11.439577 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967a6d88-39fe-4a70-87f6-08a7898b61d2-whisker-ca-bundle\") pod \"whisker-69dccc4cc6-s24qf\" (UID: \"967a6d88-39fe-4a70-87f6-08a7898b61d2\") " pod="calico-system/whisker-69dccc4cc6-s24qf" Nov 23 23:23:11.661060 kubelet[3389]: I1123 23:23:11.439586 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2mnr\" (UniqueName: \"kubernetes.io/projected/967a6d88-39fe-4a70-87f6-08a7898b61d2-kube-api-access-t2mnr\") pod \"whisker-69dccc4cc6-s24qf\" (UID: \"967a6d88-39fe-4a70-87f6-08a7898b61d2\") " 
pod="calico-system/whisker-69dccc4cc6-s24qf" Nov 23 23:23:11.661060 kubelet[3389]: I1123 23:23:11.439597 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/967a6d88-39fe-4a70-87f6-08a7898b61d2-whisker-backend-key-pair\") pod \"whisker-69dccc4cc6-s24qf\" (UID: \"967a6d88-39fe-4a70-87f6-08a7898b61d2\") " pod="calico-system/whisker-69dccc4cc6-s24qf" Nov 23 23:23:11.391216 systemd[1]: Created slice kubepods-burstable-podc83ee34d_1893_4fb7_89f8_9378dfe640fb.slice - libcontainer container kubepods-burstable-podc83ee34d_1893_4fb7_89f8_9378dfe640fb.slice. Nov 23 23:23:11.396135 systemd[1]: Created slice kubepods-besteffort-pod862cceb0_6c7b_4371_aa49_25852268d3a1.slice - libcontainer container kubepods-besteffort-pod862cceb0_6c7b_4371_aa49_25852268d3a1.slice. Nov 23 23:23:11.400556 systemd[1]: Created slice kubepods-besteffort-podaf6e60cf_c463_457e_be42_88e0f43ba038.slice - libcontainer container kubepods-besteffort-podaf6e60cf_c463_457e_be42_88e0f43ba038.slice. Nov 23 23:23:11.404861 systemd[1]: Created slice kubepods-besteffort-poda2c796bc_ea26_4f69_bc19_822da4c56dfe.slice - libcontainer container kubepods-besteffort-poda2c796bc_ea26_4f69_bc19_822da4c56dfe.slice. 
Nov 23 23:23:11.963085 containerd[1900]: time="2025-11-23T23:23:11.962979706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f55996b-nzxhv,Uid:a24d7295-0bba-4952-852f-37a344f80dea,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:23:11.966687 containerd[1900]: time="2025-11-23T23:23:11.966555882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-tnnrs,Uid:af6e60cf-c463-457e-be42-88e0f43ba038,Namespace:calico-system,Attempt:0,}" Nov 23 23:23:11.975543 containerd[1900]: time="2025-11-23T23:23:11.975502017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69658b8f65-kmrst,Uid:121b3bf3-7703-467a-986e-3619eec56340,Namespace:calico-system,Attempt:0,}" Nov 23 23:23:11.975645 containerd[1900]: time="2025-11-23T23:23:11.975627037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d6799dd75-7vqw6,Uid:862cceb0-6c7b-4371-aa49-25852268d3a1,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:23:11.984332 containerd[1900]: time="2025-11-23T23:23:11.984289228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f55996b-dw8vv,Uid:a2c796bc-ea26-4f69-bc19-822da4c56dfe,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:23:12.229840 containerd[1900]: time="2025-11-23T23:23:12.229637155Z" level=error msg="Failed to destroy network for sandbox \"ea784f6e3a2066b6d34b29584eaa41253d96852494f72672db83c65db4ad89a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:12.233882 containerd[1900]: time="2025-11-23T23:23:12.233704594Z" level=error msg="Failed to destroy network for sandbox \"50f28ff8ddb61960d9cc4318833dc768f8e0fd13728316fbd04b2343bed23e5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Nov 23 23:23:12.235206 containerd[1900]: time="2025-11-23T23:23:12.235129366Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f55996b-nzxhv,Uid:a24d7295-0bba-4952-852f-37a344f80dea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea784f6e3a2066b6d34b29584eaa41253d96852494f72672db83c65db4ad89a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:12.235472 kubelet[3389]: E1123 23:23:12.235371 3389 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea784f6e3a2066b6d34b29584eaa41253d96852494f72672db83c65db4ad89a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:12.235472 kubelet[3389]: E1123 23:23:12.235451 3389 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea784f6e3a2066b6d34b29584eaa41253d96852494f72672db83c65db4ad89a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f55996b-nzxhv" Nov 23 23:23:12.235472 kubelet[3389]: E1123 23:23:12.235467 3389 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea784f6e3a2066b6d34b29584eaa41253d96852494f72672db83c65db4ad89a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f55996b-nzxhv" Nov 23 23:23:12.235548 kubelet[3389]: E1123 23:23:12.235520 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f55996b-nzxhv_calico-apiserver(a24d7295-0bba-4952-852f-37a344f80dea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f55996b-nzxhv_calico-apiserver(a24d7295-0bba-4952-852f-37a344f80dea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea784f6e3a2066b6d34b29584eaa41253d96852494f72672db83c65db4ad89a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f55996b-nzxhv" podUID="a24d7295-0bba-4952-852f-37a344f80dea" Nov 23 23:23:12.237862 containerd[1900]: time="2025-11-23T23:23:12.237775753Z" level=error msg="Failed to destroy network for sandbox \"a4573e1c0404c6f690826118cc1902bca3d699c1cdac4aa8a25a2df7695df830\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:12.239061 containerd[1900]: time="2025-11-23T23:23:12.238933005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-tnnrs,Uid:af6e60cf-c463-457e-be42-88e0f43ba038,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"50f28ff8ddb61960d9cc4318833dc768f8e0fd13728316fbd04b2343bed23e5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:12.239540 kubelet[3389]: E1123 23:23:12.239346 3389 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50f28ff8ddb61960d9cc4318833dc768f8e0fd13728316fbd04b2343bed23e5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:12.239540 kubelet[3389]: E1123 23:23:12.239381 3389 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50f28ff8ddb61960d9cc4318833dc768f8e0fd13728316fbd04b2343bed23e5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-tnnrs" Nov 23 23:23:12.239540 kubelet[3389]: E1123 23:23:12.239394 3389 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50f28ff8ddb61960d9cc4318833dc768f8e0fd13728316fbd04b2343bed23e5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-tnnrs" Nov 23 23:23:12.239641 kubelet[3389]: E1123 23:23:12.239425 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-tnnrs_calico-system(af6e60cf-c463-457e-be42-88e0f43ba038)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-tnnrs_calico-system(af6e60cf-c463-457e-be42-88e0f43ba038)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50f28ff8ddb61960d9cc4318833dc768f8e0fd13728316fbd04b2343bed23e5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-tnnrs" podUID="af6e60cf-c463-457e-be42-88e0f43ba038" Nov 23 23:23:12.242893 containerd[1900]: time="2025-11-23T23:23:12.242364592Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69658b8f65-kmrst,Uid:121b3bf3-7703-467a-986e-3619eec56340,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4573e1c0404c6f690826118cc1902bca3d699c1cdac4aa8a25a2df7695df830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:12.242975 kubelet[3389]: E1123 23:23:12.242491 3389 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4573e1c0404c6f690826118cc1902bca3d699c1cdac4aa8a25a2df7695df830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:12.242975 kubelet[3389]: E1123 23:23:12.242525 3389 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4573e1c0404c6f690826118cc1902bca3d699c1cdac4aa8a25a2df7695df830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69658b8f65-kmrst" Nov 23 23:23:12.242975 kubelet[3389]: E1123 23:23:12.242537 3389 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4573e1c0404c6f690826118cc1902bca3d699c1cdac4aa8a25a2df7695df830\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69658b8f65-kmrst" Nov 23 23:23:12.243043 kubelet[3389]: E1123 23:23:12.242560 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69658b8f65-kmrst_calico-system(121b3bf3-7703-467a-986e-3619eec56340)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69658b8f65-kmrst_calico-system(121b3bf3-7703-467a-986e-3619eec56340)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4573e1c0404c6f690826118cc1902bca3d699c1cdac4aa8a25a2df7695df830\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69658b8f65-kmrst" podUID="121b3bf3-7703-467a-986e-3619eec56340" Nov 23 23:23:12.253599 containerd[1900]: time="2025-11-23T23:23:12.253564359Z" level=error msg="Failed to destroy network for sandbox \"6e019c9a618f7f9b9ebdcc8f7c9c331b516d8c0c92b624aa07ca20c2af912b0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:12.254068 containerd[1900]: time="2025-11-23T23:23:12.253789926Z" level=error msg="Failed to destroy network for sandbox \"9741f3e1063b6ed8085469704681cc626645346e1f432d97cf669e55bc4c22b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:12.258176 containerd[1900]: time="2025-11-23T23:23:12.258141318Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6d6799dd75-7vqw6,Uid:862cceb0-6c7b-4371-aa49-25852268d3a1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e019c9a618f7f9b9ebdcc8f7c9c331b516d8c0c92b624aa07ca20c2af912b0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:12.258370 kubelet[3389]: E1123 23:23:12.258344 3389 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e019c9a618f7f9b9ebdcc8f7c9c331b516d8c0c92b624aa07ca20c2af912b0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:12.258407 kubelet[3389]: E1123 23:23:12.258378 3389 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e019c9a618f7f9b9ebdcc8f7c9c331b516d8c0c92b624aa07ca20c2af912b0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d6799dd75-7vqw6" Nov 23 23:23:12.258407 kubelet[3389]: E1123 23:23:12.258391 3389 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e019c9a618f7f9b9ebdcc8f7c9c331b516d8c0c92b624aa07ca20c2af912b0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d6799dd75-7vqw6" Nov 23 23:23:12.258484 kubelet[3389]: E1123 23:23:12.258460 3389 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d6799dd75-7vqw6_calico-apiserver(862cceb0-6c7b-4371-aa49-25852268d3a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d6799dd75-7vqw6_calico-apiserver(862cceb0-6c7b-4371-aa49-25852268d3a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e019c9a618f7f9b9ebdcc8f7c9c331b516d8c0c92b624aa07ca20c2af912b0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d6799dd75-7vqw6" podUID="862cceb0-6c7b-4371-aa49-25852268d3a1" Nov 23 23:23:12.261478 containerd[1900]: time="2025-11-23T23:23:12.261424412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f55996b-dw8vv,Uid:a2c796bc-ea26-4f69-bc19-822da4c56dfe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9741f3e1063b6ed8085469704681cc626645346e1f432d97cf669e55bc4c22b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:12.261618 kubelet[3389]: E1123 23:23:12.261556 3389 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9741f3e1063b6ed8085469704681cc626645346e1f432d97cf669e55bc4c22b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:12.261618 kubelet[3389]: E1123 23:23:12.261587 3389 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"9741f3e1063b6ed8085469704681cc626645346e1f432d97cf669e55bc4c22b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f55996b-dw8vv" Nov 23 23:23:12.261618 kubelet[3389]: E1123 23:23:12.261600 3389 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9741f3e1063b6ed8085469704681cc626645346e1f432d97cf669e55bc4c22b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f55996b-dw8vv" Nov 23 23:23:12.261677 kubelet[3389]: E1123 23:23:12.261624 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f55996b-dw8vv_calico-apiserver(a2c796bc-ea26-4f69-bc19-822da4c56dfe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f55996b-dw8vv_calico-apiserver(a2c796bc-ea26-4f69-bc19-822da4c56dfe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9741f3e1063b6ed8085469704681cc626645346e1f432d97cf669e55bc4c22b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f55996b-dw8vv" podUID="a2c796bc-ea26-4f69-bc19-822da4c56dfe" Nov 23 23:23:12.541233 kubelet[3389]: E1123 23:23:12.541078 3389 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Nov 23 23:23:12.541233 kubelet[3389]: E1123 23:23:12.541183 3389 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/967a6d88-39fe-4a70-87f6-08a7898b61d2-whisker-backend-key-pair podName:967a6d88-39fe-4a70-87f6-08a7898b61d2 nodeName:}" failed. No retries permitted until 2025-11-23 23:23:13.041163734 +0000 UTC m=+33.465179609 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/967a6d88-39fe-4a70-87f6-08a7898b61d2-whisker-backend-key-pair") pod "whisker-69dccc4cc6-s24qf" (UID: "967a6d88-39fe-4a70-87f6-08a7898b61d2") : failed to sync secret cache: timed out waiting for the condition Nov 23 23:23:12.541233 kubelet[3389]: E1123 23:23:12.541078 3389 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Nov 23 23:23:12.541233 kubelet[3389]: E1123 23:23:12.541314 3389 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/967a6d88-39fe-4a70-87f6-08a7898b61d2-whisker-ca-bundle podName:967a6d88-39fe-4a70-87f6-08a7898b61d2 nodeName:}" failed. No retries permitted until 2025-11-23 23:23:13.041287362 +0000 UTC m=+33.465303229 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/967a6d88-39fe-4a70-87f6-08a7898b61d2-whisker-ca-bundle") pod "whisker-69dccc4cc6-s24qf" (UID: "967a6d88-39fe-4a70-87f6-08a7898b61d2") : failed to sync configmap cache: timed out waiting for the condition Nov 23 23:23:12.541233 kubelet[3389]: E1123 23:23:12.541333 3389 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Nov 23 23:23:12.541614 kubelet[3389]: E1123 23:23:12.541352 3389 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/35c155aa-09f5-410a-80b4-7376a7871f0d-config-volume podName:35c155aa-09f5-410a-80b4-7376a7871f0d nodeName:}" failed. 
No retries permitted until 2025-11-23 23:23:13.041345276 +0000 UTC m=+33.465361143 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/35c155aa-09f5-410a-80b4-7376a7871f0d-config-volume") pod "coredns-668d6bf9bc-msbl7" (UID: "35c155aa-09f5-410a-80b4-7376a7871f0d") : failed to sync configmap cache: timed out waiting for the condition Nov 23 23:23:12.541614 kubelet[3389]: E1123 23:23:12.541093 3389 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Nov 23 23:23:12.541614 kubelet[3389]: E1123 23:23:12.541366 3389 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c83ee34d-1893-4fb7-89f8-9378dfe640fb-config-volume podName:c83ee34d-1893-4fb7-89f8-9378dfe640fb nodeName:}" failed. No retries permitted until 2025-11-23 23:23:13.041363117 +0000 UTC m=+33.465378984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c83ee34d-1893-4fb7-89f8-9378dfe640fb-config-volume") pod "coredns-668d6bf9bc-6mrr8" (UID: "c83ee34d-1893-4fb7-89f8-9378dfe640fb") : failed to sync configmap cache: timed out waiting for the condition Nov 23 23:23:12.639382 systemd[1]: Created slice kubepods-besteffort-pod55b22252_8f3d_48bd_88cb_ddab5e9d791f.slice - libcontainer container kubepods-besteffort-pod55b22252_8f3d_48bd_88cb_ddab5e9d791f.slice. 
Nov 23 23:23:12.641406 containerd[1900]: time="2025-11-23T23:23:12.641372459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7tjz6,Uid:55b22252-8f3d-48bd-88cb-ddab5e9d791f,Namespace:calico-system,Attempt:0,}" Nov 23 23:23:12.680921 containerd[1900]: time="2025-11-23T23:23:12.680783435Z" level=error msg="Failed to destroy network for sandbox \"65f0b2f47816cf5ee0a997e2a3719590177d3f5e542e0458f898e14795e52dc4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:12.682393 systemd[1]: run-netns-cni\x2dc2c4cf68\x2d8584\x2dc257\x2d2200\x2d8742c0bb4295.mount: Deactivated successfully. Nov 23 23:23:12.686356 containerd[1900]: time="2025-11-23T23:23:12.686288599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7tjz6,Uid:55b22252-8f3d-48bd-88cb-ddab5e9d791f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"65f0b2f47816cf5ee0a997e2a3719590177d3f5e542e0458f898e14795e52dc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:12.686636 kubelet[3389]: E1123 23:23:12.686610 3389 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65f0b2f47816cf5ee0a997e2a3719590177d3f5e542e0458f898e14795e52dc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:12.687032 kubelet[3389]: E1123 23:23:12.686904 3389 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"65f0b2f47816cf5ee0a997e2a3719590177d3f5e542e0458f898e14795e52dc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7tjz6" Nov 23 23:23:12.687032 kubelet[3389]: E1123 23:23:12.686927 3389 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65f0b2f47816cf5ee0a997e2a3719590177d3f5e542e0458f898e14795e52dc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7tjz6" Nov 23 23:23:12.687032 kubelet[3389]: E1123 23:23:12.686974 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7tjz6_calico-system(55b22252-8f3d-48bd-88cb-ddab5e9d791f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7tjz6_calico-system(55b22252-8f3d-48bd-88cb-ddab5e9d791f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65f0b2f47816cf5ee0a997e2a3719590177d3f5e542e0458f898e14795e52dc4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f" Nov 23 23:23:12.749282 containerd[1900]: time="2025-11-23T23:23:12.749256088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 23 23:23:13.166335 containerd[1900]: time="2025-11-23T23:23:13.166246444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6mrr8,Uid:c83ee34d-1893-4fb7-89f8-9378dfe640fb,Namespace:kube-system,Attempt:0,}" Nov 23 23:23:13.167748 containerd[1900]: 
time="2025-11-23T23:23:13.167705313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69dccc4cc6-s24qf,Uid:967a6d88-39fe-4a70-87f6-08a7898b61d2,Namespace:calico-system,Attempt:0,}" Nov 23 23:23:13.175991 containerd[1900]: time="2025-11-23T23:23:13.175887593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-msbl7,Uid:35c155aa-09f5-410a-80b4-7376a7871f0d,Namespace:kube-system,Attempt:0,}" Nov 23 23:23:13.223133 containerd[1900]: time="2025-11-23T23:23:13.223086933Z" level=error msg="Failed to destroy network for sandbox \"d374e3fc940952cd883fdd2bf75cf30bb7c15784669fc62cd4c625891afec4e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:13.226962 containerd[1900]: time="2025-11-23T23:23:13.226867299Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6mrr8,Uid:c83ee34d-1893-4fb7-89f8-9378dfe640fb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d374e3fc940952cd883fdd2bf75cf30bb7c15784669fc62cd4c625891afec4e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:13.227591 kubelet[3389]: E1123 23:23:13.227173 3389 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d374e3fc940952cd883fdd2bf75cf30bb7c15784669fc62cd4c625891afec4e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:13.227591 kubelet[3389]: E1123 23:23:13.227232 3389 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"d374e3fc940952cd883fdd2bf75cf30bb7c15784669fc62cd4c625891afec4e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6mrr8" Nov 23 23:23:13.227591 kubelet[3389]: E1123 23:23:13.227248 3389 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d374e3fc940952cd883fdd2bf75cf30bb7c15784669fc62cd4c625891afec4e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6mrr8" Nov 23 23:23:13.228742 kubelet[3389]: E1123 23:23:13.228694 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6mrr8_kube-system(c83ee34d-1893-4fb7-89f8-9378dfe640fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6mrr8_kube-system(c83ee34d-1893-4fb7-89f8-9378dfe640fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d374e3fc940952cd883fdd2bf75cf30bb7c15784669fc62cd4c625891afec4e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6mrr8" podUID="c83ee34d-1893-4fb7-89f8-9378dfe640fb" Nov 23 23:23:13.239727 containerd[1900]: time="2025-11-23T23:23:13.239648779Z" level=error msg="Failed to destroy network for sandbox \"a93045a85816c45bafb65ec55f99c8c00c2d35484b87c8120c42d61622b325b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Nov 23 23:23:13.242227 containerd[1900]: time="2025-11-23T23:23:13.242196722Z" level=error msg="Failed to destroy network for sandbox \"c5b2d02e142bc4555782b5a410d930b76f7f693f4bf6974e44199dd817042741\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:13.245036 containerd[1900]: time="2025-11-23T23:23:13.245009930Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69dccc4cc6-s24qf,Uid:967a6d88-39fe-4a70-87f6-08a7898b61d2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a93045a85816c45bafb65ec55f99c8c00c2d35484b87c8120c42d61622b325b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:13.245425 kubelet[3389]: E1123 23:23:13.245394 3389 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a93045a85816c45bafb65ec55f99c8c00c2d35484b87c8120c42d61622b325b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:13.245493 kubelet[3389]: E1123 23:23:13.245437 3389 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a93045a85816c45bafb65ec55f99c8c00c2d35484b87c8120c42d61622b325b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69dccc4cc6-s24qf" Nov 23 23:23:13.245493 kubelet[3389]: E1123 23:23:13.245452 3389 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a93045a85816c45bafb65ec55f99c8c00c2d35484b87c8120c42d61622b325b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69dccc4cc6-s24qf" Nov 23 23:23:13.245493 kubelet[3389]: E1123 23:23:13.245483 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69dccc4cc6-s24qf_calico-system(967a6d88-39fe-4a70-87f6-08a7898b61d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69dccc4cc6-s24qf_calico-system(967a6d88-39fe-4a70-87f6-08a7898b61d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a93045a85816c45bafb65ec55f99c8c00c2d35484b87c8120c42d61622b325b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69dccc4cc6-s24qf" podUID="967a6d88-39fe-4a70-87f6-08a7898b61d2" Nov 23 23:23:13.248462 containerd[1900]: time="2025-11-23T23:23:13.248427429Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-msbl7,Uid:35c155aa-09f5-410a-80b4-7376a7871f0d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b2d02e142bc4555782b5a410d930b76f7f693f4bf6974e44199dd817042741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:13.248854 kubelet[3389]: E1123 23:23:13.248801 3389 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"c5b2d02e142bc4555782b5a410d930b76f7f693f4bf6974e44199dd817042741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:23:13.248854 kubelet[3389]: E1123 23:23:13.248834 3389 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b2d02e142bc4555782b5a410d930b76f7f693f4bf6974e44199dd817042741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-msbl7" Nov 23 23:23:13.249026 kubelet[3389]: E1123 23:23:13.248956 3389 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b2d02e142bc4555782b5a410d930b76f7f693f4bf6974e44199dd817042741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-msbl7" Nov 23 23:23:13.249129 kubelet[3389]: E1123 23:23:13.249010 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-msbl7_kube-system(35c155aa-09f5-410a-80b4-7376a7871f0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-msbl7_kube-system(35c155aa-09f5-410a-80b4-7376a7871f0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5b2d02e142bc4555782b5a410d930b76f7f693f4bf6974e44199dd817042741\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-msbl7" 
podUID="35c155aa-09f5-410a-80b4-7376a7871f0d" Nov 23 23:23:13.271270 systemd[1]: run-netns-cni\x2db840c22d\x2db636\x2ddb70\x2d09ef\x2d4a27ab4d03ef.mount: Deactivated successfully. Nov 23 23:23:19.290291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2504111413.mount: Deactivated successfully. Nov 23 23:23:19.671716 containerd[1900]: time="2025-11-23T23:23:19.671263223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:19.674360 containerd[1900]: time="2025-11-23T23:23:19.674330221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 23 23:23:19.677609 containerd[1900]: time="2025-11-23T23:23:19.677555152Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:19.681686 containerd[1900]: time="2025-11-23T23:23:19.681642885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:23:19.681993 containerd[1900]: time="2025-11-23T23:23:19.681862684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.932578668s" Nov 23 23:23:19.681993 containerd[1900]: time="2025-11-23T23:23:19.681887453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 23 23:23:19.691960 containerd[1900]: 
time="2025-11-23T23:23:19.691817710Z" level=info msg="CreateContainer within sandbox \"79d4c3639f09830fa009882c9bac7279166d8576b6c5f689c4eba6782f9b187c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 23 23:23:19.717514 containerd[1900]: time="2025-11-23T23:23:19.717281003Z" level=info msg="Container 23f45431d04380486f0ce808ae57a8acd2064e766ff7517ef21cadd55cee380d: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:23:19.734773 containerd[1900]: time="2025-11-23T23:23:19.734741403Z" level=info msg="CreateContainer within sandbox \"79d4c3639f09830fa009882c9bac7279166d8576b6c5f689c4eba6782f9b187c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"23f45431d04380486f0ce808ae57a8acd2064e766ff7517ef21cadd55cee380d\"" Nov 23 23:23:19.735647 containerd[1900]: time="2025-11-23T23:23:19.735622454Z" level=info msg="StartContainer for \"23f45431d04380486f0ce808ae57a8acd2064e766ff7517ef21cadd55cee380d\"" Nov 23 23:23:19.737507 containerd[1900]: time="2025-11-23T23:23:19.737480959Z" level=info msg="connecting to shim 23f45431d04380486f0ce808ae57a8acd2064e766ff7517ef21cadd55cee380d" address="unix:///run/containerd/s/610e43939edb5eda5a92e74c31243a103047b30d7079d82aa18fe9b224933277" protocol=ttrpc version=3 Nov 23 23:23:19.757488 systemd[1]: Started cri-containerd-23f45431d04380486f0ce808ae57a8acd2064e766ff7517ef21cadd55cee380d.scope - libcontainer container 23f45431d04380486f0ce808ae57a8acd2064e766ff7517ef21cadd55cee380d. Nov 23 23:23:19.830151 containerd[1900]: time="2025-11-23T23:23:19.830102793Z" level=info msg="StartContainer for \"23f45431d04380486f0ce808ae57a8acd2064e766ff7517ef21cadd55cee380d\" returns successfully" Nov 23 23:23:20.194089 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 23 23:23:20.194467 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 23 23:23:20.392749 kubelet[3389]: I1123 23:23:20.392711 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967a6d88-39fe-4a70-87f6-08a7898b61d2-whisker-ca-bundle\") pod \"967a6d88-39fe-4a70-87f6-08a7898b61d2\" (UID: \"967a6d88-39fe-4a70-87f6-08a7898b61d2\") " Nov 23 23:23:20.393080 kubelet[3389]: I1123 23:23:20.392762 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/967a6d88-39fe-4a70-87f6-08a7898b61d2-whisker-backend-key-pair\") pod \"967a6d88-39fe-4a70-87f6-08a7898b61d2\" (UID: \"967a6d88-39fe-4a70-87f6-08a7898b61d2\") " Nov 23 23:23:20.393080 kubelet[3389]: I1123 23:23:20.392794 3389 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2mnr\" (UniqueName: \"kubernetes.io/projected/967a6d88-39fe-4a70-87f6-08a7898b61d2-kube-api-access-t2mnr\") pod \"967a6d88-39fe-4a70-87f6-08a7898b61d2\" (UID: \"967a6d88-39fe-4a70-87f6-08a7898b61d2\") " Nov 23 23:23:20.395539 kubelet[3389]: I1123 23:23:20.395505 3389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/967a6d88-39fe-4a70-87f6-08a7898b61d2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "967a6d88-39fe-4a70-87f6-08a7898b61d2" (UID: "967a6d88-39fe-4a70-87f6-08a7898b61d2"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 23 23:23:20.399937 systemd[1]: var-lib-kubelet-pods-967a6d88\x2d39fe\x2d4a70\x2d87f6\x2d08a7898b61d2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 23 23:23:20.403014 kubelet[3389]: I1123 23:23:20.402898 3389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/967a6d88-39fe-4a70-87f6-08a7898b61d2-kube-api-access-t2mnr" (OuterVolumeSpecName: "kube-api-access-t2mnr") pod "967a6d88-39fe-4a70-87f6-08a7898b61d2" (UID: "967a6d88-39fe-4a70-87f6-08a7898b61d2"). InnerVolumeSpecName "kube-api-access-t2mnr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 23 23:23:20.403355 systemd[1]: var-lib-kubelet-pods-967a6d88\x2d39fe\x2d4a70\x2d87f6\x2d08a7898b61d2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt2mnr.mount: Deactivated successfully. Nov 23 23:23:20.404549 kubelet[3389]: I1123 23:23:20.404314 3389 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/967a6d88-39fe-4a70-87f6-08a7898b61d2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "967a6d88-39fe-4a70-87f6-08a7898b61d2" (UID: "967a6d88-39fe-4a70-87f6-08a7898b61d2"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 23 23:23:20.493755 kubelet[3389]: I1123 23:23:20.493657 3389 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/967a6d88-39fe-4a70-87f6-08a7898b61d2-whisker-backend-key-pair\") on node \"ci-4459.2.1-a-2a92a9cf5f\" DevicePath \"\"" Nov 23 23:23:20.493755 kubelet[3389]: I1123 23:23:20.493691 3389 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t2mnr\" (UniqueName: \"kubernetes.io/projected/967a6d88-39fe-4a70-87f6-08a7898b61d2-kube-api-access-t2mnr\") on node \"ci-4459.2.1-a-2a92a9cf5f\" DevicePath \"\"" Nov 23 23:23:20.493755 kubelet[3389]: I1123 23:23:20.493699 3389 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/967a6d88-39fe-4a70-87f6-08a7898b61d2-whisker-ca-bundle\") on node \"ci-4459.2.1-a-2a92a9cf5f\" DevicePath \"\"" Nov 23 23:23:20.778166 systemd[1]: Removed slice kubepods-besteffort-pod967a6d88_39fe_4a70_87f6_08a7898b61d2.slice - libcontainer container kubepods-besteffort-pod967a6d88_39fe_4a70_87f6_08a7898b61d2.slice. Nov 23 23:23:20.793187 kubelet[3389]: I1123 23:23:20.793020 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lrk2z" podStartSLOduration=2.451222068 podStartE2EDuration="20.793006549s" podCreationTimestamp="2025-11-23 23:23:00 +0000 UTC" firstStartedPulling="2025-11-23 23:23:01.340866435 +0000 UTC m=+21.764882310" lastFinishedPulling="2025-11-23 23:23:19.682650924 +0000 UTC m=+40.106666791" observedRunningTime="2025-11-23 23:23:20.788904463 +0000 UTC m=+41.212920330" watchObservedRunningTime="2025-11-23 23:23:20.793006549 +0000 UTC m=+41.217022416" Nov 23 23:23:20.846749 systemd[1]: Created slice kubepods-besteffort-pod76b4b8c0_aa90_4b9a_b362_2f2d8cafe8c7.slice - libcontainer container kubepods-besteffort-pod76b4b8c0_aa90_4b9a_b362_2f2d8cafe8c7.slice. 
Nov 23 23:23:20.895608 kubelet[3389]: I1123 23:23:20.895559 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g64k9\" (UniqueName: \"kubernetes.io/projected/76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7-kube-api-access-g64k9\") pod \"whisker-697f894766-kwrfw\" (UID: \"76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7\") " pod="calico-system/whisker-697f894766-kwrfw" Nov 23 23:23:20.895608 kubelet[3389]: I1123 23:23:20.895597 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7-whisker-ca-bundle\") pod \"whisker-697f894766-kwrfw\" (UID: \"76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7\") " pod="calico-system/whisker-697f894766-kwrfw" Nov 23 23:23:20.896044 kubelet[3389]: I1123 23:23:20.895636 3389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7-whisker-backend-key-pair\") pod \"whisker-697f894766-kwrfw\" (UID: \"76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7\") " pod="calico-system/whisker-697f894766-kwrfw" Nov 23 23:23:21.150795 containerd[1900]: time="2025-11-23T23:23:21.150742541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-697f894766-kwrfw,Uid:76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7,Namespace:calico-system,Attempt:0,}" Nov 23 23:23:21.257440 systemd-networkd[1479]: calibc23c14b9fe: Link UP Nov 23 23:23:21.260557 systemd-networkd[1479]: calibc23c14b9fe: Gained carrier Nov 23 23:23:21.280290 containerd[1900]: 2025-11-23 23:23:21.174 [INFO][4498] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:23:21.280290 containerd[1900]: 2025-11-23 23:23:21.194 [INFO][4498] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459.2.1--a--2a92a9cf5f-k8s-whisker--697f894766--kwrfw-eth0 whisker-697f894766- calico-system 76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7 927 0 2025-11-23 23:23:20 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:697f894766 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.1-a-2a92a9cf5f whisker-697f894766-kwrfw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibc23c14b9fe [] [] }} ContainerID="01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c" Namespace="calico-system" Pod="whisker-697f894766-kwrfw" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-whisker--697f894766--kwrfw-" Nov 23 23:23:21.280290 containerd[1900]: 2025-11-23 23:23:21.194 [INFO][4498] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c" Namespace="calico-system" Pod="whisker-697f894766-kwrfw" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-whisker--697f894766--kwrfw-eth0" Nov 23 23:23:21.280290 containerd[1900]: 2025-11-23 23:23:21.212 [INFO][4510] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c" HandleID="k8s-pod-network.01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-whisker--697f894766--kwrfw-eth0" Nov 23 23:23:21.280760 containerd[1900]: 2025-11-23 23:23:21.212 [INFO][4510] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c" HandleID="k8s-pod-network.01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-whisker--697f894766--kwrfw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c1630), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4459.2.1-a-2a92a9cf5f", "pod":"whisker-697f894766-kwrfw", "timestamp":"2025-11-23 23:23:21.212456035 +0000 UTC"}, Hostname:"ci-4459.2.1-a-2a92a9cf5f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:23:21.280760 containerd[1900]: 2025-11-23 23:23:21.212 [INFO][4510] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:23:21.280760 containerd[1900]: 2025-11-23 23:23:21.212 [INFO][4510] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:23:21.280760 containerd[1900]: 2025-11-23 23:23:21.212 [INFO][4510] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-2a92a9cf5f' Nov 23 23:23:21.280760 containerd[1900]: 2025-11-23 23:23:21.217 [INFO][4510] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:21.280760 containerd[1900]: 2025-11-23 23:23:21.220 [INFO][4510] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:21.280760 containerd[1900]: 2025-11-23 23:23:21.223 [INFO][4510] ipam/ipam.go 511: Trying affinity for 192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:21.280760 containerd[1900]: 2025-11-23 23:23:21.224 [INFO][4510] ipam/ipam.go 158: Attempting to load block cidr=192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:21.280760 containerd[1900]: 2025-11-23 23:23:21.226 [INFO][4510] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:21.281329 containerd[1900]: 2025-11-23 23:23:21.226 [INFO][4510] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.53.192/26 
handle="k8s-pod-network.01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:21.281329 containerd[1900]: 2025-11-23 23:23:21.227 [INFO][4510] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c Nov 23 23:23:21.281329 containerd[1900]: 2025-11-23 23:23:21.231 [INFO][4510] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.53.192/26 handle="k8s-pod-network.01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:21.281329 containerd[1900]: 2025-11-23 23:23:21.242 [INFO][4510] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.53.193/26] block=192.168.53.192/26 handle="k8s-pod-network.01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:21.281329 containerd[1900]: 2025-11-23 23:23:21.243 [INFO][4510] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.53.193/26] handle="k8s-pod-network.01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:21.281329 containerd[1900]: 2025-11-23 23:23:21.243 [INFO][4510] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:23:21.281329 containerd[1900]: 2025-11-23 23:23:21.243 [INFO][4510] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.53.193/26] IPv6=[] ContainerID="01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c" HandleID="k8s-pod-network.01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-whisker--697f894766--kwrfw-eth0" Nov 23 23:23:21.282196 containerd[1900]: 2025-11-23 23:23:21.247 [INFO][4498] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c" Namespace="calico-system" Pod="whisker-697f894766-kwrfw" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-whisker--697f894766--kwrfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--2a92a9cf5f-k8s-whisker--697f894766--kwrfw-eth0", GenerateName:"whisker-697f894766-", Namespace:"calico-system", SelfLink:"", UID:"76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"697f894766", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-2a92a9cf5f", ContainerID:"", Pod:"whisker-697f894766-kwrfw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.53.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"calibc23c14b9fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:21.282196 containerd[1900]: 2025-11-23 23:23:21.247 [INFO][4498] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.53.193/32] ContainerID="01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c" Namespace="calico-system" Pod="whisker-697f894766-kwrfw" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-whisker--697f894766--kwrfw-eth0" Nov 23 23:23:21.282259 containerd[1900]: 2025-11-23 23:23:21.247 [INFO][4498] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc23c14b9fe ContainerID="01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c" Namespace="calico-system" Pod="whisker-697f894766-kwrfw" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-whisker--697f894766--kwrfw-eth0" Nov 23 23:23:21.282259 containerd[1900]: 2025-11-23 23:23:21.261 [INFO][4498] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c" Namespace="calico-system" Pod="whisker-697f894766-kwrfw" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-whisker--697f894766--kwrfw-eth0" Nov 23 23:23:21.282287 containerd[1900]: 2025-11-23 23:23:21.261 [INFO][4498] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c" Namespace="calico-system" Pod="whisker-697f894766-kwrfw" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-whisker--697f894766--kwrfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--2a92a9cf5f-k8s-whisker--697f894766--kwrfw-eth0", GenerateName:"whisker-697f894766-", Namespace:"calico-system", SelfLink:"", 
UID:"76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"697f894766", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-2a92a9cf5f", ContainerID:"01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c", Pod:"whisker-697f894766-kwrfw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.53.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibc23c14b9fe", MAC:"56:4b:c7:2b:d8:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:21.282393 containerd[1900]: 2025-11-23 23:23:21.277 [INFO][4498] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c" Namespace="calico-system" Pod="whisker-697f894766-kwrfw" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-whisker--697f894766--kwrfw-eth0" Nov 23 23:23:21.331886 containerd[1900]: time="2025-11-23T23:23:21.331849123Z" level=info msg="connecting to shim 01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c" address="unix:///run/containerd/s/9b11719f0fd88b90c092291ba3c027a142718cb15653bfe38bf98ffb0b294c60" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:21.350435 systemd[1]: Started 
cri-containerd-01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c.scope - libcontainer container 01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c. Nov 23 23:23:21.379022 containerd[1900]: time="2025-11-23T23:23:21.378993929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-697f894766-kwrfw,Uid:76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7,Namespace:calico-system,Attempt:0,} returns sandbox id \"01fe23708baf1f96c7ec0d10f498c147bb2e95325460d57bde538c0585f7a51c\"" Nov 23 23:23:21.380911 containerd[1900]: time="2025-11-23T23:23:21.380732655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:23:21.640168 kubelet[3389]: I1123 23:23:21.639920 3389 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="967a6d88-39fe-4a70-87f6-08a7898b61d2" path="/var/lib/kubelet/pods/967a6d88-39fe-4a70-87f6-08a7898b61d2/volumes" Nov 23 23:23:21.648781 containerd[1900]: time="2025-11-23T23:23:21.648749959Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:21.651522 containerd[1900]: time="2025-11-23T23:23:21.651478194Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:23:21.651716 containerd[1900]: time="2025-11-23T23:23:21.651533764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:23:21.651853 kubelet[3389]: E1123 23:23:21.651630 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:23:21.651853 kubelet[3389]: E1123 23:23:21.651686 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:23:21.655884 kubelet[3389]: E1123 23:23:21.655801 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:242e1aff532942968466fb3afe4b9750,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g64k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePol
icy{},RestartPolicy:nil,} start failed in pod whisker-697f894766-kwrfw_calico-system(76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:21.658024 containerd[1900]: time="2025-11-23T23:23:21.657977946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:23:21.901849 containerd[1900]: time="2025-11-23T23:23:21.901817124Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:21.905713 containerd[1900]: time="2025-11-23T23:23:21.905671810Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:23:21.906401 containerd[1900]: time="2025-11-23T23:23:21.905741413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:23:21.906440 kubelet[3389]: E1123 23:23:21.906406 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:23:21.906486 kubelet[3389]: E1123 23:23:21.906461 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:23:21.906597 kubelet[3389]: E1123 23:23:21.906563 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g64k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMes
sagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-697f894766-kwrfw_calico-system(76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:21.907878 kubelet[3389]: E1123 23:23:21.907840 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-697f894766-kwrfw" podUID="76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7" Nov 23 23:23:21.922863 systemd-networkd[1479]: vxlan.calico: Link UP Nov 23 23:23:21.922868 systemd-networkd[1479]: vxlan.calico: Gained carrier Nov 23 23:23:22.780699 kubelet[3389]: E1123 23:23:22.780311 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-697f894766-kwrfw" podUID="76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7" Nov 23 23:23:22.866418 systemd-networkd[1479]: calibc23c14b9fe: Gained IPv6LL Nov 23 23:23:23.637565 containerd[1900]: time="2025-11-23T23:23:23.637519871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f55996b-nzxhv,Uid:a24d7295-0bba-4952-852f-37a344f80dea,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:23:23.638520 containerd[1900]: time="2025-11-23T23:23:23.638045855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-tnnrs,Uid:af6e60cf-c463-457e-be42-88e0f43ba038,Namespace:calico-system,Attempt:0,}" Nov 23 23:23:23.638520 containerd[1900]: time="2025-11-23T23:23:23.637846937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f55996b-dw8vv,Uid:a2c796bc-ea26-4f69-bc19-822da4c56dfe,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:23:23.698461 systemd-networkd[1479]: vxlan.calico: Gained IPv6LL Nov 23 23:23:23.782517 systemd-networkd[1479]: cali5c63ce352e0: Link UP Nov 23 23:23:23.782910 systemd-networkd[1479]: cali5c63ce352e0: Gained carrier Nov 23 23:23:23.797871 containerd[1900]: 2025-11-23 23:23:23.707 [INFO][4783] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--dw8vv-eth0 calico-apiserver-5f55996b- calico-apiserver a2c796bc-ea26-4f69-bc19-822da4c56dfe 855 0 2025-11-23 23:22:55 +0000 
UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f55996b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.1-a-2a92a9cf5f calico-apiserver-5f55996b-dw8vv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5c63ce352e0 [] [] }} ContainerID="92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235" Namespace="calico-apiserver" Pod="calico-apiserver-5f55996b-dw8vv" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--dw8vv-" Nov 23 23:23:23.797871 containerd[1900]: 2025-11-23 23:23:23.707 [INFO][4783] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235" Namespace="calico-apiserver" Pod="calico-apiserver-5f55996b-dw8vv" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--dw8vv-eth0" Nov 23 23:23:23.797871 containerd[1900]: 2025-11-23 23:23:23.735 [INFO][4812] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235" HandleID="k8s-pod-network.92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--dw8vv-eth0" Nov 23 23:23:23.798010 containerd[1900]: 2025-11-23 23:23:23.736 [INFO][4812] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235" HandleID="k8s-pod-network.92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--dw8vv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.1-a-2a92a9cf5f", 
"pod":"calico-apiserver-5f55996b-dw8vv", "timestamp":"2025-11-23 23:23:23.735987829 +0000 UTC"}, Hostname:"ci-4459.2.1-a-2a92a9cf5f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:23:23.798010 containerd[1900]: 2025-11-23 23:23:23.736 [INFO][4812] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:23:23.798010 containerd[1900]: 2025-11-23 23:23:23.736 [INFO][4812] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:23:23.798010 containerd[1900]: 2025-11-23 23:23:23.736 [INFO][4812] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-2a92a9cf5f' Nov 23 23:23:23.798010 containerd[1900]: 2025-11-23 23:23:23.745 [INFO][4812] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:23.798010 containerd[1900]: 2025-11-23 23:23:23.749 [INFO][4812] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:23.798010 containerd[1900]: 2025-11-23 23:23:23.754 [INFO][4812] ipam/ipam.go 511: Trying affinity for 192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:23.798010 containerd[1900]: 2025-11-23 23:23:23.756 [INFO][4812] ipam/ipam.go 158: Attempting to load block cidr=192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:23.798010 containerd[1900]: 2025-11-23 23:23:23.758 [INFO][4812] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:23.798344 containerd[1900]: 2025-11-23 23:23:23.758 [INFO][4812] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.53.192/26 handle="k8s-pod-network.92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235" 
host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:23.798344 containerd[1900]: 2025-11-23 23:23:23.759 [INFO][4812] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235 Nov 23 23:23:23.798344 containerd[1900]: 2025-11-23 23:23:23.764 [INFO][4812] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.53.192/26 handle="k8s-pod-network.92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:23.798344 containerd[1900]: 2025-11-23 23:23:23.774 [INFO][4812] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.53.194/26] block=192.168.53.192/26 handle="k8s-pod-network.92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:23.798344 containerd[1900]: 2025-11-23 23:23:23.774 [INFO][4812] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.53.194/26] handle="k8s-pod-network.92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:23.798344 containerd[1900]: 2025-11-23 23:23:23.774 [INFO][4812] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:23:23.798344 containerd[1900]: 2025-11-23 23:23:23.774 [INFO][4812] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.53.194/26] IPv6=[] ContainerID="92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235" HandleID="k8s-pod-network.92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--dw8vv-eth0" Nov 23 23:23:23.798444 containerd[1900]: 2025-11-23 23:23:23.779 [INFO][4783] cni-plugin/k8s.go 418: Populated endpoint ContainerID="92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235" Namespace="calico-apiserver" Pod="calico-apiserver-5f55996b-dw8vv" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--dw8vv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--dw8vv-eth0", GenerateName:"calico-apiserver-5f55996b-", Namespace:"calico-apiserver", SelfLink:"", UID:"a2c796bc-ea26-4f69-bc19-822da4c56dfe", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f55996b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-2a92a9cf5f", ContainerID:"", Pod:"calico-apiserver-5f55996b-dw8vv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.53.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c63ce352e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:23.798482 containerd[1900]: 2025-11-23 23:23:23.779 [INFO][4783] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.53.194/32] ContainerID="92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235" Namespace="calico-apiserver" Pod="calico-apiserver-5f55996b-dw8vv" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--dw8vv-eth0" Nov 23 23:23:23.798482 containerd[1900]: 2025-11-23 23:23:23.779 [INFO][4783] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c63ce352e0 ContainerID="92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235" Namespace="calico-apiserver" Pod="calico-apiserver-5f55996b-dw8vv" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--dw8vv-eth0" Nov 23 23:23:23.798482 containerd[1900]: 2025-11-23 23:23:23.783 [INFO][4783] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235" Namespace="calico-apiserver" Pod="calico-apiserver-5f55996b-dw8vv" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--dw8vv-eth0" Nov 23 23:23:23.798524 containerd[1900]: 2025-11-23 23:23:23.783 [INFO][4783] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235" Namespace="calico-apiserver" Pod="calico-apiserver-5f55996b-dw8vv" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--dw8vv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--dw8vv-eth0", GenerateName:"calico-apiserver-5f55996b-", Namespace:"calico-apiserver", SelfLink:"", UID:"a2c796bc-ea26-4f69-bc19-822da4c56dfe", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f55996b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-2a92a9cf5f", ContainerID:"92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235", Pod:"calico-apiserver-5f55996b-dw8vv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5c63ce352e0", MAC:"b6:a1:17:ac:9a:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:23.798554 containerd[1900]: 2025-11-23 23:23:23.794 [INFO][4783] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235" Namespace="calico-apiserver" Pod="calico-apiserver-5f55996b-dw8vv" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--dw8vv-eth0" Nov 23 23:23:23.854213 containerd[1900]: time="2025-11-23T23:23:23.853558716Z" level=info 
msg="connecting to shim 92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235" address="unix:///run/containerd/s/d2f85d7926ab6e4ef55f1e3d05894527a71040ee45a65da9791e5090377bb845" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:23.878418 systemd[1]: Started cri-containerd-92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235.scope - libcontainer container 92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235. Nov 23 23:23:23.906420 containerd[1900]: time="2025-11-23T23:23:23.906393146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f55996b-dw8vv,Uid:a2c796bc-ea26-4f69-bc19-822da4c56dfe,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"92d398a2b50d7d727f298c85197947aff023bada217cedea341d466b2a1e4235\"" Nov 23 23:23:23.908788 containerd[1900]: time="2025-11-23T23:23:23.908717337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:23:23.927334 systemd-networkd[1479]: caliaa9b5ac0840: Link UP Nov 23 23:23:23.928325 systemd-networkd[1479]: caliaa9b5ac0840: Gained carrier Nov 23 23:23:23.951739 containerd[1900]: 2025-11-23 23:23:23.696 [INFO][4772] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--nzxhv-eth0 calico-apiserver-5f55996b- calico-apiserver a24d7295-0bba-4952-852f-37a344f80dea 844 0 2025-11-23 23:22:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f55996b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.1-a-2a92a9cf5f calico-apiserver-5f55996b-nzxhv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaa9b5ac0840 [] [] }} ContainerID="51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213" 
Namespace="calico-apiserver" Pod="calico-apiserver-5f55996b-nzxhv" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--nzxhv-" Nov 23 23:23:23.951739 containerd[1900]: 2025-11-23 23:23:23.696 [INFO][4772] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213" Namespace="calico-apiserver" Pod="calico-apiserver-5f55996b-nzxhv" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--nzxhv-eth0" Nov 23 23:23:23.951739 containerd[1900]: 2025-11-23 23:23:23.745 [INFO][4810] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213" HandleID="k8s-pod-network.51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--nzxhv-eth0" Nov 23 23:23:23.951897 containerd[1900]: 2025-11-23 23:23:23.745 [INFO][4810] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213" HandleID="k8s-pod-network.51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--nzxhv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3b30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.1-a-2a92a9cf5f", "pod":"calico-apiserver-5f55996b-nzxhv", "timestamp":"2025-11-23 23:23:23.745276322 +0000 UTC"}, Hostname:"ci-4459.2.1-a-2a92a9cf5f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:23:23.951897 containerd[1900]: 2025-11-23 23:23:23.745 [INFO][4810] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 23 23:23:23.951897 containerd[1900]: 2025-11-23 23:23:23.775 [INFO][4810] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:23:23.951897 containerd[1900]: 2025-11-23 23:23:23.775 [INFO][4810] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-2a92a9cf5f' Nov 23 23:23:23.951897 containerd[1900]: 2025-11-23 23:23:23.846 [INFO][4810] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:23.951897 containerd[1900]: 2025-11-23 23:23:23.853 [INFO][4810] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:23.951897 containerd[1900]: 2025-11-23 23:23:23.858 [INFO][4810] ipam/ipam.go 511: Trying affinity for 192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:23.951897 containerd[1900]: 2025-11-23 23:23:23.864 [INFO][4810] ipam/ipam.go 158: Attempting to load block cidr=192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:23.951897 containerd[1900]: 2025-11-23 23:23:23.866 [INFO][4810] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:23.952094 containerd[1900]: 2025-11-23 23:23:23.866 [INFO][4810] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.53.192/26 handle="k8s-pod-network.51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:23.952094 containerd[1900]: 2025-11-23 23:23:23.867 [INFO][4810] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213 Nov 23 23:23:23.952094 containerd[1900]: 2025-11-23 23:23:23.908 [INFO][4810] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.53.192/26 handle="k8s-pod-network.51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213" 
host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:23.952094 containerd[1900]: 2025-11-23 23:23:23.917 [INFO][4810] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.53.195/26] block=192.168.53.192/26 handle="k8s-pod-network.51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:23.952094 containerd[1900]: 2025-11-23 23:23:23.917 [INFO][4810] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.53.195/26] handle="k8s-pod-network.51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:23.952094 containerd[1900]: 2025-11-23 23:23:23.917 [INFO][4810] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:23:23.952094 containerd[1900]: 2025-11-23 23:23:23.917 [INFO][4810] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.53.195/26] IPv6=[] ContainerID="51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213" HandleID="k8s-pod-network.51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--nzxhv-eth0" Nov 23 23:23:23.952323 containerd[1900]: 2025-11-23 23:23:23.923 [INFO][4772] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213" Namespace="calico-apiserver" Pod="calico-apiserver-5f55996b-nzxhv" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--nzxhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--nzxhv-eth0", GenerateName:"calico-apiserver-5f55996b-", Namespace:"calico-apiserver", SelfLink:"", UID:"a24d7295-0bba-4952-852f-37a344f80dea", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 22, 55, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f55996b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-2a92a9cf5f", ContainerID:"", Pod:"calico-apiserver-5f55996b-nzxhv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa9b5ac0840", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:23.952379 containerd[1900]: 2025-11-23 23:23:23.923 [INFO][4772] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.53.195/32] ContainerID="51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213" Namespace="calico-apiserver" Pod="calico-apiserver-5f55996b-nzxhv" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--nzxhv-eth0" Nov 23 23:23:23.952379 containerd[1900]: 2025-11-23 23:23:23.923 [INFO][4772] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa9b5ac0840 ContainerID="51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213" Namespace="calico-apiserver" Pod="calico-apiserver-5f55996b-nzxhv" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--nzxhv-eth0" Nov 23 23:23:23.952379 containerd[1900]: 2025-11-23 23:23:23.929 [INFO][4772] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213" Namespace="calico-apiserver" Pod="calico-apiserver-5f55996b-nzxhv" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--nzxhv-eth0" Nov 23 23:23:23.952428 containerd[1900]: 2025-11-23 23:23:23.929 [INFO][4772] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213" Namespace="calico-apiserver" Pod="calico-apiserver-5f55996b-nzxhv" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--nzxhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--nzxhv-eth0", GenerateName:"calico-apiserver-5f55996b-", Namespace:"calico-apiserver", SelfLink:"", UID:"a24d7295-0bba-4952-852f-37a344f80dea", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f55996b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-2a92a9cf5f", ContainerID:"51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213", Pod:"calico-apiserver-5f55996b-nzxhv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa9b5ac0840", MAC:"7e:1f:96:75:71:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:23.952465 containerd[1900]: 2025-11-23 23:23:23.948 [INFO][4772] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213" Namespace="calico-apiserver" Pod="calico-apiserver-5f55996b-nzxhv" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--5f55996b--nzxhv-eth0" Nov 23 23:23:23.987234 systemd-networkd[1479]: calib2a64f1359a: Link UP Nov 23 23:23:23.987780 systemd-networkd[1479]: calib2a64f1359a: Gained carrier Nov 23 23:23:24.002520 containerd[1900]: time="2025-11-23T23:23:24.002465134Z" level=info msg="connecting to shim 51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213" address="unix:///run/containerd/s/85c0eeef2e07238c18e91a12d1e5d0a6a0efd0754c58d7b633daf8081fc4754d" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:24.009279 containerd[1900]: 2025-11-23 23:23:23.711 [INFO][4792] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--2a92a9cf5f-k8s-goldmane--666569f655--tnnrs-eth0 goldmane-666569f655- calico-system af6e60cf-c463-457e-be42-88e0f43ba038 858 0 2025-11-23 23:22:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.1-a-2a92a9cf5f goldmane-666569f655-tnnrs eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib2a64f1359a [] [] }} ContainerID="36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e" Namespace="calico-system" Pod="goldmane-666569f655-tnnrs" 
WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-goldmane--666569f655--tnnrs-" Nov 23 23:23:24.009279 containerd[1900]: 2025-11-23 23:23:23.711 [INFO][4792] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e" Namespace="calico-system" Pod="goldmane-666569f655-tnnrs" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-goldmane--666569f655--tnnrs-eth0" Nov 23 23:23:24.009279 containerd[1900]: 2025-11-23 23:23:23.753 [INFO][4820] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e" HandleID="k8s-pod-network.36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-goldmane--666569f655--tnnrs-eth0" Nov 23 23:23:24.009418 containerd[1900]: 2025-11-23 23:23:23.753 [INFO][4820] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e" HandleID="k8s-pod-network.36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-goldmane--666569f655--tnnrs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-a-2a92a9cf5f", "pod":"goldmane-666569f655-tnnrs", "timestamp":"2025-11-23 23:23:23.753031224 +0000 UTC"}, Hostname:"ci-4459.2.1-a-2a92a9cf5f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:23:24.009418 containerd[1900]: 2025-11-23 23:23:23.753 [INFO][4820] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:23:24.009418 containerd[1900]: 2025-11-23 23:23:23.917 [INFO][4820] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:23:24.009418 containerd[1900]: 2025-11-23 23:23:23.918 [INFO][4820] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-2a92a9cf5f' Nov 23 23:23:24.009418 containerd[1900]: 2025-11-23 23:23:23.948 [INFO][4820] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:24.009418 containerd[1900]: 2025-11-23 23:23:23.954 [INFO][4820] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:24.009418 containerd[1900]: 2025-11-23 23:23:23.958 [INFO][4820] ipam/ipam.go 511: Trying affinity for 192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:24.009418 containerd[1900]: 2025-11-23 23:23:23.960 [INFO][4820] ipam/ipam.go 158: Attempting to load block cidr=192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:24.009418 containerd[1900]: 2025-11-23 23:23:23.961 [INFO][4820] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:24.009554 containerd[1900]: 2025-11-23 23:23:23.961 [INFO][4820] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.53.192/26 handle="k8s-pod-network.36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:24.009554 containerd[1900]: 2025-11-23 23:23:23.963 [INFO][4820] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e Nov 23 23:23:24.009554 containerd[1900]: 2025-11-23 23:23:23.967 [INFO][4820] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.53.192/26 handle="k8s-pod-network.36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:24.009554 containerd[1900]: 2025-11-23 23:23:23.976 [INFO][4820] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.53.196/26] block=192.168.53.192/26 handle="k8s-pod-network.36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:24.009554 containerd[1900]: 2025-11-23 23:23:23.981 [INFO][4820] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.53.196/26] handle="k8s-pod-network.36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:24.009554 containerd[1900]: 2025-11-23 23:23:23.981 [INFO][4820] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:23:24.009554 containerd[1900]: 2025-11-23 23:23:23.981 [INFO][4820] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.53.196/26] IPv6=[] ContainerID="36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e" HandleID="k8s-pod-network.36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-goldmane--666569f655--tnnrs-eth0" Nov 23 23:23:24.009649 containerd[1900]: 2025-11-23 23:23:23.982 [INFO][4792] cni-plugin/k8s.go 418: Populated endpoint ContainerID="36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e" Namespace="calico-system" Pod="goldmane-666569f655-tnnrs" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-goldmane--666569f655--tnnrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--2a92a9cf5f-k8s-goldmane--666569f655--tnnrs-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"af6e60cf-c463-457e-be42-88e0f43ba038", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-2a92a9cf5f", ContainerID:"", Pod:"goldmane-666569f655-tnnrs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.53.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib2a64f1359a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:24.010388 containerd[1900]: 2025-11-23 23:23:23.982 [INFO][4792] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.53.196/32] ContainerID="36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e" Namespace="calico-system" Pod="goldmane-666569f655-tnnrs" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-goldmane--666569f655--tnnrs-eth0" Nov 23 23:23:24.010388 containerd[1900]: 2025-11-23 23:23:23.982 [INFO][4792] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2a64f1359a ContainerID="36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e" Namespace="calico-system" Pod="goldmane-666569f655-tnnrs" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-goldmane--666569f655--tnnrs-eth0" Nov 23 23:23:24.010388 containerd[1900]: 2025-11-23 23:23:23.988 [INFO][4792] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e" Namespace="calico-system" Pod="goldmane-666569f655-tnnrs" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-goldmane--666569f655--tnnrs-eth0" Nov 23 23:23:24.010447 containerd[1900]: 2025-11-23 23:23:23.989 [INFO][4792] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e" Namespace="calico-system" Pod="goldmane-666569f655-tnnrs" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-goldmane--666569f655--tnnrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--2a92a9cf5f-k8s-goldmane--666569f655--tnnrs-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"af6e60cf-c463-457e-be42-88e0f43ba038", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-2a92a9cf5f", ContainerID:"36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e", Pod:"goldmane-666569f655-tnnrs", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.53.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib2a64f1359a", MAC:"32:81:28:99:56:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:24.010479 containerd[1900]: 2025-11-23 23:23:24.006 [INFO][4792] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e" Namespace="calico-system" Pod="goldmane-666569f655-tnnrs" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-goldmane--666569f655--tnnrs-eth0" Nov 23 23:23:24.029457 systemd[1]: Started cri-containerd-51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213.scope - libcontainer container 51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213. Nov 23 23:23:24.062331 containerd[1900]: time="2025-11-23T23:23:24.062240184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f55996b-nzxhv,Uid:a24d7295-0bba-4952-852f-37a344f80dea,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"51701cdeb052d2611f3b9482f6ecf60631f2fe9728e4b12cdf5d295e852a9213\"" Nov 23 23:23:24.067445 containerd[1900]: time="2025-11-23T23:23:24.067404614Z" level=info msg="connecting to shim 36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e" address="unix:///run/containerd/s/cd44a3c00158c45b1a268d0ce0313d3d9dd1256186e481ac16635beafbe43151" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:24.084429 systemd[1]: Started cri-containerd-36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e.scope - libcontainer container 36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e. 
Nov 23 23:23:24.113374 containerd[1900]: time="2025-11-23T23:23:24.113344584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-tnnrs,Uid:af6e60cf-c463-457e-be42-88e0f43ba038,Namespace:calico-system,Attempt:0,} returns sandbox id \"36a7299721d6ef0c806fd74031d677475ac811b280cf48f846fa5e0da9753d0e\"" Nov 23 23:23:24.150572 containerd[1900]: time="2025-11-23T23:23:24.150515900Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:24.153649 containerd[1900]: time="2025-11-23T23:23:24.153618636Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:23:24.153795 containerd[1900]: time="2025-11-23T23:23:24.153676373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:23:24.153985 kubelet[3389]: E1123 23:23:24.153929 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:23:24.156179 kubelet[3389]: E1123 23:23:24.154248 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:23:24.156179 kubelet[3389]: E1123 23:23:24.155224 3389 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqlfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f55996b-dw8vv_calico-apiserver(a2c796bc-ea26-4f69-bc19-822da4c56dfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:24.156342 containerd[1900]: time="2025-11-23T23:23:24.154646171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:23:24.156586 kubelet[3389]: E1123 23:23:24.156511 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-dw8vv" podUID="a2c796bc-ea26-4f69-bc19-822da4c56dfe" Nov 23 23:23:24.374057 containerd[1900]: 
time="2025-11-23T23:23:24.374017167Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:24.377810 containerd[1900]: time="2025-11-23T23:23:24.377726744Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:23:24.377810 containerd[1900]: time="2025-11-23T23:23:24.377787914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:23:24.378082 kubelet[3389]: E1123 23:23:24.378035 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:23:24.378135 kubelet[3389]: E1123 23:23:24.378090 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:23:24.378585 kubelet[3389]: E1123 23:23:24.378354 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2hkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f55996b-nzxhv_calico-apiserver(a24d7295-0bba-4952-852f-37a344f80dea): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:24.378828 containerd[1900]: time="2025-11-23T23:23:24.378796153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:23:24.379962 kubelet[3389]: E1123 23:23:24.379929 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-nzxhv" podUID="a24d7295-0bba-4952-852f-37a344f80dea" Nov 23 23:23:24.610337 containerd[1900]: time="2025-11-23T23:23:24.610196358Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:24.613411 containerd[1900]: time="2025-11-23T23:23:24.613378576Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:23:24.613551 containerd[1900]: time="2025-11-23T23:23:24.613454986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:23:24.613723 kubelet[3389]: E1123 23:23:24.613637 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:23:24.613723 kubelet[3389]: E1123 23:23:24.613702 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:23:24.613966 kubelet[3389]: E1123 23:23:24.613930 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xxg8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,Su
bPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-tnnrs_calico-system(af6e60cf-c463-457e-be42-88e0f43ba038): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:24.615362 kubelet[3389]: E1123 23:23:24.615321 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnnrs" podUID="af6e60cf-c463-457e-be42-88e0f43ba038" Nov 23 23:23:24.784402 kubelet[3389]: E1123 23:23:24.784261 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnnrs" podUID="af6e60cf-c463-457e-be42-88e0f43ba038" Nov 23 23:23:24.786565 kubelet[3389]: E1123 23:23:24.786408 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-dw8vv" podUID="a2c796bc-ea26-4f69-bc19-822da4c56dfe" Nov 23 23:23:24.787805 kubelet[3389]: E1123 23:23:24.787774 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-nzxhv" 
podUID="a24d7295-0bba-4952-852f-37a344f80dea" Nov 23 23:23:25.234454 systemd-networkd[1479]: cali5c63ce352e0: Gained IPv6LL Nov 23 23:23:25.362426 systemd-networkd[1479]: calib2a64f1359a: Gained IPv6LL Nov 23 23:23:25.362662 systemd-networkd[1479]: caliaa9b5ac0840: Gained IPv6LL Nov 23 23:23:25.637221 containerd[1900]: time="2025-11-23T23:23:25.636995016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69658b8f65-kmrst,Uid:121b3bf3-7703-467a-986e-3619eec56340,Namespace:calico-system,Attempt:0,}" Nov 23 23:23:25.637973 containerd[1900]: time="2025-11-23T23:23:25.637156260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-msbl7,Uid:35c155aa-09f5-410a-80b4-7376a7871f0d,Namespace:kube-system,Attempt:0,}" Nov 23 23:23:25.638151 containerd[1900]: time="2025-11-23T23:23:25.637615075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d6799dd75-7vqw6,Uid:862cceb0-6c7b-4371-aa49-25852268d3a1,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:23:25.775479 systemd-networkd[1479]: cali7e6a6357301: Link UP Nov 23 23:23:25.776228 systemd-networkd[1479]: cali7e6a6357301: Gained carrier Nov 23 23:23:25.791690 kubelet[3389]: E1123 23:23:25.791537 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-dw8vv" podUID="a2c796bc-ea26-4f69-bc19-822da4c56dfe" Nov 23 23:23:25.791690 kubelet[3389]: E1123 23:23:25.791496 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnnrs" podUID="af6e60cf-c463-457e-be42-88e0f43ba038" Nov 23 23:23:25.792554 kubelet[3389]: E1123 23:23:25.791724 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-nzxhv" podUID="a24d7295-0bba-4952-852f-37a344f80dea" Nov 23 23:23:25.792594 containerd[1900]: 2025-11-23 23:23:25.691 [INFO][5003] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--2a92a9cf5f-k8s-calico--kube--controllers--69658b8f65--kmrst-eth0 calico-kube-controllers-69658b8f65- calico-system 121b3bf3-7703-467a-986e-3619eec56340 857 0 2025-11-23 23:23:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69658b8f65 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.1-a-2a92a9cf5f calico-kube-controllers-69658b8f65-kmrst eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7e6a6357301 [] [] }} 
ContainerID="0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978" Namespace="calico-system" Pod="calico-kube-controllers-69658b8f65-kmrst" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--kube--controllers--69658b8f65--kmrst-" Nov 23 23:23:25.792594 containerd[1900]: 2025-11-23 23:23:25.691 [INFO][5003] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978" Namespace="calico-system" Pod="calico-kube-controllers-69658b8f65-kmrst" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--kube--controllers--69658b8f65--kmrst-eth0" Nov 23 23:23:25.792594 containerd[1900]: 2025-11-23 23:23:25.730 [INFO][5038] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978" HandleID="k8s-pod-network.0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--kube--controllers--69658b8f65--kmrst-eth0" Nov 23 23:23:25.792669 containerd[1900]: 2025-11-23 23:23:25.732 [INFO][5038] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978" HandleID="k8s-pod-network.0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--kube--controllers--69658b8f65--kmrst-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-a-2a92a9cf5f", "pod":"calico-kube-controllers-69658b8f65-kmrst", "timestamp":"2025-11-23 23:23:25.730935879 +0000 UTC"}, Hostname:"ci-4459.2.1-a-2a92a9cf5f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:23:25.792669 containerd[1900]: 2025-11-23 
23:23:25.734 [INFO][5038] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:23:25.792669 containerd[1900]: 2025-11-23 23:23:25.734 [INFO][5038] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:23:25.792669 containerd[1900]: 2025-11-23 23:23:25.734 [INFO][5038] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-2a92a9cf5f' Nov 23 23:23:25.792669 containerd[1900]: 2025-11-23 23:23:25.740 [INFO][5038] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:25.792669 containerd[1900]: 2025-11-23 23:23:25.745 [INFO][5038] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:25.792669 containerd[1900]: 2025-11-23 23:23:25.748 [INFO][5038] ipam/ipam.go 511: Trying affinity for 192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:25.792669 containerd[1900]: 2025-11-23 23:23:25.750 [INFO][5038] ipam/ipam.go 158: Attempting to load block cidr=192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:25.792669 containerd[1900]: 2025-11-23 23:23:25.751 [INFO][5038] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:25.794602 containerd[1900]: 2025-11-23 23:23:25.751 [INFO][5038] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.53.192/26 handle="k8s-pod-network.0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:25.794602 containerd[1900]: 2025-11-23 23:23:25.753 [INFO][5038] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978 Nov 23 23:23:25.794602 containerd[1900]: 2025-11-23 23:23:25.761 [INFO][5038] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.53.192/26 
handle="k8s-pod-network.0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:25.794602 containerd[1900]: 2025-11-23 23:23:25.766 [INFO][5038] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.53.197/26] block=192.168.53.192/26 handle="k8s-pod-network.0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:25.794602 containerd[1900]: 2025-11-23 23:23:25.766 [INFO][5038] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.53.197/26] handle="k8s-pod-network.0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:25.794602 containerd[1900]: 2025-11-23 23:23:25.766 [INFO][5038] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:23:25.794602 containerd[1900]: 2025-11-23 23:23:25.766 [INFO][5038] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.53.197/26] IPv6=[] ContainerID="0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978" HandleID="k8s-pod-network.0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--kube--controllers--69658b8f65--kmrst-eth0" Nov 23 23:23:25.794734 containerd[1900]: 2025-11-23 23:23:25.769 [INFO][5003] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978" Namespace="calico-system" Pod="calico-kube-controllers-69658b8f65-kmrst" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--kube--controllers--69658b8f65--kmrst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--2a92a9cf5f-k8s-calico--kube--controllers--69658b8f65--kmrst-eth0", GenerateName:"calico-kube-controllers-69658b8f65-", Namespace:"calico-system", SelfLink:"", UID:"121b3bf3-7703-467a-986e-3619eec56340", 
ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69658b8f65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-2a92a9cf5f", ContainerID:"", Pod:"calico-kube-controllers-69658b8f65-kmrst", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.53.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7e6a6357301", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:25.794826 containerd[1900]: 2025-11-23 23:23:25.769 [INFO][5003] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.53.197/32] ContainerID="0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978" Namespace="calico-system" Pod="calico-kube-controllers-69658b8f65-kmrst" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--kube--controllers--69658b8f65--kmrst-eth0" Nov 23 23:23:25.794826 containerd[1900]: 2025-11-23 23:23:25.769 [INFO][5003] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e6a6357301 ContainerID="0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978" Namespace="calico-system" Pod="calico-kube-controllers-69658b8f65-kmrst" 
WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--kube--controllers--69658b8f65--kmrst-eth0" Nov 23 23:23:25.794826 containerd[1900]: 2025-11-23 23:23:25.776 [INFO][5003] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978" Namespace="calico-system" Pod="calico-kube-controllers-69658b8f65-kmrst" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--kube--controllers--69658b8f65--kmrst-eth0" Nov 23 23:23:25.794904 containerd[1900]: 2025-11-23 23:23:25.777 [INFO][5003] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978" Namespace="calico-system" Pod="calico-kube-controllers-69658b8f65-kmrst" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--kube--controllers--69658b8f65--kmrst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--2a92a9cf5f-k8s-calico--kube--controllers--69658b8f65--kmrst-eth0", GenerateName:"calico-kube-controllers-69658b8f65-", Namespace:"calico-system", SelfLink:"", UID:"121b3bf3-7703-467a-986e-3619eec56340", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69658b8f65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-2a92a9cf5f", 
ContainerID:"0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978", Pod:"calico-kube-controllers-69658b8f65-kmrst", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.53.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7e6a6357301", MAC:"fa:1c:f9:87:73:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:25.794944 containerd[1900]: 2025-11-23 23:23:25.787 [INFO][5003] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978" Namespace="calico-system" Pod="calico-kube-controllers-69658b8f65-kmrst" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--kube--controllers--69658b8f65--kmrst-eth0" Nov 23 23:23:25.869450 containerd[1900]: time="2025-11-23T23:23:25.869413879Z" level=info msg="connecting to shim 0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978" address="unix:///run/containerd/s/cc890ced7cd222f695f2e34335272f15fa9aee4db9e2f436b1d2b479f698c22d" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:25.890597 systemd-networkd[1479]: cali975ecb856be: Link UP Nov 23 23:23:25.892184 systemd-networkd[1479]: cali975ecb856be: Gained carrier Nov 23 23:23:25.907418 systemd[1]: Started cri-containerd-0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978.scope - libcontainer container 0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978. 
Nov 23 23:23:25.912287 containerd[1900]: 2025-11-23 23:23:25.711 [INFO][5022] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--6d6799dd75--7vqw6-eth0 calico-apiserver-6d6799dd75- calico-apiserver 862cceb0-6c7b-4371-aa49-25852268d3a1 854 0 2025-11-23 23:22:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d6799dd75 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.1-a-2a92a9cf5f calico-apiserver-6d6799dd75-7vqw6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali975ecb856be [] [] }} ContainerID="9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377" Namespace="calico-apiserver" Pod="calico-apiserver-6d6799dd75-7vqw6" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--6d6799dd75--7vqw6-" Nov 23 23:23:25.912287 containerd[1900]: 2025-11-23 23:23:25.711 [INFO][5022] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377" Namespace="calico-apiserver" Pod="calico-apiserver-6d6799dd75-7vqw6" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--6d6799dd75--7vqw6-eth0" Nov 23 23:23:25.912287 containerd[1900]: 2025-11-23 23:23:25.737 [INFO][5047] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377" HandleID="k8s-pod-network.9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--6d6799dd75--7vqw6-eth0" Nov 23 23:23:25.912436 containerd[1900]: 2025-11-23 23:23:25.738 [INFO][5047] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377" HandleID="k8s-pod-network.9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--6d6799dd75--7vqw6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.1-a-2a92a9cf5f", "pod":"calico-apiserver-6d6799dd75-7vqw6", "timestamp":"2025-11-23 23:23:25.737962671 +0000 UTC"}, Hostname:"ci-4459.2.1-a-2a92a9cf5f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:23:25.912436 containerd[1900]: 2025-11-23 23:23:25.738 [INFO][5047] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:23:25.912436 containerd[1900]: 2025-11-23 23:23:25.766 [INFO][5047] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:23:25.912436 containerd[1900]: 2025-11-23 23:23:25.766 [INFO][5047] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-2a92a9cf5f' Nov 23 23:23:25.912436 containerd[1900]: 2025-11-23 23:23:25.841 [INFO][5047] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:25.912436 containerd[1900]: 2025-11-23 23:23:25.846 [INFO][5047] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:25.912436 containerd[1900]: 2025-11-23 23:23:25.852 [INFO][5047] ipam/ipam.go 511: Trying affinity for 192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:25.912436 containerd[1900]: 2025-11-23 23:23:25.854 [INFO][5047] ipam/ipam.go 158: Attempting to load block cidr=192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:25.912436 containerd[1900]: 2025-11-23 23:23:25.855 [INFO][5047] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:25.913063 containerd[1900]: 2025-11-23 23:23:25.855 [INFO][5047] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.53.192/26 handle="k8s-pod-network.9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:25.913063 containerd[1900]: 2025-11-23 23:23:25.857 [INFO][5047] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377 Nov 23 23:23:25.913063 containerd[1900]: 2025-11-23 23:23:25.865 [INFO][5047] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.53.192/26 handle="k8s-pod-network.9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:25.913063 containerd[1900]: 2025-11-23 23:23:25.878 [INFO][5047] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.53.198/26] block=192.168.53.192/26 handle="k8s-pod-network.9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:25.913063 containerd[1900]: 2025-11-23 23:23:25.878 [INFO][5047] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.53.198/26] handle="k8s-pod-network.9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:25.913063 containerd[1900]: 2025-11-23 23:23:25.879 [INFO][5047] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:23:25.913063 containerd[1900]: 2025-11-23 23:23:25.879 [INFO][5047] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.53.198/26] IPv6=[] ContainerID="9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377" HandleID="k8s-pod-network.9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--6d6799dd75--7vqw6-eth0" Nov 23 23:23:25.913169 containerd[1900]: 2025-11-23 23:23:25.883 [INFO][5022] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377" Namespace="calico-apiserver" Pod="calico-apiserver-6d6799dd75-7vqw6" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--6d6799dd75--7vqw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--6d6799dd75--7vqw6-eth0", GenerateName:"calico-apiserver-6d6799dd75-", Namespace:"calico-apiserver", SelfLink:"", UID:"862cceb0-6c7b-4371-aa49-25852268d3a1", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 22, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d6799dd75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-2a92a9cf5f", ContainerID:"", Pod:"calico-apiserver-6d6799dd75-7vqw6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali975ecb856be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:25.913589 containerd[1900]: 2025-11-23 23:23:25.883 [INFO][5022] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.53.198/32] ContainerID="9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377" Namespace="calico-apiserver" Pod="calico-apiserver-6d6799dd75-7vqw6" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--6d6799dd75--7vqw6-eth0" Nov 23 23:23:25.913589 containerd[1900]: 2025-11-23 23:23:25.883 [INFO][5022] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali975ecb856be ContainerID="9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377" Namespace="calico-apiserver" Pod="calico-apiserver-6d6799dd75-7vqw6" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--6d6799dd75--7vqw6-eth0" Nov 23 23:23:25.913589 containerd[1900]: 2025-11-23 23:23:25.893 [INFO][5022] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377" Namespace="calico-apiserver" 
Pod="calico-apiserver-6d6799dd75-7vqw6" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--6d6799dd75--7vqw6-eth0" Nov 23 23:23:25.913683 containerd[1900]: 2025-11-23 23:23:25.894 [INFO][5022] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377" Namespace="calico-apiserver" Pod="calico-apiserver-6d6799dd75-7vqw6" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--6d6799dd75--7vqw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--6d6799dd75--7vqw6-eth0", GenerateName:"calico-apiserver-6d6799dd75-", Namespace:"calico-apiserver", SelfLink:"", UID:"862cceb0-6c7b-4371-aa49-25852268d3a1", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 22, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d6799dd75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-2a92a9cf5f", ContainerID:"9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377", Pod:"calico-apiserver-6d6799dd75-7vqw6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali975ecb856be", MAC:"12:15:79:e3:50:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:25.913722 containerd[1900]: 2025-11-23 23:23:25.909 [INFO][5022] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377" Namespace="calico-apiserver" Pod="calico-apiserver-6d6799dd75-7vqw6" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-calico--apiserver--6d6799dd75--7vqw6-eth0" Nov 23 23:23:25.957565 containerd[1900]: time="2025-11-23T23:23:25.957366790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69658b8f65-kmrst,Uid:121b3bf3-7703-467a-986e-3619eec56340,Namespace:calico-system,Attempt:0,} returns sandbox id \"0146e4bf7dd26fcdcfb59b5f26c3b332dd3df652cc7dfe0c2ed8d52282208978\"" Nov 23 23:23:25.960150 containerd[1900]: time="2025-11-23T23:23:25.960047664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:23:25.963170 containerd[1900]: time="2025-11-23T23:23:25.963133687Z" level=info msg="connecting to shim 9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377" address="unix:///run/containerd/s/c0204d63c697bedd6b2d174e7ea65cf19042b17542ec6503c4ed002cfba4873b" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:25.991555 systemd[1]: Started cri-containerd-9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377.scope - libcontainer container 9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377. 
Nov 23 23:23:26.002227 systemd-networkd[1479]: cali63fac9aad73: Link UP Nov 23 23:23:26.004116 systemd-networkd[1479]: cali63fac9aad73: Gained carrier Nov 23 23:23:26.022775 containerd[1900]: 2025-11-23 23:23:25.706 [INFO][5012] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--msbl7-eth0 coredns-668d6bf9bc- kube-system 35c155aa-09f5-410a-80b4-7376a7871f0d 849 0 2025-11-23 23:22:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.1-a-2a92a9cf5f coredns-668d6bf9bc-msbl7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali63fac9aad73 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-msbl7" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--msbl7-" Nov 23 23:23:26.022775 containerd[1900]: 2025-11-23 23:23:25.706 [INFO][5012] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-msbl7" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--msbl7-eth0" Nov 23 23:23:26.022775 containerd[1900]: 2025-11-23 23:23:25.742 [INFO][5045] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e" HandleID="k8s-pod-network.5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--msbl7-eth0" Nov 23 23:23:26.022933 containerd[1900]: 2025-11-23 23:23:25.742 [INFO][5045] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e" HandleID="k8s-pod-network.5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--msbl7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.1-a-2a92a9cf5f", "pod":"coredns-668d6bf9bc-msbl7", "timestamp":"2025-11-23 23:23:25.742287668 +0000 UTC"}, Hostname:"ci-4459.2.1-a-2a92a9cf5f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:23:26.022933 containerd[1900]: 2025-11-23 23:23:25.742 [INFO][5045] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:23:26.022933 containerd[1900]: 2025-11-23 23:23:25.881 [INFO][5045] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:23:26.022933 containerd[1900]: 2025-11-23 23:23:25.881 [INFO][5045] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-2a92a9cf5f' Nov 23 23:23:26.022933 containerd[1900]: 2025-11-23 23:23:25.941 [INFO][5045] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:26.022933 containerd[1900]: 2025-11-23 23:23:25.947 [INFO][5045] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:26.022933 containerd[1900]: 2025-11-23 23:23:25.956 [INFO][5045] ipam/ipam.go 511: Trying affinity for 192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:26.022933 containerd[1900]: 2025-11-23 23:23:25.960 [INFO][5045] ipam/ipam.go 158: Attempting to load block cidr=192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:26.022933 containerd[1900]: 2025-11-23 23:23:25.965 [INFO][5045] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:26.023070 containerd[1900]: 2025-11-23 23:23:25.965 [INFO][5045] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.53.192/26 handle="k8s-pod-network.5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:26.023070 containerd[1900]: 2025-11-23 23:23:25.967 [INFO][5045] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e Nov 23 23:23:26.023070 containerd[1900]: 2025-11-23 23:23:25.978 [INFO][5045] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.53.192/26 handle="k8s-pod-network.5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:26.023070 containerd[1900]: 2025-11-23 23:23:25.989 [INFO][5045] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.53.199/26] block=192.168.53.192/26 handle="k8s-pod-network.5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:26.023070 containerd[1900]: 2025-11-23 23:23:25.989 [INFO][5045] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.53.199/26] handle="k8s-pod-network.5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:26.023070 containerd[1900]: 2025-11-23 23:23:25.989 [INFO][5045] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:23:26.023070 containerd[1900]: 2025-11-23 23:23:25.989 [INFO][5045] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.53.199/26] IPv6=[] ContainerID="5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e" HandleID="k8s-pod-network.5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--msbl7-eth0" Nov 23 23:23:26.023167 containerd[1900]: 2025-11-23 23:23:25.998 [INFO][5012] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-msbl7" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--msbl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--msbl7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"35c155aa-09f5-410a-80b4-7376a7871f0d", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 22, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-2a92a9cf5f", ContainerID:"", Pod:"coredns-668d6bf9bc-msbl7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63fac9aad73", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:26.023167 containerd[1900]: 2025-11-23 23:23:25.998 [INFO][5012] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.53.199/32] ContainerID="5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-msbl7" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--msbl7-eth0" Nov 23 23:23:26.023167 containerd[1900]: 2025-11-23 23:23:25.998 [INFO][5012] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63fac9aad73 ContainerID="5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-msbl7" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--msbl7-eth0" Nov 23 23:23:26.023167 containerd[1900]: 2025-11-23 23:23:26.003 [INFO][5012] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-msbl7" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--msbl7-eth0" Nov 23 23:23:26.023167 containerd[1900]: 2025-11-23 23:23:26.004 [INFO][5012] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-msbl7" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--msbl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--msbl7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"35c155aa-09f5-410a-80b4-7376a7871f0d", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 22, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-2a92a9cf5f", ContainerID:"5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e", Pod:"coredns-668d6bf9bc-msbl7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63fac9aad73", 
MAC:"9a:88:f7:0a:dc:af", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:26.023167 containerd[1900]: 2025-11-23 23:23:26.018 [INFO][5012] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e" Namespace="kube-system" Pod="coredns-668d6bf9bc-msbl7" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--msbl7-eth0" Nov 23 23:23:26.071789 containerd[1900]: time="2025-11-23T23:23:26.071545555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d6799dd75-7vqw6,Uid:862cceb0-6c7b-4371-aa49-25852268d3a1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9e249b6c6a25ff3ce5188bd6283d7b0e8b2d2cf969b41ed9755274e3c3f58377\"" Nov 23 23:23:26.077131 containerd[1900]: time="2025-11-23T23:23:26.076973930Z" level=info msg="connecting to shim 5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e" address="unix:///run/containerd/s/ec83be13e24d29594469d41853f9ec250677aef96cb92cbb609a6e435fd5c189" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:26.101462 systemd[1]: Started cri-containerd-5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e.scope - libcontainer container 5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e. 
Nov 23 23:23:26.131559 containerd[1900]: time="2025-11-23T23:23:26.131515782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-msbl7,Uid:35c155aa-09f5-410a-80b4-7376a7871f0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e\"" Nov 23 23:23:26.134208 containerd[1900]: time="2025-11-23T23:23:26.134180104Z" level=info msg="CreateContainer within sandbox \"5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:23:26.155482 containerd[1900]: time="2025-11-23T23:23:26.155452694Z" level=info msg="Container 495dd8e95bfe6b2b502474a8f12adede01f5641ea07be35d29ec3015f221aa22: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:23:26.172852 containerd[1900]: time="2025-11-23T23:23:26.172777290Z" level=info msg="CreateContainer within sandbox \"5b3c73c03af64828552b8929467a223c68a6f2500565c47b06232b4097117e5e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"495dd8e95bfe6b2b502474a8f12adede01f5641ea07be35d29ec3015f221aa22\"" Nov 23 23:23:26.173156 containerd[1900]: time="2025-11-23T23:23:26.173117829Z" level=info msg="StartContainer for \"495dd8e95bfe6b2b502474a8f12adede01f5641ea07be35d29ec3015f221aa22\"" Nov 23 23:23:26.174168 containerd[1900]: time="2025-11-23T23:23:26.174146604Z" level=info msg="connecting to shim 495dd8e95bfe6b2b502474a8f12adede01f5641ea07be35d29ec3015f221aa22" address="unix:///run/containerd/s/ec83be13e24d29594469d41853f9ec250677aef96cb92cbb609a6e435fd5c189" protocol=ttrpc version=3 Nov 23 23:23:26.188413 systemd[1]: Started cri-containerd-495dd8e95bfe6b2b502474a8f12adede01f5641ea07be35d29ec3015f221aa22.scope - libcontainer container 495dd8e95bfe6b2b502474a8f12adede01f5641ea07be35d29ec3015f221aa22. 
Nov 23 23:23:26.209413 containerd[1900]: time="2025-11-23T23:23:26.209390623Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:26.212934 containerd[1900]: time="2025-11-23T23:23:26.212885027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:23:26.213185 containerd[1900]: time="2025-11-23T23:23:26.213058144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:23:26.213528 kubelet[3389]: E1123 23:23:26.213365 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:23:26.213528 kubelet[3389]: E1123 23:23:26.213509 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:23:26.214344 kubelet[3389]: E1123 23:23:26.214188 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prb76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-69658b8f65-kmrst_calico-system(121b3bf3-7703-467a-986e-3619eec56340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:26.214879 containerd[1900]: time="2025-11-23T23:23:26.214261405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:23:26.215375 kubelet[3389]: E1123 23:23:26.215265 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69658b8f65-kmrst" podUID="121b3bf3-7703-467a-986e-3619eec56340" Nov 23 23:23:26.223995 containerd[1900]: 
time="2025-11-23T23:23:26.223913550Z" level=info msg="StartContainer for \"495dd8e95bfe6b2b502474a8f12adede01f5641ea07be35d29ec3015f221aa22\" returns successfully" Nov 23 23:23:26.478199 containerd[1900]: time="2025-11-23T23:23:26.477938269Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:26.481163 containerd[1900]: time="2025-11-23T23:23:26.481076325Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:23:26.481163 containerd[1900]: time="2025-11-23T23:23:26.481136959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:23:26.481362 kubelet[3389]: E1123 23:23:26.481290 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:23:26.481403 kubelet[3389]: E1123 23:23:26.481365 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:23:26.482734 kubelet[3389]: E1123 23:23:26.481479 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6nmjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-6d6799dd75-7vqw6_calico-apiserver(862cceb0-6c7b-4371-aa49-25852268d3a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:26.483857 kubelet[3389]: E1123 23:23:26.483826 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d6799dd75-7vqw6" podUID="862cceb0-6c7b-4371-aa49-25852268d3a1" Nov 23 23:23:26.797432 kubelet[3389]: E1123 23:23:26.797147 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69658b8f65-kmrst" podUID="121b3bf3-7703-467a-986e-3619eec56340" Nov 23 23:23:26.802326 kubelet[3389]: E1123 23:23:26.802181 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d6799dd75-7vqw6" podUID="862cceb0-6c7b-4371-aa49-25852268d3a1" Nov 23 23:23:26.837688 kubelet[3389]: I1123 23:23:26.837645 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-msbl7" podStartSLOduration=40.837633210999996 podStartE2EDuration="40.837633211s" podCreationTimestamp="2025-11-23 23:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:23:26.836524265 +0000 UTC m=+47.260540164" watchObservedRunningTime="2025-11-23 23:23:26.837633211 +0000 UTC m=+47.261649086" Nov 23 23:23:27.090473 systemd-networkd[1479]: cali975ecb856be: Gained IPv6LL Nov 23 23:23:27.091264 systemd-networkd[1479]: cali63fac9aad73: Gained IPv6LL Nov 23 23:23:27.410422 systemd-networkd[1479]: cali7e6a6357301: Gained IPv6LL Nov 23 23:23:27.636424 containerd[1900]: time="2025-11-23T23:23:27.636385183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7tjz6,Uid:55b22252-8f3d-48bd-88cb-ddab5e9d791f,Namespace:calico-system,Attempt:0,}" Nov 23 23:23:27.636953 containerd[1900]: time="2025-11-23T23:23:27.636806452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6mrr8,Uid:c83ee34d-1893-4fb7-89f8-9378dfe640fb,Namespace:kube-system,Attempt:0,}" Nov 23 23:23:27.741647 systemd-networkd[1479]: cali788f5719374: Link UP Nov 23 23:23:27.741805 systemd-networkd[1479]: cali788f5719374: Gained carrier Nov 23 23:23:27.754935 containerd[1900]: 2025-11-23 23:23:27.680 [INFO][5271] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--2a92a9cf5f-k8s-csi--node--driver--7tjz6-eth0 csi-node-driver- calico-system 55b22252-8f3d-48bd-88cb-ddab5e9d791f 734 0 2025-11-23 
23:23:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.1-a-2a92a9cf5f csi-node-driver-7tjz6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali788f5719374 [] [] }} ContainerID="a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229" Namespace="calico-system" Pod="csi-node-driver-7tjz6" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-csi--node--driver--7tjz6-" Nov 23 23:23:27.754935 containerd[1900]: 2025-11-23 23:23:27.680 [INFO][5271] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229" Namespace="calico-system" Pod="csi-node-driver-7tjz6" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-csi--node--driver--7tjz6-eth0" Nov 23 23:23:27.754935 containerd[1900]: 2025-11-23 23:23:27.703 [INFO][5294] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229" HandleID="k8s-pod-network.a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-csi--node--driver--7tjz6-eth0" Nov 23 23:23:27.754935 containerd[1900]: 2025-11-23 23:23:27.703 [INFO][5294] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229" HandleID="k8s-pod-network.a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-csi--node--driver--7tjz6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cafe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.1-a-2a92a9cf5f", "pod":"csi-node-driver-7tjz6", 
"timestamp":"2025-11-23 23:23:27.703258687 +0000 UTC"}, Hostname:"ci-4459.2.1-a-2a92a9cf5f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:23:27.754935 containerd[1900]: 2025-11-23 23:23:27.703 [INFO][5294] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:23:27.754935 containerd[1900]: 2025-11-23 23:23:27.703 [INFO][5294] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:23:27.754935 containerd[1900]: 2025-11-23 23:23:27.703 [INFO][5294] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-2a92a9cf5f' Nov 23 23:23:27.754935 containerd[1900]: 2025-11-23 23:23:27.709 [INFO][5294] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:27.754935 containerd[1900]: 2025-11-23 23:23:27.712 [INFO][5294] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:27.754935 containerd[1900]: 2025-11-23 23:23:27.716 [INFO][5294] ipam/ipam.go 511: Trying affinity for 192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:27.754935 containerd[1900]: 2025-11-23 23:23:27.718 [INFO][5294] ipam/ipam.go 158: Attempting to load block cidr=192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:27.754935 containerd[1900]: 2025-11-23 23:23:27.719 [INFO][5294] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:27.754935 containerd[1900]: 2025-11-23 23:23:27.719 [INFO][5294] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.53.192/26 handle="k8s-pod-network.a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 
23:23:27.754935 containerd[1900]: 2025-11-23 23:23:27.721 [INFO][5294] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229 Nov 23 23:23:27.754935 containerd[1900]: 2025-11-23 23:23:27.725 [INFO][5294] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.53.192/26 handle="k8s-pod-network.a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:27.754935 containerd[1900]: 2025-11-23 23:23:27.733 [INFO][5294] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.53.200/26] block=192.168.53.192/26 handle="k8s-pod-network.a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:27.754935 containerd[1900]: 2025-11-23 23:23:27.733 [INFO][5294] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.53.200/26] handle="k8s-pod-network.a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:27.754935 containerd[1900]: 2025-11-23 23:23:27.733 [INFO][5294] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:23:27.754935 containerd[1900]: 2025-11-23 23:23:27.733 [INFO][5294] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.53.200/26] IPv6=[] ContainerID="a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229" HandleID="k8s-pod-network.a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-csi--node--driver--7tjz6-eth0" Nov 23 23:23:27.755764 containerd[1900]: 2025-11-23 23:23:27.737 [INFO][5271] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229" Namespace="calico-system" Pod="csi-node-driver-7tjz6" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-csi--node--driver--7tjz6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--2a92a9cf5f-k8s-csi--node--driver--7tjz6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"55b22252-8f3d-48bd-88cb-ddab5e9d791f", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-2a92a9cf5f", ContainerID:"", Pod:"csi-node-driver-7tjz6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.53.200/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali788f5719374", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:27.755764 containerd[1900]: 2025-11-23 23:23:27.737 [INFO][5271] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.53.200/32] ContainerID="a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229" Namespace="calico-system" Pod="csi-node-driver-7tjz6" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-csi--node--driver--7tjz6-eth0" Nov 23 23:23:27.755764 containerd[1900]: 2025-11-23 23:23:27.737 [INFO][5271] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali788f5719374 ContainerID="a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229" Namespace="calico-system" Pod="csi-node-driver-7tjz6" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-csi--node--driver--7tjz6-eth0" Nov 23 23:23:27.755764 containerd[1900]: 2025-11-23 23:23:27.742 [INFO][5271] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229" Namespace="calico-system" Pod="csi-node-driver-7tjz6" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-csi--node--driver--7tjz6-eth0" Nov 23 23:23:27.755764 containerd[1900]: 2025-11-23 23:23:27.742 [INFO][5271] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229" Namespace="calico-system" Pod="csi-node-driver-7tjz6" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-csi--node--driver--7tjz6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--2a92a9cf5f-k8s-csi--node--driver--7tjz6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", 
SelfLink:"", UID:"55b22252-8f3d-48bd-88cb-ddab5e9d791f", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 23, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-2a92a9cf5f", ContainerID:"a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229", Pod:"csi-node-driver-7tjz6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.53.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali788f5719374", MAC:"d6:ed:cc:84:f8:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:27.755764 containerd[1900]: 2025-11-23 23:23:27.752 [INFO][5271] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229" Namespace="calico-system" Pod="csi-node-driver-7tjz6" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-csi--node--driver--7tjz6-eth0" Nov 23 23:23:27.802312 containerd[1900]: time="2025-11-23T23:23:27.802267737Z" level=info msg="connecting to shim a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229" address="unix:///run/containerd/s/2afdd0e86581e0f5207f0227cf6ffd3f05f3212a8a84e2ee4c522eff5e8cd43a" namespace=k8s.io protocol=ttrpc 
version=3 Nov 23 23:23:27.806979 kubelet[3389]: E1123 23:23:27.806804 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69658b8f65-kmrst" podUID="121b3bf3-7703-467a-986e-3619eec56340" Nov 23 23:23:27.808669 kubelet[3389]: E1123 23:23:27.808619 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d6799dd75-7vqw6" podUID="862cceb0-6c7b-4371-aa49-25852268d3a1" Nov 23 23:23:27.847457 systemd[1]: Started cri-containerd-a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229.scope - libcontainer container a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229. 
Nov 23 23:23:27.879647 systemd-networkd[1479]: calibf9cbfedea4: Link UP Nov 23 23:23:27.880339 systemd-networkd[1479]: calibf9cbfedea4: Gained carrier Nov 23 23:23:27.898726 containerd[1900]: 2025-11-23 23:23:27.684 [INFO][5281] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--6mrr8-eth0 coredns-668d6bf9bc- kube-system c83ee34d-1893-4fb7-89f8-9378dfe640fb 853 0 2025-11-23 23:22:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.1-a-2a92a9cf5f coredns-668d6bf9bc-6mrr8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibf9cbfedea4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8" Namespace="kube-system" Pod="coredns-668d6bf9bc-6mrr8" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--6mrr8-" Nov 23 23:23:27.898726 containerd[1900]: 2025-11-23 23:23:27.685 [INFO][5281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8" Namespace="kube-system" Pod="coredns-668d6bf9bc-6mrr8" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--6mrr8-eth0" Nov 23 23:23:27.898726 containerd[1900]: 2025-11-23 23:23:27.709 [INFO][5299] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8" HandleID="k8s-pod-network.9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--6mrr8-eth0" Nov 23 23:23:27.898726 containerd[1900]: 2025-11-23 23:23:27.709 [INFO][5299] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8" HandleID="k8s-pod-network.9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--6mrr8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb5a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.1-a-2a92a9cf5f", "pod":"coredns-668d6bf9bc-6mrr8", "timestamp":"2025-11-23 23:23:27.709131539 +0000 UTC"}, Hostname:"ci-4459.2.1-a-2a92a9cf5f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:23:27.898726 containerd[1900]: 2025-11-23 23:23:27.709 [INFO][5299] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:23:27.898726 containerd[1900]: 2025-11-23 23:23:27.733 [INFO][5299] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:23:27.898726 containerd[1900]: 2025-11-23 23:23:27.734 [INFO][5299] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.1-a-2a92a9cf5f' Nov 23 23:23:27.898726 containerd[1900]: 2025-11-23 23:23:27.811 [INFO][5299] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:27.898726 containerd[1900]: 2025-11-23 23:23:27.830 [INFO][5299] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:27.898726 containerd[1900]: 2025-11-23 23:23:27.843 [INFO][5299] ipam/ipam.go 511: Trying affinity for 192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:27.898726 containerd[1900]: 2025-11-23 23:23:27.851 [INFO][5299] ipam/ipam.go 158: Attempting to load block cidr=192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:27.898726 containerd[1900]: 2025-11-23 23:23:27.855 [INFO][5299] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.53.192/26 host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:27.898726 containerd[1900]: 2025-11-23 23:23:27.855 [INFO][5299] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.53.192/26 handle="k8s-pod-network.9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:27.898726 containerd[1900]: 2025-11-23 23:23:27.857 [INFO][5299] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8 Nov 23 23:23:27.898726 containerd[1900]: 2025-11-23 23:23:27.862 [INFO][5299] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.53.192/26 handle="k8s-pod-network.9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:27.898726 containerd[1900]: 2025-11-23 23:23:27.871 [INFO][5299] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.53.201/26] block=192.168.53.192/26 handle="k8s-pod-network.9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:27.898726 containerd[1900]: 2025-11-23 23:23:27.872 [INFO][5299] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.53.201/26] handle="k8s-pod-network.9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8" host="ci-4459.2.1-a-2a92a9cf5f" Nov 23 23:23:27.898726 containerd[1900]: 2025-11-23 23:23:27.872 [INFO][5299] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:23:27.898726 containerd[1900]: 2025-11-23 23:23:27.872 [INFO][5299] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.53.201/26] IPv6=[] ContainerID="9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8" HandleID="k8s-pod-network.9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8" Workload="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--6mrr8-eth0" Nov 23 23:23:27.900582 containerd[1900]: 2025-11-23 23:23:27.875 [INFO][5281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8" Namespace="kube-system" Pod="coredns-668d6bf9bc-6mrr8" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--6mrr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--6mrr8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c83ee34d-1893-4fb7-89f8-9378dfe640fb", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 22, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-2a92a9cf5f", ContainerID:"", Pod:"coredns-668d6bf9bc-6mrr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf9cbfedea4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:27.900582 containerd[1900]: 2025-11-23 23:23:27.875 [INFO][5281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.53.201/32] ContainerID="9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8" Namespace="kube-system" Pod="coredns-668d6bf9bc-6mrr8" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--6mrr8-eth0" Nov 23 23:23:27.900582 containerd[1900]: 2025-11-23 23:23:27.875 [INFO][5281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf9cbfedea4 ContainerID="9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8" Namespace="kube-system" Pod="coredns-668d6bf9bc-6mrr8" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--6mrr8-eth0" Nov 23 23:23:27.900582 containerd[1900]: 2025-11-23 23:23:27.881 [INFO][5281] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8" Namespace="kube-system" Pod="coredns-668d6bf9bc-6mrr8" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--6mrr8-eth0" Nov 23 23:23:27.900582 containerd[1900]: 2025-11-23 23:23:27.882 [INFO][5281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8" Namespace="kube-system" Pod="coredns-668d6bf9bc-6mrr8" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--6mrr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--6mrr8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c83ee34d-1893-4fb7-89f8-9378dfe640fb", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 22, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.1-a-2a92a9cf5f", ContainerID:"9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8", Pod:"coredns-668d6bf9bc-6mrr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf9cbfedea4", 
MAC:"86:08:6b:1e:3f:37", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:23:27.900582 containerd[1900]: 2025-11-23 23:23:27.894 [INFO][5281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8" Namespace="kube-system" Pod="coredns-668d6bf9bc-6mrr8" WorkloadEndpoint="ci--4459.2.1--a--2a92a9cf5f-k8s-coredns--668d6bf9bc--6mrr8-eth0" Nov 23 23:23:27.922861 containerd[1900]: time="2025-11-23T23:23:27.922800330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7tjz6,Uid:55b22252-8f3d-48bd-88cb-ddab5e9d791f,Namespace:calico-system,Attempt:0,} returns sandbox id \"a596eb81194aa54e3d75093b6a97f4939e353777c52f915a762b9cea5f4dd229\"" Nov 23 23:23:27.925093 containerd[1900]: time="2025-11-23T23:23:27.924931683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:23:27.957708 containerd[1900]: time="2025-11-23T23:23:27.957680234Z" level=info msg="connecting to shim 9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8" address="unix:///run/containerd/s/b7e77ae8bc9c6b26c3f76c2d53a763e8980f7c5a5bc58cf33bc812039f10b928" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:23:27.991629 systemd[1]: Started cri-containerd-9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8.scope - libcontainer container 9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8. 
Nov 23 23:23:28.032873 containerd[1900]: time="2025-11-23T23:23:28.032847896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6mrr8,Uid:c83ee34d-1893-4fb7-89f8-9378dfe640fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8\"" Nov 23 23:23:28.035608 containerd[1900]: time="2025-11-23T23:23:28.035582292Z" level=info msg="CreateContainer within sandbox \"9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:23:28.055577 containerd[1900]: time="2025-11-23T23:23:28.055178678Z" level=info msg="Container 591b6e6731cf62238cb8ce7e1aeca17997eb69a246592f28c55f7344551b1157: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:23:28.072016 containerd[1900]: time="2025-11-23T23:23:28.071987355Z" level=info msg="CreateContainer within sandbox \"9b4609b64c7a9578aa77f81c5fe098085c0eb6ee6152ee2841c9a129285de7c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"591b6e6731cf62238cb8ce7e1aeca17997eb69a246592f28c55f7344551b1157\"" Nov 23 23:23:28.072494 containerd[1900]: time="2025-11-23T23:23:28.072435761Z" level=info msg="StartContainer for \"591b6e6731cf62238cb8ce7e1aeca17997eb69a246592f28c55f7344551b1157\"" Nov 23 23:23:28.073278 containerd[1900]: time="2025-11-23T23:23:28.073247986Z" level=info msg="connecting to shim 591b6e6731cf62238cb8ce7e1aeca17997eb69a246592f28c55f7344551b1157" address="unix:///run/containerd/s/b7e77ae8bc9c6b26c3f76c2d53a763e8980f7c5a5bc58cf33bc812039f10b928" protocol=ttrpc version=3 Nov 23 23:23:28.091399 systemd[1]: Started cri-containerd-591b6e6731cf62238cb8ce7e1aeca17997eb69a246592f28c55f7344551b1157.scope - libcontainer container 591b6e6731cf62238cb8ce7e1aeca17997eb69a246592f28c55f7344551b1157. 
Nov 23 23:23:28.118122 containerd[1900]: time="2025-11-23T23:23:28.118095788Z" level=info msg="StartContainer for \"591b6e6731cf62238cb8ce7e1aeca17997eb69a246592f28c55f7344551b1157\" returns successfully" Nov 23 23:23:28.159355 containerd[1900]: time="2025-11-23T23:23:28.159329943Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:28.163258 containerd[1900]: time="2025-11-23T23:23:28.163184358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:23:28.163258 containerd[1900]: time="2025-11-23T23:23:28.163224895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:23:28.163466 kubelet[3389]: E1123 23:23:28.163438 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:23:28.163567 kubelet[3389]: E1123 23:23:28.163554 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:23:28.163750 kubelet[3389]: E1123 23:23:28.163718 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfz2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7tjz6_calico-system(55b22252-8f3d-48bd-88cb-ddab5e9d791f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:28.166663 containerd[1900]: time="2025-11-23T23:23:28.166644016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:23:28.401587 containerd[1900]: time="2025-11-23T23:23:28.401539875Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:28.404478 containerd[1900]: time="2025-11-23T23:23:28.404442180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:23:28.404557 containerd[1900]: time="2025-11-23T23:23:28.404517327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:23:28.405093 kubelet[3389]: E1123 23:23:28.404843 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:23:28.405093 kubelet[3389]: E1123 23:23:28.404981 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:23:28.405712 kubelet[3389]: E1123 
23:23:28.405670 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfz2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-7tjz6_calico-system(55b22252-8f3d-48bd-88cb-ddab5e9d791f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:28.406858 kubelet[3389]: E1123 23:23:28.406810 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f" Nov 23 23:23:28.434676 kubelet[3389]: I1123 23:23:28.434603 3389 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 23:23:28.813605 kubelet[3389]: E1123 23:23:28.813410 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f" Nov 23 23:23:28.873249 kubelet[3389]: I1123 23:23:28.873200 3389 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6mrr8" podStartSLOduration=42.873185954 podStartE2EDuration="42.873185954s" podCreationTimestamp="2025-11-23 23:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:23:28.82338608 +0000 UTC m=+49.247401955" watchObservedRunningTime="2025-11-23 23:23:28.873185954 +0000 UTC m=+49.297201822" Nov 23 23:23:29.394454 systemd-networkd[1479]: cali788f5719374: Gained IPv6LL Nov 23 23:23:29.394725 systemd-networkd[1479]: calibf9cbfedea4: Gained IPv6LL Nov 23 23:23:29.818136 kubelet[3389]: E1123 23:23:29.818001 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f" Nov 23 23:23:36.636684 containerd[1900]: time="2025-11-23T23:23:36.636646662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:23:36.898170 containerd[1900]: time="2025-11-23T23:23:36.898123924Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:36.900915 containerd[1900]: time="2025-11-23T23:23:36.900881391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:23:36.901058 containerd[1900]: time="2025-11-23T23:23:36.900900216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:23:36.901104 kubelet[3389]: E1123 23:23:36.901058 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:23:36.901432 kubelet[3389]: E1123 23:23:36.901112 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 
23:23:36.901432 kubelet[3389]: E1123 23:23:36.901208 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:242e1aff532942968466fb3afe4b9750,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g64k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-697f894766-kwrfw_calico-system(76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:36.903651 containerd[1900]: 
time="2025-11-23T23:23:36.903627706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:23:37.179980 containerd[1900]: time="2025-11-23T23:23:37.179847100Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:37.183346 containerd[1900]: time="2025-11-23T23:23:37.183307724Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:23:37.183426 containerd[1900]: time="2025-11-23T23:23:37.183316341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:23:37.183584 kubelet[3389]: E1123 23:23:37.183518 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:23:37.183893 kubelet[3389]: E1123 23:23:37.183571 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:23:37.183893 kubelet[3389]: E1123 23:23:37.183690 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g64k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-697f894766-kwrfw_calico-system(76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:37.185315 kubelet[3389]: E1123 23:23:37.185142 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-697f894766-kwrfw" podUID="76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7" Nov 23 23:23:37.637099 containerd[1900]: time="2025-11-23T23:23:37.636912583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:23:37.930445 containerd[1900]: time="2025-11-23T23:23:37.930286950Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:37.933577 containerd[1900]: time="2025-11-23T23:23:37.933485926Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:23:37.933577 containerd[1900]: time="2025-11-23T23:23:37.933501863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:23:37.933767 
kubelet[3389]: E1123 23:23:37.933680 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:23:37.934311 kubelet[3389]: E1123 23:23:37.934040 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:23:37.934433 kubelet[3389]: E1123 23:23:37.934398 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2hkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f55996b-nzxhv_calico-apiserver(a24d7295-0bba-4952-852f-37a344f80dea): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:37.935626 kubelet[3389]: E1123 23:23:37.935589 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-nzxhv" podUID="a24d7295-0bba-4952-852f-37a344f80dea" Nov 23 23:23:38.637631 containerd[1900]: time="2025-11-23T23:23:38.637591925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:23:38.871304 containerd[1900]: time="2025-11-23T23:23:38.871214667Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:38.874421 containerd[1900]: time="2025-11-23T23:23:38.874393251Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:23:38.874570 containerd[1900]: time="2025-11-23T23:23:38.874452125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:23:38.874601 kubelet[3389]: E1123 23:23:38.874535 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:23:38.874601 kubelet[3389]: E1123 23:23:38.874575 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:23:38.874813 kubelet[3389]: E1123 23:23:38.874695 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqlfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f55996b-dw8vv_calico-apiserver(a2c796bc-ea26-4f69-bc19-822da4c56dfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:38.876064 kubelet[3389]: E1123 23:23:38.876030 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-dw8vv" podUID="a2c796bc-ea26-4f69-bc19-822da4c56dfe" Nov 23 23:23:40.640458 containerd[1900]: time="2025-11-23T23:23:40.639405854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:23:40.884982 containerd[1900]: 
time="2025-11-23T23:23:40.884857113Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:40.890070 containerd[1900]: time="2025-11-23T23:23:40.890033565Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:23:40.890248 containerd[1900]: time="2025-11-23T23:23:40.890043485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:23:40.890522 kubelet[3389]: E1123 23:23:40.890379 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:23:40.890522 kubelet[3389]: E1123 23:23:40.890453 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:23:40.891414 kubelet[3389]: E1123 23:23:40.890719 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6nmjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d6799dd75-7vqw6_calico-apiserver(862cceb0-6c7b-4371-aa49-25852268d3a1): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:40.891508 containerd[1900]: time="2025-11-23T23:23:40.890865110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:23:40.892923 kubelet[3389]: E1123 23:23:40.892153 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d6799dd75-7vqw6" podUID="862cceb0-6c7b-4371-aa49-25852268d3a1" Nov 23 23:23:41.140945 containerd[1900]: time="2025-11-23T23:23:41.140644539Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:41.143884 containerd[1900]: time="2025-11-23T23:23:41.143853076Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:23:41.143946 containerd[1900]: time="2025-11-23T23:23:41.143930782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:23:41.144099 kubelet[3389]: E1123 23:23:41.144051 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:23:41.144143 kubelet[3389]: E1123 23:23:41.144113 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:23:41.144530 kubelet[3389]: E1123 23:23:41.144252 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xxg8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,Su
bPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-tnnrs_calico-system(af6e60cf-c463-457e-be42-88e0f43ba038): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:41.145428 kubelet[3389]: E1123 23:23:41.145401 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnnrs" podUID="af6e60cf-c463-457e-be42-88e0f43ba038" Nov 23 23:23:42.637222 containerd[1900]: time="2025-11-23T23:23:42.637170876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:23:42.881556 containerd[1900]: time="2025-11-23T23:23:42.881392914Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:42.884729 containerd[1900]: time="2025-11-23T23:23:42.884640171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:23:42.885044 containerd[1900]: time="2025-11-23T23:23:42.884717965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:23:42.885320 kubelet[3389]: E1123 23:23:42.885259 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:23:42.885599 kubelet[3389]: E1123 23:23:42.885325 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:23:42.885620 kubelet[3389]: E1123 
23:23:42.885497 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prb76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-69658b8f65-kmrst_calico-system(121b3bf3-7703-467a-986e-3619eec56340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:42.887428 kubelet[3389]: E1123 23:23:42.887308 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69658b8f65-kmrst" podUID="121b3bf3-7703-467a-986e-3619eec56340" Nov 23 23:23:44.637883 containerd[1900]: time="2025-11-23T23:23:44.637587431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:23:44.895502 containerd[1900]: 
time="2025-11-23T23:23:44.895389601Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:44.898842 containerd[1900]: time="2025-11-23T23:23:44.898805958Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:23:44.899388 containerd[1900]: time="2025-11-23T23:23:44.898864528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:23:44.899426 kubelet[3389]: E1123 23:23:44.898957 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:23:44.899426 kubelet[3389]: E1123 23:23:44.899001 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:23:44.899426 kubelet[3389]: E1123 23:23:44.899089 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfz2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7tjz6_calico-system(55b22252-8f3d-48bd-88cb-ddab5e9d791f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:44.901326 containerd[1900]: time="2025-11-23T23:23:44.901087546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:23:45.149694 containerd[1900]: time="2025-11-23T23:23:45.149575983Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:23:45.152643 containerd[1900]: time="2025-11-23T23:23:45.152604665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:23:45.152689 containerd[1900]: time="2025-11-23T23:23:45.152681763Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:23:45.152853 kubelet[3389]: E1123 23:23:45.152805 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:23:45.152904 kubelet[3389]: E1123 23:23:45.152864 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:23:45.152990 kubelet[3389]: E1123 
23:23:45.152958 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfz2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-7tjz6_calico-system(55b22252-8f3d-48bd-88cb-ddab5e9d791f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:23:45.154123 kubelet[3389]: E1123 23:23:45.154092 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f" Nov 23 23:23:49.638837 kubelet[3389]: E1123 23:23:49.638752 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-dw8vv" podUID="a2c796bc-ea26-4f69-bc19-822da4c56dfe" Nov 23 23:23:49.639510 kubelet[3389]: E1123 23:23:49.639074 3389 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-nzxhv" podUID="a24d7295-0bba-4952-852f-37a344f80dea" Nov 23 23:23:50.637718 kubelet[3389]: E1123 23:23:50.637643 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-697f894766-kwrfw" podUID="76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7" Nov 23 23:23:54.636767 kubelet[3389]: E1123 23:23:54.636618 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d6799dd75-7vqw6" podUID="862cceb0-6c7b-4371-aa49-25852268d3a1" Nov 23 23:23:56.636683 kubelet[3389]: E1123 23:23:56.636447 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnnrs" podUID="af6e60cf-c463-457e-be42-88e0f43ba038" Nov 23 23:23:57.637085 kubelet[3389]: E1123 23:23:57.636997 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69658b8f65-kmrst" podUID="121b3bf3-7703-467a-986e-3619eec56340" Nov 23 23:24:00.637520 kubelet[3389]: E1123 23:24:00.637429 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f" Nov 23 23:24:01.638499 containerd[1900]: time="2025-11-23T23:24:01.638106473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:24:01.967302 containerd[1900]: time="2025-11-23T23:24:01.967246126Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:01.970408 containerd[1900]: time="2025-11-23T23:24:01.970339087Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:24:01.970408 containerd[1900]: time="2025-11-23T23:24:01.970383649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:24:01.970631 kubelet[3389]: E1123 23:24:01.970522 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:24:01.970631 kubelet[3389]: E1123 23:24:01.970570 3389 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:24:01.971847 kubelet[3389]: E1123 23:24:01.971028 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqlfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f55996b-dw8vv_calico-apiserver(a2c796bc-ea26-4f69-bc19-822da4c56dfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:01.972513 kubelet[3389]: E1123 23:24:01.972463 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-dw8vv" podUID="a2c796bc-ea26-4f69-bc19-822da4c56dfe" Nov 23 23:24:04.637128 containerd[1900]: time="2025-11-23T23:24:04.636891159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:24:04.897403 containerd[1900]: 
time="2025-11-23T23:24:04.897239786Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:04.900988 containerd[1900]: time="2025-11-23T23:24:04.900842344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:24:04.900988 containerd[1900]: time="2025-11-23T23:24:04.900903554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:24:04.901209 kubelet[3389]: E1123 23:24:04.901158 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:24:04.901751 kubelet[3389]: E1123 23:24:04.901542 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:24:04.901751 kubelet[3389]: E1123 23:24:04.901651 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2hkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f55996b-nzxhv_calico-apiserver(a24d7295-0bba-4952-852f-37a344f80dea): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:04.903697 kubelet[3389]: E1123 23:24:04.903068 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-nzxhv" podUID="a24d7295-0bba-4952-852f-37a344f80dea" Nov 23 23:24:05.638164 containerd[1900]: time="2025-11-23T23:24:05.638123927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:24:05.931488 containerd[1900]: time="2025-11-23T23:24:05.931382252Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:05.935795 containerd[1900]: time="2025-11-23T23:24:05.935736937Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:24:05.935908 containerd[1900]: time="2025-11-23T23:24:05.935772234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:24:05.935972 kubelet[3389]: E1123 23:24:05.935914 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:24:05.936314 kubelet[3389]: E1123 23:24:05.935979 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:24:05.936314 kubelet[3389]: E1123 23:24:05.936072 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:242e1aff532942968466fb3afe4b9750,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g64k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Contain
erResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-697f894766-kwrfw_calico-system(76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:05.938203 containerd[1900]: time="2025-11-23T23:24:05.938143107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:24:06.222693 containerd[1900]: time="2025-11-23T23:24:06.222567514Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:06.226319 containerd[1900]: time="2025-11-23T23:24:06.226276707Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:24:06.226375 containerd[1900]: time="2025-11-23T23:24:06.226361246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:24:06.226569 kubelet[3389]: E1123 23:24:06.226526 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:24:06.226631 kubelet[3389]: E1123 23:24:06.226577 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:24:06.226711 kubelet[3389]: E1123 23:24:06.226670 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g64k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-697f894766-kwrfw_calico-system(76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:06.228671 kubelet[3389]: E1123 23:24:06.228633 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-697f894766-kwrfw" podUID="76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7" Nov 23 23:24:06.638655 containerd[1900]: time="2025-11-23T23:24:06.638309351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:24:06.887988 containerd[1900]: time="2025-11-23T23:24:06.887942679Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:06.892866 containerd[1900]: time="2025-11-23T23:24:06.892593701Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:24:06.892866 containerd[1900]: time="2025-11-23T23:24:06.892672215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:24:06.893069 kubelet[3389]: E1123 23:24:06.892831 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:24:06.893233 kubelet[3389]: E1123 23:24:06.893081 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:24:06.893456 kubelet[3389]: E1123 23:24:06.893348 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6nmjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d6799dd75-7vqw6_calico-apiserver(862cceb0-6c7b-4371-aa49-25852268d3a1): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:06.895006 kubelet[3389]: E1123 23:24:06.894958 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d6799dd75-7vqw6" podUID="862cceb0-6c7b-4371-aa49-25852268d3a1" Nov 23 23:24:10.638478 containerd[1900]: time="2025-11-23T23:24:10.638408015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:24:10.917341 containerd[1900]: time="2025-11-23T23:24:10.917268540Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:10.921052 containerd[1900]: time="2025-11-23T23:24:10.921023830Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:24:10.921114 containerd[1900]: time="2025-11-23T23:24:10.921081080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:24:10.921198 kubelet[3389]: E1123 23:24:10.921162 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:24:10.921839 kubelet[3389]: E1123 23:24:10.921201 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:24:10.921839 kubelet[3389]: E1123 23:24:10.921335 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xxg8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,Su
bPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-tnnrs_calico-system(af6e60cf-c463-457e-be42-88e0f43ba038): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:10.923122 kubelet[3389]: E1123 23:24:10.923052 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnnrs" podUID="af6e60cf-c463-457e-be42-88e0f43ba038" Nov 23 23:24:11.638962 containerd[1900]: time="2025-11-23T23:24:11.638895688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:24:11.895971 containerd[1900]: time="2025-11-23T23:24:11.895921993Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:11.899511 containerd[1900]: time="2025-11-23T23:24:11.899413764Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:24:11.899511 containerd[1900]: time="2025-11-23T23:24:11.899428965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:24:11.899743 kubelet[3389]: E1123 23:24:11.899696 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:24:11.899795 kubelet[3389]: E1123 23:24:11.899748 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:24:11.900071 containerd[1900]: time="2025-11-23T23:24:11.900049840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:24:11.900766 kubelet[3389]: E1123 23:24:11.900684 3389 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfz2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7tjz6_calico-system(55b22252-8f3d-48bd-88cb-ddab5e9d791f): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:12.163705 containerd[1900]: time="2025-11-23T23:24:12.163313224Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:12.166539 containerd[1900]: time="2025-11-23T23:24:12.166507593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:24:12.166710 containerd[1900]: time="2025-11-23T23:24:12.166576163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:24:12.166752 kubelet[3389]: E1123 23:24:12.166692 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:24:12.166752 kubelet[3389]: E1123 23:24:12.166730 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:24:12.167502 kubelet[3389]: E1123 23:24:12.166957 3389 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prb76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-69658b8f65-kmrst_calico-system(121b3bf3-7703-467a-986e-3619eec56340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:12.167914 containerd[1900]: time="2025-11-23T23:24:12.167747383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:24:12.168752 kubelet[3389]: E1123 23:24:12.168239 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69658b8f65-kmrst" podUID="121b3bf3-7703-467a-986e-3619eec56340" Nov 23 23:24:12.428246 
containerd[1900]: time="2025-11-23T23:24:12.427807981Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:12.430860 containerd[1900]: time="2025-11-23T23:24:12.430830361Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:24:12.430914 containerd[1900]: time="2025-11-23T23:24:12.430903380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:24:12.431062 kubelet[3389]: E1123 23:24:12.431028 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:24:12.431108 kubelet[3389]: E1123 23:24:12.431074 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:24:12.431196 kubelet[3389]: E1123 23:24:12.431160 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfz2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7tjz6_calico-system(55b22252-8f3d-48bd-88cb-ddab5e9d791f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:12.432670 kubelet[3389]: E1123 23:24:12.432638 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f" Nov 23 23:24:15.639608 kubelet[3389]: E1123 23:24:15.639536 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-dw8vv" podUID="a2c796bc-ea26-4f69-bc19-822da4c56dfe" Nov 23 23:24:17.639261 kubelet[3389]: E1123 23:24:17.639219 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-nzxhv" podUID="a24d7295-0bba-4952-852f-37a344f80dea" Nov 23 23:24:17.639847 kubelet[3389]: E1123 23:24:17.639807 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d6799dd75-7vqw6" podUID="862cceb0-6c7b-4371-aa49-25852268d3a1" Nov 23 23:24:20.636636 kubelet[3389]: E1123 23:24:20.636559 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-697f894766-kwrfw" podUID="76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7" Nov 23 
23:24:23.638341 kubelet[3389]: E1123 23:24:23.637435 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f" Nov 23 23:24:25.637700 kubelet[3389]: E1123 23:24:25.637377 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnnrs" podUID="af6e60cf-c463-457e-be42-88e0f43ba038" Nov 23 23:24:25.638770 kubelet[3389]: E1123 23:24:25.638240 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69658b8f65-kmrst" podUID="121b3bf3-7703-467a-986e-3619eec56340" Nov 23 23:24:28.635777 kubelet[3389]: E1123 23:24:28.635720 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d6799dd75-7vqw6" podUID="862cceb0-6c7b-4371-aa49-25852268d3a1" Nov 23 23:24:29.637713 kubelet[3389]: E1123 23:24:29.637672 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-nzxhv" podUID="a24d7295-0bba-4952-852f-37a344f80dea" Nov 23 23:24:29.638093 kubelet[3389]: E1123 23:24:29.637918 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-dw8vv" podUID="a2c796bc-ea26-4f69-bc19-822da4c56dfe" Nov 23 23:24:32.639110 kubelet[3389]: E1123 23:24:32.639028 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-697f894766-kwrfw" podUID="76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7" Nov 23 23:24:34.638964 kubelet[3389]: E1123 23:24:34.638894 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f" Nov 23 23:24:36.636719 kubelet[3389]: E1123 23:24:36.636428 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69658b8f65-kmrst" podUID="121b3bf3-7703-467a-986e-3619eec56340" Nov 23 23:24:39.640322 kubelet[3389]: E1123 23:24:39.638610 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d6799dd75-7vqw6" podUID="862cceb0-6c7b-4371-aa49-25852268d3a1" Nov 23 23:24:39.640322 kubelet[3389]: E1123 23:24:39.639505 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnnrs" podUID="af6e60cf-c463-457e-be42-88e0f43ba038" Nov 23 23:24:44.636692 kubelet[3389]: E1123 23:24:44.636636 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-nzxhv" podUID="a24d7295-0bba-4952-852f-37a344f80dea" Nov 23 23:24:44.638214 containerd[1900]: time="2025-11-23T23:24:44.636899011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:24:44.857687 containerd[1900]: time="2025-11-23T23:24:44.857642779Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:44.860836 containerd[1900]: time="2025-11-23T23:24:44.860791275Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:24:44.860915 containerd[1900]: time="2025-11-23T23:24:44.860868893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:24:44.861086 kubelet[3389]: E1123 23:24:44.861045 3389 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:24:44.861147 kubelet[3389]: E1123 23:24:44.861093 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:24:44.861224 kubelet[3389]: E1123 23:24:44.861192 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqlfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f55996b-dw8vv_calico-apiserver(a2c796bc-ea26-4f69-bc19-822da4c56dfe): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:44.862627 kubelet[3389]: E1123 23:24:44.862599 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-dw8vv" podUID="a2c796bc-ea26-4f69-bc19-822da4c56dfe" Nov 23 23:24:45.637330 kubelet[3389]: E1123 23:24:45.637227 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-697f894766-kwrfw" podUID="76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7" Nov 23 23:24:48.637778 kubelet[3389]: E1123 23:24:48.637738 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f" Nov 23 23:24:49.637651 kubelet[3389]: E1123 23:24:49.637360 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69658b8f65-kmrst" podUID="121b3bf3-7703-467a-986e-3619eec56340" Nov 23 23:24:53.637383 containerd[1900]: time="2025-11-23T23:24:53.637344019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:24:53.811067 systemd[1]: Started sshd@7-10.200.20.35:22-10.200.16.10:59756.service - OpenSSH per-connection server daemon (10.200.16.10:59756). 
Nov 23 23:24:53.918504 containerd[1900]: time="2025-11-23T23:24:53.918467860Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:53.921975 containerd[1900]: time="2025-11-23T23:24:53.921924253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:24:53.922144 containerd[1900]: time="2025-11-23T23:24:53.922003488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:24:53.922247 kubelet[3389]: E1123 23:24:53.922216 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:24:53.922665 kubelet[3389]: E1123 23:24:53.922334 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:24:53.922872 kubelet[3389]: E1123 23:24:53.922747 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6nmjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d6799dd75-7vqw6_calico-apiserver(862cceb0-6c7b-4371-aa49-25852268d3a1): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:53.924058 kubelet[3389]: E1123 23:24:53.924021 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d6799dd75-7vqw6" podUID="862cceb0-6c7b-4371-aa49-25852268d3a1" Nov 23 23:24:54.269360 sshd[5619]: Accepted publickey for core from 10.200.16.10 port 59756 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:24:54.270917 sshd-session[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:24:54.277350 systemd-logind[1875]: New session 10 of user core. Nov 23 23:24:54.280414 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 23 23:24:54.637158 containerd[1900]: time="2025-11-23T23:24:54.637026443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:24:54.679115 sshd[5622]: Connection closed by 10.200.16.10 port 59756 Nov 23 23:24:54.679856 sshd-session[5619]: pam_unix(sshd:session): session closed for user core Nov 23 23:24:54.684550 systemd-logind[1875]: Session 10 logged out. Waiting for processes to exit. Nov 23 23:24:54.684961 systemd[1]: sshd@7-10.200.20.35:22-10.200.16.10:59756.service: Deactivated successfully. Nov 23 23:24:54.688896 systemd[1]: session-10.scope: Deactivated successfully. Nov 23 23:24:54.692345 systemd-logind[1875]: Removed session 10. 
Nov 23 23:24:54.864597 containerd[1900]: time="2025-11-23T23:24:54.864555705Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:54.867896 containerd[1900]: time="2025-11-23T23:24:54.867861742Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:24:54.868030 containerd[1900]: time="2025-11-23T23:24:54.867945760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:24:54.868100 kubelet[3389]: E1123 23:24:54.868058 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:24:54.868144 kubelet[3389]: E1123 23:24:54.868106 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:24:54.868244 kubelet[3389]: E1123 23:24:54.868209 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xxg8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-tnnrs_calico-system(af6e60cf-c463-457e-be42-88e0f43ba038): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:54.869524 kubelet[3389]: E1123 23:24:54.869442 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnnrs" podUID="af6e60cf-c463-457e-be42-88e0f43ba038" Nov 23 23:24:55.637861 kubelet[3389]: E1123 23:24:55.637688 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-dw8vv" podUID="a2c796bc-ea26-4f69-bc19-822da4c56dfe" Nov 23 23:24:56.637170 containerd[1900]: time="2025-11-23T23:24:56.636613994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:24:56.872092 containerd[1900]: time="2025-11-23T23:24:56.872046224Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:56.875301 containerd[1900]: time="2025-11-23T23:24:56.875243337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:24:56.875301 containerd[1900]: time="2025-11-23T23:24:56.875266138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:24:56.875653 kubelet[3389]: E1123 23:24:56.875436 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:24:56.875653 kubelet[3389]: E1123 23:24:56.875484 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:24:56.875653 kubelet[3389]: E1123 23:24:56.875579 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:242e1aff532942968466fb3afe4b9750,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g64k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-697f894766-kwrfw_calico-system(76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" 
Nov 23 23:24:56.877824 containerd[1900]: time="2025-11-23T23:24:56.877722716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:24:57.135945 containerd[1900]: time="2025-11-23T23:24:57.135893149Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:57.142306 containerd[1900]: time="2025-11-23T23:24:57.142239437Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:24:57.142448 containerd[1900]: time="2025-11-23T23:24:57.142246110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:24:57.142611 kubelet[3389]: E1123 23:24:57.142539 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:24:57.142611 kubelet[3389]: E1123 23:24:57.142598 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:24:57.142837 kubelet[3389]: E1123 23:24:57.142798 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g64k9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-697f894766-kwrfw_calico-system(76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:57.144713 kubelet[3389]: E1123 23:24:57.144686 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-697f894766-kwrfw" podUID="76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7" Nov 23 23:24:57.637965 containerd[1900]: time="2025-11-23T23:24:57.637924347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:24:57.856500 containerd[1900]: time="2025-11-23T23:24:57.856444240Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:24:57.865004 containerd[1900]: time="2025-11-23T23:24:57.864887304Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:24:57.865004 containerd[1900]: time="2025-11-23T23:24:57.864979195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:24:57.865374 
kubelet[3389]: E1123 23:24:57.865319 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:24:57.865445 kubelet[3389]: E1123 23:24:57.865391 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:24:57.865933 kubelet[3389]: E1123 23:24:57.865518 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2hkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f55996b-nzxhv_calico-apiserver(a24d7295-0bba-4952-852f-37a344f80dea): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:24:57.867205 kubelet[3389]: E1123 23:24:57.867169 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-nzxhv" podUID="a24d7295-0bba-4952-852f-37a344f80dea" Nov 23 23:24:59.752412 systemd[1]: Started sshd@8-10.200.20.35:22-10.200.16.10:59760.service - OpenSSH per-connection server daemon (10.200.16.10:59760). Nov 23 23:25:00.170889 sshd[5674]: Accepted publickey for core from 10.200.16.10 port 59760 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:25:00.172711 sshd-session[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:25:00.176442 systemd-logind[1875]: New session 11 of user core. Nov 23 23:25:00.182413 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 23 23:25:00.525095 sshd[5677]: Connection closed by 10.200.16.10 port 59760 Nov 23 23:25:00.526082 sshd-session[5674]: pam_unix(sshd:session): session closed for user core Nov 23 23:25:00.530172 systemd[1]: sshd@8-10.200.20.35:22-10.200.16.10:59760.service: Deactivated successfully. Nov 23 23:25:00.533701 systemd[1]: session-11.scope: Deactivated successfully. Nov 23 23:25:00.535252 systemd-logind[1875]: Session 11 logged out. Waiting for processes to exit. Nov 23 23:25:00.536989 systemd-logind[1875]: Removed session 11. 
Nov 23 23:25:01.637320 containerd[1900]: time="2025-11-23T23:25:01.637158810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 23 23:25:01.868560 containerd[1900]: time="2025-11-23T23:25:01.868513078Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 23:25:01.872154 containerd[1900]: time="2025-11-23T23:25:01.872117704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 23 23:25:01.872285 containerd[1900]: time="2025-11-23T23:25:01.872194979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 23 23:25:01.872370 kubelet[3389]: E1123 23:25:01.872327 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 23 23:25:01.872632 kubelet[3389]: E1123 23:25:01.872378 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 23 23:25:01.872632 kubelet[3389]: E1123 23:25:01.872502 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfz2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7tjz6_calico-system(55b22252-8f3d-48bd-88cb-ddab5e9d791f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:25:01.875041 containerd[1900]: time="2025-11-23T23:25:01.874526455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 23 23:25:02.146017 containerd[1900]: time="2025-11-23T23:25:02.145976099Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 23:25:02.151346 containerd[1900]: time="2025-11-23T23:25:02.151282279Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 23 23:25:02.151463 containerd[1900]: time="2025-11-23T23:25:02.151314112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 23 23:25:02.151533 kubelet[3389]: E1123 23:25:02.151494 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 23 23:25:02.151694 kubelet[3389]: E1123 23:25:02.151542 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 23 23:25:02.151694 kubelet[3389]: E1123 23:25:02.151652 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfz2v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7tjz6_calico-system(55b22252-8f3d-48bd-88cb-ddab5e9d791f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:25:02.152936 kubelet[3389]: E1123 23:25:02.152895 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f"
Nov 23 23:25:04.636219 containerd[1900]: time="2025-11-23T23:25:04.636172609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 23 23:25:04.851955 containerd[1900]: time="2025-11-23T23:25:04.851859046Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 23:25:04.855888 containerd[1900]: time="2025-11-23T23:25:04.855792746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 23 23:25:04.855888 containerd[1900]: time="2025-11-23T23:25:04.855850332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 23 23:25:04.856071 kubelet[3389]: E1123 23:25:04.856021 3389 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 23 23:25:04.856799 kubelet[3389]: E1123 23:25:04.856082 3389 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 23 23:25:04.856799 kubelet[3389]: E1123 23:25:04.856195 3389 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prb76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-69658b8f65-kmrst_calico-system(121b3bf3-7703-467a-986e-3619eec56340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:25:04.857508 kubelet[3389]: E1123 23:25:04.857466 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69658b8f65-kmrst" podUID="121b3bf3-7703-467a-986e-3619eec56340"
Nov 23 23:25:05.606491 systemd[1]: Started sshd@9-10.200.20.35:22-10.200.16.10:51134.service - OpenSSH per-connection server daemon (10.200.16.10:51134).
Nov 23 23:25:06.041978 sshd[5689]: Accepted publickey for core from 10.200.16.10 port 51134 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:25:06.043918 sshd-session[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:25:06.047809 systemd-logind[1875]: New session 12 of user core.
Nov 23 23:25:06.053466 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 23 23:25:06.414187 sshd[5692]: Connection closed by 10.200.16.10 port 51134
Nov 23 23:25:06.414104 sshd-session[5689]: pam_unix(sshd:session): session closed for user core
Nov 23 23:25:06.417174 systemd[1]: sshd@9-10.200.20.35:22-10.200.16.10:51134.service: Deactivated successfully.
Nov 23 23:25:06.417326 systemd-logind[1875]: Session 12 logged out. Waiting for processes to exit.
Nov 23 23:25:06.419190 systemd[1]: session-12.scope: Deactivated successfully.
Nov 23 23:25:06.421873 systemd-logind[1875]: Removed session 12.
Nov 23 23:25:06.495669 systemd[1]: Started sshd@10-10.200.20.35:22-10.200.16.10:51140.service - OpenSSH per-connection server daemon (10.200.16.10:51140).
Nov 23 23:25:06.932048 sshd[5705]: Accepted publickey for core from 10.200.16.10 port 51140 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:25:06.933865 sshd-session[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:25:06.939449 systemd-logind[1875]: New session 13 of user core.
Nov 23 23:25:06.944790 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 23 23:25:07.367476 sshd[5708]: Connection closed by 10.200.16.10 port 51140
Nov 23 23:25:07.367317 sshd-session[5705]: pam_unix(sshd:session): session closed for user core
Nov 23 23:25:07.370766 systemd[1]: sshd@10-10.200.20.35:22-10.200.16.10:51140.service: Deactivated successfully.
Nov 23 23:25:07.376402 systemd[1]: session-13.scope: Deactivated successfully.
Nov 23 23:25:07.377257 systemd-logind[1875]: Session 13 logged out. Waiting for processes to exit.
Nov 23 23:25:07.380268 systemd-logind[1875]: Removed session 13.
Nov 23 23:25:07.455482 systemd[1]: Started sshd@11-10.200.20.35:22-10.200.16.10:51142.service - OpenSSH per-connection server daemon (10.200.16.10:51142).
Nov 23 23:25:07.637268 kubelet[3389]: E1123 23:25:07.637161 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-dw8vv" podUID="a2c796bc-ea26-4f69-bc19-822da4c56dfe"
Nov 23 23:25:07.906683 sshd[5718]: Accepted publickey for core from 10.200.16.10 port 51142 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:25:07.908579 sshd-session[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:25:07.913413 systemd-logind[1875]: New session 14 of user core.
Nov 23 23:25:07.918412 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 23 23:25:08.285394 sshd[5725]: Connection closed by 10.200.16.10 port 51142
Nov 23 23:25:08.286226 sshd-session[5718]: pam_unix(sshd:session): session closed for user core
Nov 23 23:25:08.290372 systemd-logind[1875]: Session 14 logged out. Waiting for processes to exit.
Nov 23 23:25:08.290689 systemd[1]: sshd@11-10.200.20.35:22-10.200.16.10:51142.service: Deactivated successfully.
Nov 23 23:25:08.292779 systemd[1]: session-14.scope: Deactivated successfully.
Nov 23 23:25:08.294275 systemd-logind[1875]: Removed session 14.
Nov 23 23:25:08.637605 kubelet[3389]: E1123 23:25:08.637355 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d6799dd75-7vqw6" podUID="862cceb0-6c7b-4371-aa49-25852268d3a1"
Nov 23 23:25:08.637605 kubelet[3389]: E1123 23:25:08.637483 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnnrs" podUID="af6e60cf-c463-457e-be42-88e0f43ba038"
Nov 23 23:25:09.639145 kubelet[3389]: E1123 23:25:09.639097 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-697f894766-kwrfw" podUID="76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7"
Nov 23 23:25:12.639334 kubelet[3389]: E1123 23:25:12.637950 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-nzxhv" podUID="a24d7295-0bba-4952-852f-37a344f80dea"
Nov 23 23:25:13.359535 systemd[1]: Started sshd@12-10.200.20.35:22-10.200.16.10:55022.service - OpenSSH per-connection server daemon (10.200.16.10:55022).
Nov 23 23:25:13.787272 sshd[5738]: Accepted publickey for core from 10.200.16.10 port 55022 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:25:13.788982 sshd-session[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:25:13.793772 systemd-logind[1875]: New session 15 of user core.
Nov 23 23:25:13.798642 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 23 23:25:14.144682 sshd[5741]: Connection closed by 10.200.16.10 port 55022
Nov 23 23:25:14.145497 sshd-session[5738]: pam_unix(sshd:session): session closed for user core
Nov 23 23:25:14.149379 systemd-logind[1875]: Session 15 logged out. Waiting for processes to exit.
Nov 23 23:25:14.149943 systemd[1]: sshd@12-10.200.20.35:22-10.200.16.10:55022.service: Deactivated successfully.
Nov 23 23:25:14.152056 systemd[1]: session-15.scope: Deactivated successfully.
Nov 23 23:25:14.153416 systemd-logind[1875]: Removed session 15.
Nov 23 23:25:15.637934 kubelet[3389]: E1123 23:25:15.637895 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69658b8f65-kmrst" podUID="121b3bf3-7703-467a-986e-3619eec56340"
Nov 23 23:25:15.639109 kubelet[3389]: E1123 23:25:15.639064 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f"
Nov 23 23:25:18.637186 kubelet[3389]: E1123 23:25:18.636334 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-dw8vv" podUID="a2c796bc-ea26-4f69-bc19-822da4c56dfe"
Nov 23 23:25:19.235488 systemd[1]: Started sshd@13-10.200.20.35:22-10.200.16.10:55026.service - OpenSSH per-connection server daemon (10.200.16.10:55026).
Nov 23 23:25:19.701332 sshd[5755]: Accepted publickey for core from 10.200.16.10 port 55026 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:25:19.702329 sshd-session[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:25:19.706240 systemd-logind[1875]: New session 16 of user core.
Nov 23 23:25:19.713419 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 23 23:25:20.086509 sshd[5758]: Connection closed by 10.200.16.10 port 55026
Nov 23 23:25:20.086323 sshd-session[5755]: pam_unix(sshd:session): session closed for user core
Nov 23 23:25:20.090397 systemd-logind[1875]: Session 16 logged out. Waiting for processes to exit.
Nov 23 23:25:20.090918 systemd[1]: sshd@13-10.200.20.35:22-10.200.16.10:55026.service: Deactivated successfully.
Nov 23 23:25:20.093474 systemd[1]: session-16.scope: Deactivated successfully.
Nov 23 23:25:20.095129 systemd-logind[1875]: Removed session 16.
Nov 23 23:25:20.638239 kubelet[3389]: E1123 23:25:20.637760 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-697f894766-kwrfw" podUID="76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7"
Nov 23 23:25:21.638433 kubelet[3389]: E1123 23:25:21.638354 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnnrs" podUID="af6e60cf-c463-457e-be42-88e0f43ba038"
Nov 23 23:25:23.637009 kubelet[3389]: E1123 23:25:23.636721 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-nzxhv" podUID="a24d7295-0bba-4952-852f-37a344f80dea"
Nov 23 23:25:23.637009 kubelet[3389]: E1123 23:25:23.636973 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d6799dd75-7vqw6" podUID="862cceb0-6c7b-4371-aa49-25852268d3a1"
Nov 23 23:25:25.175806 systemd[1]: Started sshd@14-10.200.20.35:22-10.200.16.10:43966.service - OpenSSH per-connection server daemon (10.200.16.10:43966).
Nov 23 23:25:25.623107 sshd[5770]: Accepted publickey for core from 10.200.16.10 port 43966 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:25:25.623995 sshd-session[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:25:25.627582 systemd-logind[1875]: New session 17 of user core.
Nov 23 23:25:25.636426 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 23 23:25:25.990966 sshd[5773]: Connection closed by 10.200.16.10 port 43966
Nov 23 23:25:25.991455 sshd-session[5770]: pam_unix(sshd:session): session closed for user core
Nov 23 23:25:25.994751 systemd[1]: sshd@14-10.200.20.35:22-10.200.16.10:43966.service: Deactivated successfully.
Nov 23 23:25:25.997176 systemd[1]: session-17.scope: Deactivated successfully.
Nov 23 23:25:25.999798 systemd-logind[1875]: Session 17 logged out. Waiting for processes to exit.
Nov 23 23:25:26.001012 systemd-logind[1875]: Removed session 17.
Nov 23 23:25:26.061086 systemd[1]: Started sshd@15-10.200.20.35:22-10.200.16.10:43980.service - OpenSSH per-connection server daemon (10.200.16.10:43980).
Nov 23 23:25:26.483321 sshd[5786]: Accepted publickey for core from 10.200.16.10 port 43980 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:25:26.484430 sshd-session[5786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:25:26.488667 systemd-logind[1875]: New session 18 of user core.
Nov 23 23:25:26.493613 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 23 23:25:26.636553 kubelet[3389]: E1123 23:25:26.636506 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69658b8f65-kmrst" podUID="121b3bf3-7703-467a-986e-3619eec56340"
Nov 23 23:25:26.922056 sshd[5789]: Connection closed by 10.200.16.10 port 43980
Nov 23 23:25:26.921421 sshd-session[5786]: pam_unix(sshd:session): session closed for user core
Nov 23 23:25:26.924009 systemd-logind[1875]: Session 18 logged out. Waiting for processes to exit.
Nov 23 23:25:26.924976 systemd[1]: sshd@15-10.200.20.35:22-10.200.16.10:43980.service: Deactivated successfully.
Nov 23 23:25:26.926946 systemd[1]: session-18.scope: Deactivated successfully.
Nov 23 23:25:26.929323 systemd-logind[1875]: Removed session 18.
Nov 23 23:25:26.998483 systemd[1]: Started sshd@16-10.200.20.35:22-10.200.16.10:43992.service - OpenSSH per-connection server daemon (10.200.16.10:43992).
Nov 23 23:25:27.428344 sshd[5799]: Accepted publickey for core from 10.200.16.10 port 43992 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:25:27.429888 sshd-session[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:25:27.434916 systemd-logind[1875]: New session 19 of user core.
Nov 23 23:25:27.442375 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 23 23:25:27.638403 kubelet[3389]: E1123 23:25:27.638358 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f"
Nov 23 23:25:28.280525 sshd[5802]: Connection closed by 10.200.16.10 port 43992
Nov 23 23:25:28.281004 sshd-session[5799]: pam_unix(sshd:session): session closed for user core
Nov 23 23:25:28.284102 systemd[1]: sshd@16-10.200.20.35:22-10.200.16.10:43992.service: Deactivated successfully.
Nov 23 23:25:28.286989 systemd[1]: session-19.scope: Deactivated successfully.
Nov 23 23:25:28.287949 systemd-logind[1875]: Session 19 logged out. Waiting for processes to exit.
Nov 23 23:25:28.289807 systemd-logind[1875]: Removed session 19.
Nov 23 23:25:28.382675 systemd[1]: Started sshd@17-10.200.20.35:22-10.200.16.10:44002.service - OpenSSH per-connection server daemon (10.200.16.10:44002).
Nov 23 23:25:28.840564 sshd[5821]: Accepted publickey for core from 10.200.16.10 port 44002 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:25:28.844110 sshd-session[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:25:28.847566 systemd-logind[1875]: New session 20 of user core.
Nov 23 23:25:28.855415 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 23 23:25:29.293495 sshd[5850]: Connection closed by 10.200.16.10 port 44002
Nov 23 23:25:29.293283 sshd-session[5821]: pam_unix(sshd:session): session closed for user core
Nov 23 23:25:29.300091 systemd[1]: sshd@17-10.200.20.35:22-10.200.16.10:44002.service: Deactivated successfully.
Nov 23 23:25:29.301875 systemd[1]: session-20.scope: Deactivated successfully.
Nov 23 23:25:29.303679 systemd-logind[1875]: Session 20 logged out. Waiting for processes to exit.
Nov 23 23:25:29.305699 systemd-logind[1875]: Removed session 20.
Nov 23 23:25:29.366087 systemd[1]: Started sshd@18-10.200.20.35:22-10.200.16.10:44004.service - OpenSSH per-connection server daemon (10.200.16.10:44004).
Nov 23 23:25:29.805947 sshd[5860]: Accepted publickey for core from 10.200.16.10 port 44004 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU
Nov 23 23:25:29.808456 sshd-session[5860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:25:29.813790 systemd-logind[1875]: New session 21 of user core.
Nov 23 23:25:29.820614 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 23 23:25:30.179908 sshd[5863]: Connection closed by 10.200.16.10 port 44004
Nov 23 23:25:30.200889 sshd-session[5860]: pam_unix(sshd:session): session closed for user core
Nov 23 23:25:30.204534 systemd[1]: sshd@18-10.200.20.35:22-10.200.16.10:44004.service: Deactivated successfully.
Nov 23 23:25:30.206668 systemd[1]: session-21.scope: Deactivated successfully.
Nov 23 23:25:30.208974 systemd-logind[1875]: Session 21 logged out. Waiting for processes to exit.
Nov 23 23:25:30.210966 systemd-logind[1875]: Removed session 21.
Nov 23 23:25:31.645644 kubelet[3389]: E1123 23:25:31.645602 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-dw8vv" podUID="a2c796bc-ea26-4f69-bc19-822da4c56dfe"
Nov 23 23:25:32.636970 kubelet[3389]: E1123 23:25:32.636813 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-697f894766-kwrfw" podUID="76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7"
Nov 23 23:25:34.636562 kubelet[3389]: E1123 23:25:34.636516 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnnrs" podUID="af6e60cf-c463-457e-be42-88e0f43ba038"
Nov 23 23:25:34.637497 kubelet[3389]: E1123 23:25:34.636864 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-nzxhv" podUID="a24d7295-0bba-4952-852f-37a344f80dea"
Nov 23 23:25:35.268157 systemd[1]: Started sshd@19-10.200.20.35:22-10.200.16.10:57450.service - OpenSSH per-connection server daemon (10.200.16.10:57450).
Nov 23 23:25:35.638262 kubelet[3389]: E1123 23:25:35.638099 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d6799dd75-7vqw6" podUID="862cceb0-6c7b-4371-aa49-25852268d3a1" Nov 23 23:25:35.725733 sshd[5877]: Accepted publickey for core from 10.200.16.10 port 57450 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:25:35.726871 sshd-session[5877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:25:35.731169 systemd-logind[1875]: New session 22 of user core. Nov 23 23:25:35.736506 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 23 23:25:36.100187 sshd[5880]: Connection closed by 10.200.16.10 port 57450 Nov 23 23:25:36.102492 sshd-session[5877]: pam_unix(sshd:session): session closed for user core Nov 23 23:25:36.107056 systemd[1]: sshd@19-10.200.20.35:22-10.200.16.10:57450.service: Deactivated successfully. Nov 23 23:25:36.110070 systemd[1]: session-22.scope: Deactivated successfully. Nov 23 23:25:36.110754 systemd-logind[1875]: Session 22 logged out. Waiting for processes to exit. Nov 23 23:25:36.112332 systemd-logind[1875]: Removed session 22. 
Nov 23 23:25:38.636282 kubelet[3389]: E1123 23:25:38.636181 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69658b8f65-kmrst" podUID="121b3bf3-7703-467a-986e-3619eec56340" Nov 23 23:25:41.181859 systemd[1]: Started sshd@20-10.200.20.35:22-10.200.16.10:37410.service - OpenSSH per-connection server daemon (10.200.16.10:37410). Nov 23 23:25:41.634927 sshd[5894]: Accepted publickey for core from 10.200.16.10 port 37410 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:25:41.634770 sshd-session[5894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:25:41.643435 kubelet[3389]: E1123 23:25:41.641776 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f" Nov 23 23:25:41.647179 systemd-logind[1875]: New session 23 of user core. Nov 23 23:25:41.650452 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 23 23:25:42.015222 sshd[5897]: Connection closed by 10.200.16.10 port 37410 Nov 23 23:25:42.015783 sshd-session[5894]: pam_unix(sshd:session): session closed for user core Nov 23 23:25:42.019511 systemd[1]: sshd@20-10.200.20.35:22-10.200.16.10:37410.service: Deactivated successfully. Nov 23 23:25:42.021837 systemd[1]: session-23.scope: Deactivated successfully. Nov 23 23:25:42.022817 systemd-logind[1875]: Session 23 logged out. Waiting for processes to exit. Nov 23 23:25:42.024901 systemd-logind[1875]: Removed session 23. Nov 23 23:25:44.637101 kubelet[3389]: E1123 23:25:44.637056 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-697f894766-kwrfw" podUID="76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7" Nov 23 23:25:46.636967 kubelet[3389]: 
E1123 23:25:46.636822 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-nzxhv" podUID="a24d7295-0bba-4952-852f-37a344f80dea" Nov 23 23:25:46.636967 kubelet[3389]: E1123 23:25:46.636871 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-dw8vv" podUID="a2c796bc-ea26-4f69-bc19-822da4c56dfe" Nov 23 23:25:47.091597 systemd[1]: Started sshd@21-10.200.20.35:22-10.200.16.10:37412.service - OpenSSH per-connection server daemon (10.200.16.10:37412). Nov 23 23:25:47.533324 sshd[5909]: Accepted publickey for core from 10.200.16.10 port 37412 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:25:47.534870 sshd-session[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:25:47.542337 systemd-logind[1875]: New session 24 of user core. Nov 23 23:25:47.547416 systemd[1]: Started session-24.scope - Session 24 of User core. 
Nov 23 23:25:47.901621 sshd[5914]: Connection closed by 10.200.16.10 port 37412 Nov 23 23:25:47.903077 sshd-session[5909]: pam_unix(sshd:session): session closed for user core Nov 23 23:25:47.907796 systemd[1]: sshd@21-10.200.20.35:22-10.200.16.10:37412.service: Deactivated successfully. Nov 23 23:25:47.910607 systemd[1]: session-24.scope: Deactivated successfully. Nov 23 23:25:47.912759 systemd-logind[1875]: Session 24 logged out. Waiting for processes to exit. Nov 23 23:25:47.914140 systemd-logind[1875]: Removed session 24. Nov 23 23:25:49.637441 kubelet[3389]: E1123 23:25:49.637304 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d6799dd75-7vqw6" podUID="862cceb0-6c7b-4371-aa49-25852268d3a1" Nov 23 23:25:49.638767 kubelet[3389]: E1123 23:25:49.638484 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnnrs" podUID="af6e60cf-c463-457e-be42-88e0f43ba038" Nov 23 23:25:52.976133 systemd[1]: Started sshd@22-10.200.20.35:22-10.200.16.10:42968.service - OpenSSH per-connection server daemon (10.200.16.10:42968). 
Nov 23 23:25:53.394114 sshd[5926]: Accepted publickey for core from 10.200.16.10 port 42968 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:25:53.395744 sshd-session[5926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:25:53.402883 systemd-logind[1875]: New session 25 of user core. Nov 23 23:25:53.406880 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 23 23:25:53.638610 kubelet[3389]: E1123 23:25:53.638574 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69658b8f65-kmrst" podUID="121b3bf3-7703-467a-986e-3619eec56340" Nov 23 23:25:53.640844 kubelet[3389]: E1123 23:25:53.640775 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f" Nov 23 23:25:53.760833 sshd[5929]: Connection closed by 10.200.16.10 port 42968 Nov 23 23:25:53.762176 sshd-session[5926]: pam_unix(sshd:session): session closed for user core Nov 23 23:25:53.765451 systemd[1]: sshd@22-10.200.20.35:22-10.200.16.10:42968.service: Deactivated successfully. Nov 23 23:25:53.767488 systemd[1]: session-25.scope: Deactivated successfully. Nov 23 23:25:53.768348 systemd-logind[1875]: Session 25 logged out. Waiting for processes to exit. Nov 23 23:25:53.770790 systemd-logind[1875]: Removed session 25. Nov 23 23:25:55.636990 kubelet[3389]: E1123 23:25:55.636881 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-697f894766-kwrfw" podUID="76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7" Nov 23 23:25:58.847500 systemd[1]: Started sshd@23-10.200.20.35:22-10.200.16.10:42970.service - OpenSSH per-connection server daemon (10.200.16.10:42970). 
Nov 23 23:25:59.311280 sshd[5968]: Accepted publickey for core from 10.200.16.10 port 42970 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:25:59.312525 sshd-session[5968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:25:59.316604 systemd-logind[1875]: New session 26 of user core. Nov 23 23:25:59.320394 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 23 23:25:59.704834 sshd[5972]: Connection closed by 10.200.16.10 port 42970 Nov 23 23:25:59.704752 sshd-session[5968]: pam_unix(sshd:session): session closed for user core Nov 23 23:25:59.710221 systemd[1]: sshd@23-10.200.20.35:22-10.200.16.10:42970.service: Deactivated successfully. Nov 23 23:25:59.713772 systemd[1]: session-26.scope: Deactivated successfully. Nov 23 23:25:59.717389 systemd-logind[1875]: Session 26 logged out. Waiting for processes to exit. Nov 23 23:25:59.721056 systemd-logind[1875]: Removed session 26. Nov 23 23:26:00.636215 kubelet[3389]: E1123 23:26:00.636180 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-dw8vv" podUID="a2c796bc-ea26-4f69-bc19-822da4c56dfe" Nov 23 23:26:01.637837 kubelet[3389]: E1123 23:26:01.637522 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f55996b-nzxhv" podUID="a24d7295-0bba-4952-852f-37a344f80dea" Nov 23 23:26:03.637340 kubelet[3389]: E1123 23:26:03.637219 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-tnnrs" podUID="af6e60cf-c463-457e-be42-88e0f43ba038" Nov 23 23:26:03.638156 kubelet[3389]: E1123 23:26:03.637497 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d6799dd75-7vqw6" podUID="862cceb0-6c7b-4371-aa49-25852268d3a1" Nov 23 23:26:04.789489 systemd[1]: Started sshd@24-10.200.20.35:22-10.200.16.10:50784.service - OpenSSH per-connection server daemon (10.200.16.10:50784). 
Nov 23 23:26:05.247368 sshd[5989]: Accepted publickey for core from 10.200.16.10 port 50784 ssh2: RSA SHA256:4dPTGuAp9pOsPdi0AvhogkQigq1zBrZ6S+mh2L5UsPU Nov 23 23:26:05.248496 sshd-session[5989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:26:05.252167 systemd-logind[1875]: New session 27 of user core. Nov 23 23:26:05.260401 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 23 23:26:05.614802 sshd[5992]: Connection closed by 10.200.16.10 port 50784 Nov 23 23:26:05.614682 sshd-session[5989]: pam_unix(sshd:session): session closed for user core Nov 23 23:26:05.618627 systemd[1]: sshd@24-10.200.20.35:22-10.200.16.10:50784.service: Deactivated successfully. Nov 23 23:26:05.621867 systemd[1]: session-27.scope: Deactivated successfully. Nov 23 23:26:05.623737 systemd-logind[1875]: Session 27 logged out. Waiting for processes to exit. Nov 23 23:26:05.624629 systemd-logind[1875]: Removed session 27. Nov 23 23:26:05.636902 kubelet[3389]: E1123 23:26:05.636398 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-69658b8f65-kmrst" podUID="121b3bf3-7703-467a-986e-3619eec56340" Nov 23 23:26:06.636181 kubelet[3389]: E1123 23:26:06.635850 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-697f894766-kwrfw" podUID="76b4b8c0-aa90-4b9a-b362-2f2d8cafe8c7" Nov 23 23:26:06.636181 kubelet[3389]: E1123 23:26:06.635934 3389 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7tjz6" podUID="55b22252-8f3d-48bd-88cb-ddab5e9d791f"