Nov 5 15:05:24.313619 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd490] Nov 5 15:05:24.313636 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Wed Nov 5 13:42:06 -00 2025 Nov 5 15:05:24.313643 kernel: KASLR enabled Nov 5 15:05:24.313647 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Nov 5 15:05:24.313652 kernel: printk: legacy bootconsole [pl11] enabled Nov 5 15:05:24.313656 kernel: efi: EFI v2.7 by EDK II Nov 5 15:05:24.313662 kernel: efi: ACPI 2.0=0x3f979018 SMBIOS=0x3f8a0000 SMBIOS 3.0=0x3f880000 MEMATTR=0x3e89d018 RNG=0x3f979998 MEMRESERVE=0x3db7d598 Nov 5 15:05:24.313666 kernel: random: crng init done Nov 5 15:05:24.313670 kernel: secureboot: Secure boot disabled Nov 5 15:05:24.313674 kernel: ACPI: Early table checksum verification disabled Nov 5 15:05:24.313679 kernel: ACPI: RSDP 0x000000003F979018 000024 (v02 VRTUAL) Nov 5 15:05:24.313683 kernel: ACPI: XSDT 0x000000003F979F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 5 15:05:24.313687 kernel: ACPI: FACP 0x000000003F979C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 5 15:05:24.313692 kernel: ACPI: DSDT 0x000000003F95A018 01E046 (v02 MSFTVM DSDT01 00000001 INTL 20230628) Nov 5 15:05:24.313698 kernel: ACPI: DBG2 0x000000003F979B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 5 15:05:24.313702 kernel: ACPI: GTDT 0x000000003F979D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 5 15:05:24.313707 kernel: ACPI: OEM0 0x000000003F979098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 5 15:05:24.313712 kernel: ACPI: SPCR 0x000000003F979A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 5 15:05:24.313717 kernel: ACPI: APIC 0x000000003F979818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 5 15:05:24.313721 kernel: ACPI: SRAT 0x000000003F979198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 5 15:05:24.313726 kernel: ACPI: PPTT 0x000000003F979418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Nov 5 15:05:24.313730 kernel: ACPI: BGRT 0x000000003F979E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 5 15:05:24.313735 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Nov 5 15:05:24.313739 kernel: ACPI: Use ACPI SPCR as default console: No Nov 5 15:05:24.313744 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Nov 5 15:05:24.313748 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] hotplug Nov 5 15:05:24.313753 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] hotplug Nov 5 15:05:24.313758 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Nov 5 15:05:24.313762 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Nov 5 15:05:24.313767 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Nov 5 15:05:24.313771 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Nov 5 15:05:24.313776 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Nov 5 15:05:24.313780 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Nov 5 15:05:24.313784 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Nov 5 15:05:24.313789 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Nov 5 15:05:24.313793 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] 
hotplug Nov 5 15:05:24.313797 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x1bfffffff] -> [mem 0x00000000-0x1bfffffff] Nov 5 15:05:24.313802 kernel: NODE_DATA(0) allocated [mem 0x1bf7fea00-0x1bf805fff] Nov 5 15:05:24.313807 kernel: Zone ranges: Nov 5 15:05:24.313812 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Nov 5 15:05:24.313818 kernel: DMA32 empty Nov 5 15:05:24.313823 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Nov 5 15:05:24.313827 kernel: Device empty Nov 5 15:05:24.313833 kernel: Movable zone start for each node Nov 5 15:05:24.313838 kernel: Early memory node ranges Nov 5 15:05:24.313842 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Nov 5 15:05:24.313847 kernel: node 0: [mem 0x0000000000824000-0x000000003f38ffff] Nov 5 15:05:24.313852 kernel: node 0: [mem 0x000000003f390000-0x000000003f93ffff] Nov 5 15:05:24.313856 kernel: node 0: [mem 0x000000003f940000-0x000000003f9effff] Nov 5 15:05:24.313861 kernel: node 0: [mem 0x000000003f9f0000-0x000000003fdeffff] Nov 5 15:05:24.313865 kernel: node 0: [mem 0x000000003fdf0000-0x000000003fffffff] Nov 5 15:05:24.313870 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Nov 5 15:05:24.313876 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Nov 5 15:05:24.313880 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Nov 5 15:05:24.313885 kernel: cma: Reserved 16 MiB at 0x000000003ca00000 on node -1 Nov 5 15:05:24.313890 kernel: psci: probing for conduit method from ACPI. Nov 5 15:05:24.313894 kernel: psci: PSCIv1.3 detected in firmware. Nov 5 15:05:24.313899 kernel: psci: Using standard PSCI v0.2 function IDs Nov 5 15:05:24.313904 kernel: psci: MIGRATE_INFO_TYPE not supported. Nov 5 15:05:24.313908 kernel: psci: SMC Calling Convention v1.4 Nov 5 15:05:24.313913 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Nov 5 15:05:24.313917 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Nov 5 15:05:24.313922 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Nov 5 15:05:24.313927 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Nov 5 15:05:24.313932 kernel: pcpu-alloc: [0] 0 [0] 1 Nov 5 15:05:24.313937 kernel: Detected PIPT I-cache on CPU0 Nov 5 15:05:24.313941 kernel: CPU features: detected: Address authentication (architected QARMA5 algorithm) Nov 5 15:05:24.313946 kernel: CPU features: detected: GIC system register CPU interface Nov 5 15:05:24.313951 kernel: CPU features: detected: Spectre-v4 Nov 5 15:05:24.313955 kernel: CPU features: detected: Spectre-BHB Nov 5 15:05:24.313960 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 5 15:05:24.313965 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 5 15:05:24.313969 kernel: CPU features: detected: ARM erratum 2067961 or 2054223 Nov 5 15:05:24.313974 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 5 15:05:24.313980 kernel: alternatives: applying boot alternatives Nov 5 15:05:24.313985 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=15758474ef4cace68fb389c1b75e821ab8f30d9b752a28429e0459793723ea7b Nov 5 15:05:24.313990 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, 
linear) Nov 5 15:05:24.313995 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 5 15:05:24.314000 kernel: Fallback order for Node 0: 0 Nov 5 15:05:24.314004 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048540 Nov 5 15:05:24.314009 kernel: Policy zone: Normal Nov 5 15:05:24.314013 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 5 15:05:24.314018 kernel: software IO TLB: area num 2. Nov 5 15:05:24.314023 kernel: software IO TLB: mapped [mem 0x0000000037300000-0x000000003b300000] (64MB) Nov 5 15:05:24.314027 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 5 15:05:24.314033 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 5 15:05:24.314038 kernel: rcu: RCU event tracing is enabled. Nov 5 15:05:24.314043 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 5 15:05:24.314048 kernel: Trampoline variant of Tasks RCU enabled. Nov 5 15:05:24.314052 kernel: Tracing variant of Tasks RCU enabled. Nov 5 15:05:24.314057 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 5 15:05:24.314062 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 5 15:05:24.314066 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 5 15:05:24.314071 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 5 15:05:24.314076 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 5 15:05:24.314081 kernel: GICv3: 960 SPIs implemented Nov 5 15:05:24.314086 kernel: GICv3: 0 Extended SPIs implemented Nov 5 15:05:24.314091 kernel: Root IRQ handler: gic_handle_irq Nov 5 15:05:24.314095 kernel: GICv3: GICv3 features: 16 PPIs, RSS Nov 5 15:05:24.314100 kernel: GICv3: GICD_CTRL.DS=0, SCR_EL3.FIQ=0 Nov 5 15:05:24.314105 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Nov 5 15:05:24.314109 kernel: ITS: No ITS available, not enabling LPIs Nov 5 15:05:24.314114 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 5 15:05:24.314119 kernel: arch_timer: cp15 timer(s) running at 1000.00MHz (virt). Nov 5 15:05:24.314124 kernel: clocksource: arch_sys_counter: mask: 0x1fffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 5 15:05:24.314128 kernel: sched_clock: 61 bits at 1000MHz, resolution 1ns, wraps every 4398046511103ns Nov 5 15:05:24.314133 kernel: Console: colour dummy device 80x25 Nov 5 15:05:24.314139 kernel: printk: legacy console [tty1] enabled Nov 5 15:05:24.314144 kernel: ACPI: Core revision 20240827 Nov 5 15:05:24.314149 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 2000.00 BogoMIPS (lpj=1000000) Nov 5 15:05:24.314154 kernel: pid_max: default: 32768 minimum: 301 Nov 5 15:05:24.314159 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 5 15:05:24.314164 kernel: landlock: Up and running. Nov 5 15:05:24.314169 kernel: SELinux: Initializing. 
Nov 5 15:05:24.314175 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 5 15:05:24.314180 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 5 15:05:24.314184 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0xa0000e, misc 0x31e1 Nov 5 15:05:24.314190 kernel: Hyper-V: Host Build 10.0.26102.1109-1-0 Nov 5 15:05:24.314198 kernel: Hyper-V: enabling crash_kexec_post_notifiers Nov 5 15:05:24.314204 kernel: rcu: Hierarchical SRCU implementation. Nov 5 15:05:24.314209 kernel: rcu: Max phase no-delay instances is 400. Nov 5 15:05:24.314214 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 5 15:05:24.314219 kernel: Remapping and enabling EFI services. Nov 5 15:05:24.314225 kernel: smp: Bringing up secondary CPUs ... Nov 5 15:05:24.314230 kernel: Detected PIPT I-cache on CPU1 Nov 5 15:05:24.314235 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Nov 5 15:05:24.314240 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd490] Nov 5 15:05:24.314246 kernel: smp: Brought up 1 node, 2 CPUs Nov 5 15:05:24.314251 kernel: SMP: Total of 2 processors activated. Nov 5 15:05:24.314256 kernel: CPU: All CPU(s) started at EL1 Nov 5 15:05:24.314262 kernel: CPU features: detected: 32-bit EL0 Support Nov 5 15:05:24.314267 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Nov 5 15:05:24.314272 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 5 15:05:24.314277 kernel: CPU features: detected: Common not Private translations Nov 5 15:05:24.314283 kernel: CPU features: detected: CRC32 instructions Nov 5 15:05:24.314288 kernel: CPU features: detected: Generic authentication (architected QARMA5 algorithm) Nov 5 15:05:24.314293 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 5 15:05:24.314298 kernel: CPU features: detected: LSE atomic instructions Nov 5 15:05:24.314304 kernel: CPU features: detected: Privileged Access Never Nov 5 15:05:24.314309 kernel: CPU features: detected: Speculation barrier (SB) Nov 5 15:05:24.314314 kernel: CPU features: detected: TLB range maintenance instructions Nov 5 15:05:24.314320 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Nov 5 15:05:24.314325 kernel: CPU features: detected: Scalable Vector Extension Nov 5 15:05:24.314330 kernel: alternatives: applying system-wide alternatives Nov 5 15:05:24.314335 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Nov 5 15:05:24.314340 kernel: SVE: maximum available vector length 16 bytes per vector Nov 5 15:05:24.314345 kernel: SVE: default vector length 16 bytes per vector Nov 5 15:05:24.314351 kernel: Memory: 3979448K/4194160K available (11136K kernel code, 2456K rwdata, 9084K rodata, 12992K init, 1038K bss, 193524K reserved, 16384K cma-reserved) Nov 5 15:05:24.314403 kernel: devtmpfs: initialized Nov 5 15:05:24.314409 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 5 15:05:24.314414 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 5 15:05:24.314419 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Nov 5 15:05:24.314424 kernel: 0 pages in range for non-PLT usage Nov 5 15:05:24.314430 kernel: 515056 pages in range for PLT usage Nov 5 15:05:24.314435 kernel: pinctrl core: initialized pinctrl subsystem Nov 5 15:05:24.314441 kernel: SMBIOS 3.1.0 present. 
Nov 5 15:05:24.314447 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Nov 5 15:05:24.314452 kernel: DMI: Memory slots populated: 2/2 Nov 5 15:05:24.314457 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 5 15:05:24.314462 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 5 15:05:24.314467 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 5 15:05:24.314472 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 5 15:05:24.314478 kernel: audit: initializing netlink subsys (disabled) Nov 5 15:05:24.314484 kernel: audit: type=2000 audit(0.059:1): state=initialized audit_enabled=0 res=1 Nov 5 15:05:24.314489 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 5 15:05:24.314494 kernel: cpuidle: using governor menu Nov 5 15:05:24.314499 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Nov 5 15:05:24.314504 kernel: ASID allocator initialised with 32768 entries Nov 5 15:05:24.314510 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 5 15:05:24.314515 kernel: Serial: AMBA PL011 UART driver Nov 5 15:05:24.314521 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 5 15:05:24.314526 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 5 15:05:24.314531 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 5 15:05:24.314536 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 5 15:05:24.314542 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 5 15:05:24.314547 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 5 15:05:24.314552 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 5 15:05:24.314558 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 5 15:05:24.314563 kernel: ACPI: Added _OSI(Module Device) Nov 5 15:05:24.314568 kernel: ACPI: Added _OSI(Processor Device) Nov 5 15:05:24.314573 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 5 15:05:24.314578 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 5 15:05:24.314584 kernel: ACPI: Interpreter enabled Nov 5 15:05:24.314589 kernel: ACPI: Using GIC for interrupt routing Nov 5 15:05:24.314595 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Nov 5 15:05:24.314600 kernel: printk: legacy console [ttyAMA0] enabled Nov 5 15:05:24.314605 kernel: printk: legacy bootconsole [pl11] disabled Nov 5 15:05:24.314610 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Nov 5 15:05:24.314615 kernel: ACPI: CPU0 has been hot-added Nov 5 15:05:24.314621 kernel: ACPI: CPU1 has been hot-added Nov 5 15:05:24.314626 kernel: iommu: Default domain type: Translated Nov 5 15:05:24.314632 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 5 15:05:24.314637 kernel: efivars: Registered efivars operations Nov 5 15:05:24.314642 kernel: vgaarb: loaded Nov 5 15:05:24.314647 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 5 15:05:24.314652 kernel: VFS: Disk quotas dquot_6.6.0 Nov 5 15:05:24.314657 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 5 15:05:24.314663 kernel: pnp: PnP ACPI init Nov 5 15:05:24.314668 kernel: pnp: PnP ACPI: found 0 devices Nov 5 15:05:24.314674 kernel: NET: Registered PF_INET protocol family Nov 5 15:05:24.314679 kernel: IP idents hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Nov 5 15:05:24.314684 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 5 15:05:24.314689 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 5 15:05:24.314694 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 5 15:05:24.314699 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 5 15:05:24.314706 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 5 15:05:24.314711 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 5 15:05:24.314716 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 5 15:05:24.314721 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 5 15:05:24.314726 kernel: PCI: CLS 0 bytes, default 64 Nov 5 15:05:24.314732 kernel: kvm [1]: HYP mode not available Nov 5 15:05:24.314737 kernel: Initialise system trusted keyrings Nov 5 15:05:24.314742 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 5 15:05:24.314748 kernel: Key type asymmetric registered Nov 5 15:05:24.314753 kernel: Asymmetric key parser 'x509' registered Nov 5 15:05:24.314758 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 5 15:05:24.314764 kernel: io scheduler mq-deadline registered Nov 5 15:05:24.314769 kernel: io scheduler kyber registered Nov 5 15:05:24.314774 kernel: io scheduler bfq registered Nov 5 15:05:24.314779 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 5 15:05:24.314785 kernel: thunder_xcv, ver 1.0 Nov 5 15:05:24.314790 kernel: thunder_bgx, ver 1.0 Nov 5 15:05:24.314795 kernel: nicpf, ver 1.0 Nov 5 15:05:24.314800 kernel: nicvf, ver 1.0 Nov 5 15:05:24.314933 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 5 15:05:24.315004 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-05T15:05:20 UTC (1762355120) Nov 5 15:05:24.315013 kernel: efifb: probing for efifb Nov 5 15:05:24.315018 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Nov 5 15:05:24.315024 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Nov 5 15:05:24.315029 kernel: efifb: scrolling: redraw Nov 5 15:05:24.315034 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 5 15:05:24.315039 kernel: Console: switching to colour frame buffer device 128x48 Nov 5 15:05:24.315044 kernel: fb0: EFI VGA frame buffer device Nov 5 15:05:24.315050 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... 
Nov 5 15:05:24.315055 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 5 15:05:24.315061 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Nov 5 15:05:24.315066 kernel: watchdog: NMI not fully supported Nov 5 15:05:24.315071 kernel: watchdog: Hard watchdog permanently disabled Nov 5 15:05:24.315076 kernel: NET: Registered PF_INET6 protocol family Nov 5 15:05:24.315081 kernel: Segment Routing with IPv6 Nov 5 15:05:24.315087 kernel: In-situ OAM (IOAM) with IPv6 Nov 5 15:05:24.315092 kernel: NET: Registered PF_PACKET protocol family Nov 5 15:05:24.315097 kernel: Key type dns_resolver registered Nov 5 15:05:24.315103 kernel: registered taskstats version 1 Nov 5 15:05:24.315108 kernel: Loading compiled-in X.509 certificates Nov 5 15:05:24.315113 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 4b3babb46eb583bd8b0310732885d24e60ea58c5' Nov 5 15:05:24.315118 kernel: Demotion targets for Node 0: null Nov 5 15:05:24.315124 kernel: Key type .fscrypt registered Nov 5 15:05:24.315129 kernel: Key type fscrypt-provisioning registered Nov 5 15:05:24.315134 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 5 15:05:24.315140 kernel: ima: Allocated hash algorithm: sha1 Nov 5 15:05:24.315145 kernel: ima: No architecture policies found Nov 5 15:05:24.315150 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 5 15:05:24.315155 kernel: clk: Disabling unused clocks Nov 5 15:05:24.315160 kernel: PM: genpd: Disabling unused power domains Nov 5 15:05:24.315166 kernel: Freeing unused kernel memory: 12992K Nov 5 15:05:24.315171 kernel: Run /init as init process Nov 5 15:05:24.315176 kernel: with arguments: Nov 5 15:05:24.315181 kernel: /init Nov 5 15:05:24.315186 kernel: with environment: Nov 5 15:05:24.315191 kernel: HOME=/ Nov 5 15:05:24.315197 kernel: TERM=linux Nov 5 15:05:24.315202 kernel: hv_vmbus: Vmbus version:5.3 Nov 5 15:05:24.315207 kernel: hv_vmbus: registering driver hid_hyperv Nov 5 15:05:24.315213 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Nov 5 15:05:24.315294 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 5 15:05:24.315302 kernel: SCSI subsystem initialized Nov 5 15:05:24.315307 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 5 15:05:24.315314 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Nov 5 15:05:24.315319 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 5 15:05:24.315324 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 5 15:05:24.315330 kernel: PTP clock support registered Nov 5 15:05:24.315335 kernel: hv_utils: Registering HyperV Utility Driver Nov 5 15:05:24.315340 kernel: hv_vmbus: registering driver hv_utils Nov 5 15:05:24.315345 kernel: hv_utils: Heartbeat IC version 3.0 Nov 5 15:05:24.315351 kernel: hv_utils: Shutdown IC version 3.2 Nov 5 15:05:24.315368 kernel: hv_utils: TimeSync IC version 4.0 Nov 5 15:05:24.315373 kernel: hv_vmbus: registering driver hv_storvsc Nov 5 15:05:24.315470 kernel: scsi host0: storvsc_host_t Nov 5 15:05:24.315548 kernel: scsi host1: storvsc_host_t Nov 5 15:05:24.315632 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Nov 5 15:05:24.315717 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Nov 5 15:05:24.315790 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Nov 5 15:05:24.315863 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Nov 5 15:05:24.315935 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 5 15:05:24.316007 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Nov 5 15:05:24.316079 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Nov 5 15:05:24.316159 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#253 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Nov 5 15:05:24.316226 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#196 cmd 0x5a status: scsi 0x2 srb 0x86 hv 0xc0000001 Nov 5 15:05:24.316233 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 5 15:05:24.316304 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 5 15:05:24.316385 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 5 15:05:24.316394 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 5 15:05:24.316466 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 5 15:05:24.316472 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 5 15:05:24.316478 kernel: device-mapper: uevent: version 1.0.3 Nov 5 15:05:24.316483 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 5 15:05:24.316489 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Nov 5 15:05:24.316494 kernel: raid6: neonx8 gen() 18528 MB/s Nov 5 15:05:24.316500 kernel: raid6: neonx4 gen() 18561 MB/s Nov 5 15:05:24.316505 kernel: raid6: neonx2 gen() 17071 MB/s Nov 5 15:05:24.316511 kernel: raid6: neonx1 gen() 14985 MB/s Nov 5 15:05:24.316516 kernel: raid6: int64x8 gen() 10549 MB/s Nov 5 15:05:24.316521 kernel: raid6: int64x4 gen() 10614 MB/s Nov 5 15:05:24.316526 kernel: raid6: int64x2 gen() 9000 MB/s Nov 5 15:05:24.316531 kernel: raid6: int64x1 gen() 7050 MB/s Nov 5 15:05:24.316537 kernel: raid6: using algorithm neonx4 gen() 18561 MB/s Nov 5 15:05:24.316542 kernel: raid6: .... 
xor() 15148 MB/s, rmw enabled Nov 5 15:05:24.316547 kernel: raid6: using neon recovery algorithm Nov 5 15:05:24.316553 kernel: xor: measuring software checksum speed Nov 5 15:05:24.316558 kernel: 8regs : 28635 MB/sec Nov 5 15:05:24.316563 kernel: 32regs : 28814 MB/sec Nov 5 15:05:24.316568 kernel: arm64_neon : 37607 MB/sec Nov 5 15:05:24.316573 kernel: xor: using function: arm64_neon (37607 MB/sec) Nov 5 15:05:24.316579 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 5 15:05:24.316585 kernel: BTRFS: device fsid d8f84a83-fd8b-4c0e-831a-0d7c5ff234be devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (383) Nov 5 15:05:24.316590 kernel: BTRFS info (device dm-0): first mount of filesystem d8f84a83-fd8b-4c0e-831a-0d7c5ff234be Nov 5 15:05:24.316596 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 5 15:05:24.316601 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 5 15:05:24.316606 kernel: BTRFS info (device dm-0): enabling free space tree Nov 5 15:05:24.316612 kernel: loop: module loaded Nov 5 15:05:24.316618 kernel: loop0: detected capacity change from 0 to 91464 Nov 5 15:05:24.316623 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 5 15:05:24.316629 systemd[1]: Successfully made /usr/ read-only. Nov 5 15:05:24.316637 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 15:05:24.316643 systemd[1]: Detected virtualization microsoft. Nov 5 15:05:24.316649 systemd[1]: Detected architecture arm64. Nov 5 15:05:24.316655 systemd[1]: Running in initrd. Nov 5 15:05:24.316660 systemd[1]: No hostname configured, using default hostname. Nov 5 15:05:24.316666 systemd[1]: Hostname set to . Nov 5 15:05:24.316677 systemd[1]: Initializing machine ID from random generator. Nov 5 15:05:24.316682 systemd[1]: Queued start job for default target initrd.target. Nov 5 15:05:24.316688 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 15:05:24.316694 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:05:24.316700 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:05:24.316706 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 5 15:05:24.316712 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 15:05:24.316718 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 5 15:05:24.316725 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 5 15:05:24.316730 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:05:24.316736 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:05:24.316742 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 5 15:05:24.316747 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:05:24.316753 systemd[1]: Reached target slices.target - Slice Units. Nov 5 15:05:24.316758 systemd[1]: Reached target swap.target - Swaps. 
Nov 5 15:05:24.316765 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:05:24.316770 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 15:05:24.316776 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 15:05:24.316782 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 5 15:05:24.316787 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 5 15:05:24.316793 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:05:24.316799 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 15:05:24.316809 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:05:24.316816 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:05:24.316822 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 5 15:05:24.316828 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 5 15:05:24.316833 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 15:05:24.316840 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 5 15:05:24.316846 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 5 15:05:24.316852 systemd[1]: Starting systemd-fsck-usr.service... Nov 5 15:05:24.316858 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 15:05:24.316863 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 15:05:24.316882 systemd-journald[520]: Collecting audit messages is disabled. Nov 5 15:05:24.316898 systemd-journald[520]: Journal started Nov 5 15:05:24.316913 systemd-journald[520]: Runtime Journal (/run/log/journal/903d36daf0e24e909e7e859287a9c8c1) is 8M, max 78.3M, 70.3M free. Nov 5 15:05:24.330813 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:05:24.344818 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 15:05:24.340201 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 5 15:05:24.345164 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:05:24.357234 systemd[1]: Finished systemd-fsck-usr.service. Nov 5 15:05:24.367561 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 15:05:24.393376 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 5 15:05:24.395254 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 15:05:24.470096 systemd-modules-load[523]: Inserted module 'br_netfilter' Nov 5 15:05:24.474609 kernel: Bridge firewalling registered Nov 5 15:05:24.470724 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 15:05:24.482135 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:05:24.486426 systemd-tmpfiles[533]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 5 15:05:24.492786 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Nov 5 15:05:24.515212 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 15:05:24.530013 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:05:24.543980 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:05:24.552918 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:05:24.562616 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:05:24.569705 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 5 15:05:24.587167 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 15:05:24.664176 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 15:05:24.708018 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 5 15:05:24.758043 systemd-resolved[550]: Positive Trust Anchors: Nov 5 15:05:24.758055 systemd-resolved[550]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 15:05:24.758057 systemd-resolved[550]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 15:05:24.758076 systemd-resolved[550]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 15:05:24.793792 systemd-resolved[550]: Defaulting to hostname 'linux'. Nov 5 15:05:24.811770 dracut-cmdline[562]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=15758474ef4cace68fb389c1b75e821ab8f30d9b752a28429e0459793723ea7b Nov 5 15:05:24.794549 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 15:05:24.808090 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:05:24.974385 kernel: Loading iSCSI transport class v2.0-870. Nov 5 15:05:25.018387 kernel: iscsi: registered transport (tcp) Nov 5 15:05:25.047750 kernel: iscsi: registered transport (qla4xxx) Nov 5 15:05:25.047771 kernel: QLogic iSCSI HBA Driver Nov 5 15:05:25.115397 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 15:05:25.135568 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:05:25.141922 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 15:05:25.188018 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 5 15:05:25.198075 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Nov 5 15:05:25.203915 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 5 15:05:25.242402 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 5 15:05:25.252649 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:05:25.332032 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 15:05:25.347447 systemd-udevd[799]: Using default interface naming scheme 'v257'. Nov 5 15:05:25.356977 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:05:25.369125 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 5 15:05:25.387314 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:05:25.400721 dracut-pre-trigger[899]: rd.md=0: removing MD RAID activation Nov 5 15:05:25.428409 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 15:05:25.430803 systemd-networkd[900]: lo: Link UP Nov 5 15:05:25.430806 systemd-networkd[900]: lo: Gained carrier Nov 5 15:05:25.437757 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:05:25.444076 systemd[1]: Reached target network.target - Network. Nov 5 15:05:25.450731 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 15:05:25.502017 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:05:25.515214 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 5 15:05:25.589402 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#95 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 5 15:05:25.596327 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:05:25.596610 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:05:25.606022 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:05:25.622464 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:05:25.638748 kernel: hv_vmbus: registering driver hv_netvsc Nov 5 15:05:25.654456 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:05:25.706889 kernel: hv_netvsc 002248b4-0cfb-0022-48b4-0cfb002248b4 eth0: VF slot 1 added Nov 5 15:05:25.708616 systemd-networkd[900]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:05:25.708625 systemd-networkd[900]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 5 15:05:25.709209 systemd-networkd[900]: eth0: Link UP Nov 5 15:05:25.709276 systemd-networkd[900]: eth0: Gained carrier Nov 5 15:05:25.709284 systemd-networkd[900]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:05:25.747860 kernel: hv_vmbus: registering driver hv_pci Nov 5 15:05:25.747900 kernel: hv_pci e784b7c3-915c-4296-8bfb-fff30af8de22: PCI VMBus probing: Using version 0x10004 Nov 5 15:05:25.754407 systemd-networkd[900]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Nov 5 15:05:25.777214 kernel: hv_pci e784b7c3-915c-4296-8bfb-fff30af8de22: PCI host bridge to bus 915c:00 Nov 5 15:05:25.777528 kernel: pci_bus 915c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Nov 5 15:05:25.777657 kernel: pci_bus 915c:00: No busn resource found for root bus, will use [bus 00-ff] Nov 5 15:05:25.777733 kernel: pci 915c:00:02.0: [15b3:101a] type 00 class 0x020000 PCIe Endpoint Nov 5 15:05:25.777779 kernel: pci 915c:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref] Nov 5 15:05:25.782401 kernel: pci 915c:00:02.0: enabling Extended Tags Nov 5 15:05:25.797499 kernel: pci 915c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 915c:00:02.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link) Nov 5 15:05:25.806872 kernel: pci_bus 915c:00: busn_res: [bus 00-ff] end is updated to 00 Nov 5 15:05:25.807034 kernel: pci 915c:00:02.0: BAR 0 [mem 0xfc0000000-0xfc00fffff 64bit pref]: assigned Nov 5 15:05:26.066750 kernel: mlx5_core 915c:00:02.0: enabling device (0000 -> 0002) Nov 5 15:05:26.075886 kernel: mlx5_core 915c:00:02.0: PTM is not supported by PCIe Nov 5 15:05:26.076066 kernel: mlx5_core 915c:00:02.0: firmware version: 16.30.5006 Nov 5 15:05:26.120707 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Nov 5 15:05:26.134593 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 5 15:05:26.216145 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Nov 5 15:05:26.235851 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Nov 5 15:05:26.270913 kernel: hv_netvsc 002248b4-0cfb-0022-48b4-0cfb002248b4 eth0: VF registering: eth1 Nov 5 15:05:26.271116 kernel: mlx5_core 915c:00:02.0 eth1: joined to eth0 Nov 5 15:05:26.277432 kernel: mlx5_core 915c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Nov 5 15:05:26.289190 kernel: mlx5_core 915c:00:02.0 enP37212s1: renamed from eth1 Nov 5 15:05:26.289161 systemd-networkd[900]: eth1: Interface name change detected, renamed to enP37212s1. Nov 5 15:05:26.300298 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Nov 5 15:05:26.420493 kernel: mlx5_core 915c:00:02.0 enP37212s1: Link up Nov 5 15:05:26.439406 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 5 15:05:26.464581 systemd-networkd[900]: enP37212s1: Link UP Nov 5 15:05:26.467773 kernel: hv_netvsc 002248b4-0cfb-0022-48b4-0cfb002248b4 eth0: Data path switched to VF: enP37212s1 Nov 5 15:05:26.471186 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 15:05:26.476125 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:05:26.485925 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Nov 5 15:05:26.499536 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 5 15:05:26.526402 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 5 15:05:26.712811 systemd-networkd[900]: enP37212s1: Gained carrier Nov 5 15:05:27.397233 disk-uuid[1012]: Warning: The kernel is still using the old partition table. Nov 5 15:05:27.397233 disk-uuid[1012]: The new table will be used at the next reboot or after you Nov 5 15:05:27.397233 disk-uuid[1012]: run partprobe(8) or kpartx(8) Nov 5 15:05:27.397233 disk-uuid[1012]: The operation has completed successfully. Nov 5 15:05:27.402874 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 5 15:05:27.402973 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 5 15:05:27.413729 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 5 15:05:27.485245 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1173) Nov 5 15:05:27.485301 kernel: BTRFS info (device sda6): first mount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804 Nov 5 15:05:27.489731 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 5 15:05:27.548376 kernel: BTRFS info (device sda6): turning on async discard Nov 5 15:05:27.548447 kernel: BTRFS info (device sda6): enabling free space tree Nov 5 15:05:27.548601 systemd-networkd[900]: eth0: Gained IPv6LL Nov 5 15:05:27.561379 kernel: BTRFS info (device sda6): last unmount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804 Nov 5 15:05:27.561777 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 5 15:05:27.566938 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 5 15:05:28.910141 ignition[1192]: Ignition 2.22.0 Nov 5 15:05:28.912772 ignition[1192]: Stage: fetch-offline Nov 5 15:05:28.912908 ignition[1192]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:05:28.916218 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 15:05:28.912916 ignition[1192]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 5 15:05:28.928502 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 5 15:05:28.912992 ignition[1192]: parsed url from cmdline: "" Nov 5 15:05:28.912995 ignition[1192]: no config URL provided Nov 5 15:05:28.912998 ignition[1192]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 15:05:28.913004 ignition[1192]: no config at "/usr/lib/ignition/user.ign" Nov 5 15:05:28.913008 ignition[1192]: failed to fetch config: resource requires networking Nov 5 15:05:28.913120 ignition[1192]: Ignition finished successfully Nov 5 15:05:28.957453 ignition[1200]: Ignition 2.22.0 Nov 5 15:05:28.957458 ignition[1200]: Stage: fetch Nov 5 15:05:28.957624 ignition[1200]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:05:28.957630 ignition[1200]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 5 15:05:28.957697 ignition[1200]: parsed url from cmdline: "" Nov 5 15:05:28.957700 ignition[1200]: no config URL provided Nov 5 15:05:28.957703 ignition[1200]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 15:05:28.957707 ignition[1200]: no config at "/usr/lib/ignition/user.ign" Nov 5 15:05:28.957721 ignition[1200]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 5 15:05:29.024903 ignition[1200]: GET result: OK Nov 5 15:05:29.024966 ignition[1200]: config has been read from IMDS userdata Nov 5 15:05:29.024986 ignition[1200]: parsing config with SHA512: d291308e2de7119314f141d54adc52d65336d5fa092c099722b3f746ea090da2dca7ef1795d290441eca28bbffd15b6d510c38c18dd6f88444da068b5b53d938 Nov 5 15:05:29.028304 unknown[1200]: fetched base config from "system" Nov 5 15:05:29.028623 ignition[1200]: fetch: fetch complete Nov 5 15:05:29.028309 unknown[1200]: fetched base config from "system" Nov 5 15:05:29.028627 ignition[1200]: fetch: fetch passed Nov 5 15:05:29.028322 unknown[1200]: fetched user config from "azure" Nov 5 15:05:29.028676 ignition[1200]: Ignition finished successfully Nov 5 15:05:29.030438 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 5 15:05:29.036151 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 5 15:05:29.070722 ignition[1206]: Ignition 2.22.0 Nov 5 15:05:29.070736 ignition[1206]: Stage: kargs Nov 5 15:05:29.070898 ignition[1206]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:05:29.077075 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 5 15:05:29.070904 ignition[1206]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 5 15:05:29.083270 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 5 15:05:29.071384 ignition[1206]: kargs: kargs passed Nov 5 15:05:29.071418 ignition[1206]: Ignition finished successfully Nov 5 15:05:29.113321 ignition[1213]: Ignition 2.22.0 Nov 5 15:05:29.113332 ignition[1213]: Stage: disks Nov 5 15:05:29.115563 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 5 15:05:29.113506 ignition[1213]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:05:29.121671 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 5 15:05:29.113515 ignition[1213]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 5 15:05:29.128433 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 5 15:05:29.114051 ignition[1213]: disks: disks passed Nov 5 15:05:29.136921 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 15:05:29.114091 ignition[1213]: Ignition finished successfully Nov 5 15:05:29.145086 systemd[1]: Reached target sysinit.target - System Initialization. 
Nov 5 15:05:29.152990 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:05:29.162650 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 5 15:05:29.312403 systemd-fsck[1221]: ROOT: clean, 15/6361680 files, 408771/6359552 blocks Nov 5 15:05:29.321852 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 5 15:05:29.332789 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 5 15:05:31.440374 kernel: EXT4-fs (sda9): mounted filesystem 67ab558f-e1dc-496b-b18a-e9709809a3c4 r/w with ordered data mode. Quota mode: none. Nov 5 15:05:31.440376 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 5 15:05:31.444450 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 5 15:05:31.498155 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 15:05:31.520761 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 5 15:05:31.525427 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 5 15:05:31.537425 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 5 15:05:31.537456 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 15:05:31.549176 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 5 15:05:31.572154 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 5 15:05:31.589371 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1235) Nov 5 15:05:31.602610 kernel: BTRFS info (device sda6): first mount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804 Nov 5 15:05:31.602642 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 5 15:05:31.613040 kernel: BTRFS info (device sda6): turning on async discard Nov 5 15:05:31.613070 kernel: BTRFS info (device sda6): enabling free space tree Nov 5 15:05:31.614106 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 15:05:32.236707 coreos-metadata[1237]: Nov 05 15:05:32.236 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 5 15:05:32.244890 coreos-metadata[1237]: Nov 05 15:05:32.244 INFO Fetch successful Nov 5 15:05:32.249202 coreos-metadata[1237]: Nov 05 15:05:32.248 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 5 15:05:32.258426 coreos-metadata[1237]: Nov 05 15:05:32.258 INFO Fetch successful Nov 5 15:05:32.275416 coreos-metadata[1237]: Nov 05 15:05:32.275 INFO wrote hostname ci-4487.0.1-a-05c7a88322 to /sysroot/etc/hostname Nov 5 15:05:32.283656 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 5 15:05:32.523804 initrd-setup-root[1265]: cut: /sysroot/etc/passwd: No such file or directory Nov 5 15:05:32.593959 initrd-setup-root[1272]: cut: /sysroot/etc/group: No such file or directory Nov 5 15:05:32.615292 initrd-setup-root[1279]: cut: /sysroot/etc/shadow: No such file or directory Nov 5 15:05:32.622294 initrd-setup-root[1286]: cut: /sysroot/etc/gshadow: No such file or directory Nov 5 15:05:33.930207 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 5 15:05:33.936216 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 5 15:05:33.950698 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Nov 5 15:05:33.978468 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 5 15:05:33.988700 kernel: BTRFS info (device sda6): last unmount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804 Nov 5 15:05:34.005456 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 5 15:05:34.022172 ignition[1356]: INFO : Ignition 2.22.0 Nov 5 15:05:34.022172 ignition[1356]: INFO : Stage: mount Nov 5 15:05:34.033513 ignition[1356]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:05:34.033513 ignition[1356]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 5 15:05:34.033513 ignition[1356]: INFO : mount: mount passed Nov 5 15:05:34.033513 ignition[1356]: INFO : Ignition finished successfully Nov 5 15:05:34.025724 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 5 15:05:34.032895 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 5 15:05:34.063042 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 15:05:34.085373 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (1365) Nov 5 15:05:34.096155 kernel: BTRFS info (device sda6): first mount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804 Nov 5 15:05:34.096190 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 5 15:05:34.105709 kernel: BTRFS info (device sda6): turning on async discard Nov 5 15:05:34.105739 kernel: BTRFS info (device sda6): enabling free space tree Nov 5 15:05:34.107198 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 15:05:34.138405 ignition[1382]: INFO : Ignition 2.22.0 Nov 5 15:05:34.138405 ignition[1382]: INFO : Stage: files Nov 5 15:05:34.144489 ignition[1382]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:05:34.144489 ignition[1382]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 5 15:05:34.144489 ignition[1382]: DEBUG : files: compiled without relabeling support, skipping Nov 5 15:05:34.158154 ignition[1382]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 5 15:05:34.158154 ignition[1382]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 5 15:05:34.262024 ignition[1382]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 5 15:05:34.267738 ignition[1382]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 5 15:05:34.267738 ignition[1382]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 5 15:05:34.262378 unknown[1382]: wrote ssh authorized keys file for user: core Nov 5 15:05:34.305885 ignition[1382]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 5 15:05:34.313952 ignition[1382]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Nov 5 15:05:34.337978 ignition[1382]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 5 15:05:34.474888 ignition[1382]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 5 15:05:34.474888 ignition[1382]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 5 15:05:34.490596 ignition[1382]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 5 
15:05:34.490596 ignition[1382]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 5 15:05:34.490596 ignition[1382]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 5 15:05:34.490596 ignition[1382]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 15:05:34.490596 ignition[1382]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 15:05:34.490596 ignition[1382]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 15:05:34.490596 ignition[1382]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 15:05:34.540540 ignition[1382]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 15:05:34.540540 ignition[1382]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 15:05:34.540540 ignition[1382]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 5 15:05:34.540540 ignition[1382]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 5 15:05:34.540540 ignition[1382]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 5 15:05:34.540540 ignition[1382]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Nov 5 15:05:37.881923 ignition[1382]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 5 15:05:38.103378 ignition[1382]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 5 15:05:38.103378 ignition[1382]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 5 15:05:38.201821 ignition[1382]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 15:05:38.210693 ignition[1382]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 15:05:38.210693 ignition[1382]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 5 15:05:38.210693 ignition[1382]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 5 15:05:38.210693 ignition[1382]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 5 15:05:38.210693 ignition[1382]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 5 15:05:38.210693 ignition[1382]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 5 15:05:38.210693 ignition[1382]: INFO : files: files passed Nov 5 15:05:38.210693 ignition[1382]: INFO : Ignition finished 
successfully Nov 5 15:05:38.210863 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 5 15:05:38.224203 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 5 15:05:38.246029 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 5 15:05:38.300588 initrd-setup-root-after-ignition[1415]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:05:38.264916 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 5 15:05:38.320832 initrd-setup-root-after-ignition[1412]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:05:38.320832 initrd-setup-root-after-ignition[1412]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:05:38.264988 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 5 15:05:38.290071 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 15:05:38.295676 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 5 15:05:38.305888 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 5 15:05:38.345957 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 5 15:05:38.346043 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 5 15:05:38.355028 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 5 15:05:38.364344 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 5 15:05:38.372092 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 5 15:05:38.372775 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 5 15:05:38.405910 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 15:05:38.416738 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 5 15:05:38.434013 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 15:05:38.434171 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:05:38.443957 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:05:38.453373 systemd[1]: Stopped target timers.target - Timer Units. Nov 5 15:05:38.461638 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 5 15:05:38.461774 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 15:05:38.474112 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 5 15:05:38.483414 systemd[1]: Stopped target basic.target - Basic System. Nov 5 15:05:38.490989 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 5 15:05:38.498970 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 15:05:38.507433 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 5 15:05:38.516797 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 5 15:05:38.526336 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 5 15:05:38.535237 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 15:05:38.544156 systemd[1]: Stopped target sysinit.target - System Initialization. 
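The files-stage entries above (the "core" user and its SSH keys, the Helm tarball fetched from get.helm.sh, the update.conf file, the kubernetes.raw sysext link, and the preset-enabled prepare-helm.service unit) are all driven by the provisioning config handed to Ignition; the config itself is never echoed into the journal. A minimal Butane sketch that would produce a files stage of roughly this shape could look like the following (hypothetical reconstruction with placeholder key and unit contents; the nginx/nfs manifests and the kubernetes image download are left out for brevity, and the node's real config is not part of this log):

    # Hypothetical Butane sketch; the node's real provisioning config is not in this log.
    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA...placeholder
    storage:
      files:
        - path: /opt/helm-v3.17.3-linux-arm64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz
        - path: /etc/flatcar/update.conf
          contents:
            inline: |
              REBOOT_STRATEGY=off
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw
          hard: false
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
          contents: |
            # unit body omitted here; a sketch appears after the prepare-helm entries below

Transpiled with butane, a config like this yields the Ignition JSON that drives the op(1) through op(e) sequence logged above.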
Nov 5 15:05:38.553111 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 5 15:05:38.561947 systemd[1]: Stopped target swap.target - Swaps. Nov 5 15:05:38.569569 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 5 15:05:38.569732 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 5 15:05:38.582212 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:05:38.591608 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:05:38.599990 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 5 15:05:38.603380 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:05:38.608943 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 5 15:05:38.609065 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 5 15:05:38.622903 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 5 15:05:38.623041 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 15:05:38.632236 systemd[1]: ignition-files.service: Deactivated successfully. Nov 5 15:05:38.632335 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 5 15:05:38.639941 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 5 15:05:38.640051 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 5 15:05:38.650472 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 5 15:05:38.663392 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 5 15:05:38.714178 ignition[1437]: INFO : Ignition 2.22.0 Nov 5 15:05:38.714178 ignition[1437]: INFO : Stage: umount Nov 5 15:05:38.714178 ignition[1437]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:05:38.714178 ignition[1437]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 5 15:05:38.714178 ignition[1437]: INFO : umount: umount passed Nov 5 15:05:38.714178 ignition[1437]: INFO : Ignition finished successfully Nov 5 15:05:38.663551 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:05:38.675457 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 5 15:05:38.696372 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 5 15:05:38.696555 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:05:38.709407 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 5 15:05:38.709529 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:05:38.719098 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 5 15:05:38.719202 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 15:05:38.734776 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 5 15:05:38.734859 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 5 15:05:38.745473 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 5 15:05:38.747376 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 5 15:05:38.754124 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 5 15:05:38.754162 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 5 15:05:38.763274 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Nov 5 15:05:38.763316 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 5 15:05:38.771008 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 5 15:05:38.771037 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 5 15:05:38.780129 systemd[1]: Stopped target network.target - Network. Nov 5 15:05:38.789651 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 5 15:05:38.789705 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 15:05:38.799591 systemd[1]: Stopped target paths.target - Path Units. Nov 5 15:05:38.808095 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 5 15:05:38.808391 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:05:38.820967 systemd[1]: Stopped target slices.target - Slice Units. Nov 5 15:05:38.828472 systemd[1]: Stopped target sockets.target - Socket Units. Nov 5 15:05:38.836069 systemd[1]: iscsid.socket: Deactivated successfully. Nov 5 15:05:38.836112 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 15:05:38.844017 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 5 15:05:38.844037 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 15:05:38.851892 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 5 15:05:38.851935 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 5 15:05:38.859731 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 5 15:05:38.859758 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 5 15:05:38.867707 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 5 15:05:38.875300 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 5 15:05:38.883846 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 5 15:05:38.884288 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 5 15:05:38.884383 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 5 15:05:38.891260 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 5 15:05:38.891349 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 5 15:05:38.901130 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 5 15:05:38.901244 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 5 15:05:38.913251 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 5 15:05:39.079030 kernel: hv_netvsc 002248b4-0cfb-0022-48b4-0cfb002248b4 eth0: Data path switched from VF: enP37212s1 Nov 5 15:05:38.913344 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 5 15:05:38.926771 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 5 15:05:38.934312 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 5 15:05:38.934345 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:05:38.944924 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 5 15:05:38.957769 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 5 15:05:38.957828 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 15:05:38.965823 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 15:05:38.965860 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Nov 5 15:05:38.973640 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 5 15:05:38.973667 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 5 15:05:38.981668 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:05:39.008224 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 5 15:05:39.013739 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:05:39.024062 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 5 15:05:39.024129 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 5 15:05:39.031822 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 5 15:05:39.031850 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:05:39.039919 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 5 15:05:39.039951 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 5 15:05:39.056312 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 5 15:05:39.056370 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 5 15:05:39.075012 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 5 15:05:39.075059 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 15:05:39.093505 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 5 15:05:39.100395 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 5 15:05:39.100454 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:05:39.109745 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 5 15:05:39.109795 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:05:39.119638 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:05:39.119684 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:05:39.131054 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 5 15:05:39.131133 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 5 15:05:39.174147 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 5 15:05:39.174268 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 5 15:05:39.183903 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 5 15:05:39.193227 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 5 15:05:39.237824 systemd[1]: Switching root. Nov 5 15:05:39.348571 systemd-journald[520]: Journal stopped Nov 5 15:05:47.563162 systemd-journald[520]: Received SIGTERM from PID 1 (systemd). 
Nov 5 15:05:47.563182 kernel: SELinux: policy capability network_peer_controls=1 Nov 5 15:05:47.563191 kernel: SELinux: policy capability open_perms=1 Nov 5 15:05:47.563198 kernel: SELinux: policy capability extended_socket_class=1 Nov 5 15:05:47.563204 kernel: SELinux: policy capability always_check_network=0 Nov 5 15:05:47.563210 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 5 15:05:47.563216 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 5 15:05:47.563222 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 5 15:05:47.563227 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 5 15:05:47.563234 kernel: SELinux: policy capability userspace_initial_context=0 Nov 5 15:05:47.563240 kernel: audit: type=1403 audit(1762355140.891:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 5 15:05:47.563246 systemd[1]: Successfully loaded SELinux policy in 216.286ms. Nov 5 15:05:47.563253 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.594ms. Nov 5 15:05:47.563260 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 15:05:47.563268 systemd[1]: Detected virtualization microsoft. Nov 5 15:05:47.563275 systemd[1]: Detected architecture arm64. Nov 5 15:05:47.563281 systemd[1]: Detected first boot. Nov 5 15:05:47.563288 systemd[1]: Hostname set to . Nov 5 15:05:47.563294 systemd[1]: Initializing machine ID from random generator. Nov 5 15:05:47.563301 zram_generator::config[1479]: No configuration found. Nov 5 15:05:47.563308 kernel: NET: Registered PF_VSOCK protocol family Nov 5 15:05:47.563314 systemd[1]: Populated /etc with preset unit settings. Nov 5 15:05:47.563321 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 5 15:05:47.563327 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 5 15:05:47.563333 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 5 15:05:47.563341 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 5 15:05:47.563348 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 5 15:05:47.563367 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 5 15:05:47.563374 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 5 15:05:47.563381 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 5 15:05:47.563389 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 5 15:05:47.563396 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 5 15:05:47.563403 systemd[1]: Created slice user.slice - User and Session Slice. Nov 5 15:05:47.563409 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:05:47.563416 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:05:47.563423 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 5 15:05:47.563429 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Nov 5 15:05:47.563437 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 5 15:05:47.563444 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 15:05:47.563451 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 5 15:05:47.563459 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:05:47.563466 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:05:47.563473 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 5 15:05:47.563480 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 5 15:05:47.563487 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 5 15:05:47.563493 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 5 15:05:47.563500 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:05:47.563506 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 15:05:47.563514 systemd[1]: Reached target slices.target - Slice Units. Nov 5 15:05:47.563520 systemd[1]: Reached target swap.target - Swaps. Nov 5 15:05:47.563528 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 5 15:05:47.563534 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 5 15:05:47.563541 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 5 15:05:47.563547 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:05:47.563555 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 15:05:47.563562 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:05:47.563568 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 5 15:05:47.563575 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 5 15:05:47.563581 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 5 15:05:47.563588 systemd[1]: Mounting media.mount - External Media Directory... Nov 5 15:05:47.563596 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 5 15:05:47.563603 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 5 15:05:47.563610 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 5 15:05:47.563617 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 5 15:05:47.563624 systemd[1]: Reached target machines.target - Containers. Nov 5 15:05:47.563630 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 5 15:05:47.563637 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:05:47.563645 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 15:05:47.563651 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 5 15:05:47.563658 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:05:47.563665 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Nov 5 15:05:47.563671 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:05:47.563678 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 5 15:05:47.563685 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 15:05:47.563693 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 5 15:05:47.563699 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 5 15:05:47.563706 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 5 15:05:47.563712 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 5 15:05:47.563719 systemd[1]: Stopped systemd-fsck-usr.service. Nov 5 15:05:47.563726 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:05:47.563734 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 15:05:47.563740 kernel: fuse: init (API version 7.41) Nov 5 15:05:47.563746 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 15:05:47.563753 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 15:05:47.563760 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 5 15:05:47.563767 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 5 15:05:47.563773 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 15:05:47.563781 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 5 15:05:47.563788 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 5 15:05:47.563794 systemd[1]: Mounted media.mount - External Media Directory. Nov 5 15:05:47.563803 kernel: ACPI: bus type drm_connector registered Nov 5 15:05:47.563809 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 5 15:05:47.563815 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 5 15:05:47.563822 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 5 15:05:47.563830 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 5 15:05:47.563836 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:05:47.563843 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 5 15:05:47.563849 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 5 15:05:47.563868 systemd-journald[1576]: Collecting audit messages is disabled. Nov 5 15:05:47.563883 systemd-journald[1576]: Journal started Nov 5 15:05:47.563898 systemd-journald[1576]: Runtime Journal (/run/log/journal/23a184ff3b764389a5589a254b5954af) is 8M, max 78.3M, 70.3M free. Nov 5 15:05:46.600538 systemd[1]: Queued start job for default target multi-user.target. Nov 5 15:05:46.607876 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 5 15:05:46.608313 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 5 15:05:46.608589 systemd[1]: systemd-journald.service: Consumed 2.311s CPU time. Nov 5 15:05:47.576546 systemd[1]: Started systemd-journald.service - Journal Service. 
Nov 5 15:05:47.577415 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:05:47.577571 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:05:47.582076 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 15:05:47.582203 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 15:05:47.586742 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:05:47.586866 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:05:47.593732 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 5 15:05:47.593856 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 5 15:05:47.599623 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:05:47.599743 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:05:47.604547 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 15:05:47.609813 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:05:47.615944 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 5 15:05:47.621530 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 5 15:05:47.627315 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:05:47.639865 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 15:05:47.644796 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 5 15:05:47.650973 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 5 15:05:47.663127 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 5 15:05:47.667959 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 5 15:05:47.667987 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 15:05:47.672905 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 5 15:05:47.678208 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:05:47.696234 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 5 15:05:47.709880 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 5 15:05:47.714586 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 15:05:47.715214 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 5 15:05:47.721648 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:05:47.722835 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:05:47.730221 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 5 15:05:47.736293 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 5 15:05:47.741681 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 5 15:05:47.747408 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
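The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop pairs above are instances of systemd's modprobe@.service template, instantiated once per module name. Paraphrased from the stock template (not copied from this image; details may differ slightly), the unit is essentially:

    # Paraphrase of systemd's modprobe@.service template, not the file shipped on this system.
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target
    ConditionCapability=CAP_SYS_MODULE

    [Service]
    Type=oneshot
    ExecStart=-/usr/sbin/modprobe -abq %i

so modprobe@loop.service simply runs "modprobe -abq loop" and exits, which is why each instance is logged as Deactivated successfully as soon as it finishes.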
Nov 5 15:05:47.755025 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 5 15:05:47.760827 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 5 15:05:47.772003 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 5 15:05:47.784442 systemd-journald[1576]: Time spent on flushing to /var/log/journal/23a184ff3b764389a5589a254b5954af is 8.598ms for 914 entries. Nov 5 15:05:47.784442 systemd-journald[1576]: System Journal (/var/log/journal/23a184ff3b764389a5589a254b5954af) is 8M, max 2.2G, 2.2G free. Nov 5 15:05:47.826195 systemd-journald[1576]: Received client request to flush runtime journal. Nov 5 15:05:47.826260 kernel: loop1: detected capacity change from 0 to 119344 Nov 5 15:05:47.827423 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 5 15:05:47.851936 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 5 15:05:47.852583 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 5 15:05:47.915833 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:05:48.418404 kernel: loop2: detected capacity change from 0 to 100624 Nov 5 15:05:48.488730 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 5 15:05:48.495029 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 15:05:48.500517 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 15:05:48.539501 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 5 15:05:48.587322 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 5 15:05:48.733639 systemd-tmpfiles[1635]: ACLs are not supported, ignoring. Nov 5 15:05:48.733978 systemd-tmpfiles[1635]: ACLs are not supported, ignoring. Nov 5 15:05:48.735568 systemd-resolved[1634]: Positive Trust Anchors: Nov 5 15:05:48.735580 systemd-resolved[1634]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 15:05:48.735584 systemd-resolved[1634]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 15:05:48.735604 systemd-resolved[1634]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 15:05:48.738492 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:05:48.865092 systemd-resolved[1634]: Using system hostname 'ci-4487.0.1-a-05c7a88322'. Nov 5 15:05:48.866250 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 15:05:48.870804 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:05:48.880198 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 5 15:05:48.886481 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:05:48.911059 systemd-udevd[1646]: Using default interface naming scheme 'v257'. 
Nov 5 15:05:49.011382 kernel: loop3: detected capacity change from 0 to 200800 Nov 5 15:05:49.064381 kernel: loop4: detected capacity change from 0 to 27760 Nov 5 15:05:49.665379 kernel: loop5: detected capacity change from 0 to 119344 Nov 5 15:05:49.677373 kernel: loop6: detected capacity change from 0 to 100624 Nov 5 15:05:49.689370 kernel: loop7: detected capacity change from 0 to 200800 Nov 5 15:05:49.706372 kernel: loop1: detected capacity change from 0 to 27760 Nov 5 15:05:49.708855 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:05:49.716453 (sd-merge)[1650]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-azure.raw'. Nov 5 15:05:49.718818 (sd-merge)[1650]: Merged extensions into '/usr'. Nov 5 15:05:49.720793 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:05:49.733331 systemd[1]: Reload requested from client PID 1618 ('systemd-sysext') (unit systemd-sysext.service)... Nov 5 15:05:49.733411 systemd[1]: Reloading... Nov 5 15:05:49.831480 zram_generator::config[1705]: No configuration found. Nov 5 15:05:49.854379 kernel: mousedev: PS/2 mouse device common for all mice Nov 5 15:05:49.854453 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 5 15:05:49.933396 kernel: hv_vmbus: registering driver hv_balloon Nov 5 15:05:49.933484 kernel: hv_vmbus: registering driver hyperv_fb Nov 5 15:05:49.965324 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Nov 5 15:05:49.969451 kernel: hv_balloon: Memory hot add disabled on ARM64 Nov 5 15:05:49.978523 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Nov 5 15:05:49.978579 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Nov 5 15:05:49.983500 kernel: Console: switching to colour dummy device 80x25 Nov 5 15:05:49.991523 kernel: Console: switching to colour frame buffer device 128x48 Nov 5 15:05:50.064539 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 5 15:05:50.064997 systemd[1]: Reloading finished in 331 ms. Nov 5 15:05:50.081477 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 5 15:05:50.112474 systemd[1]: Starting ensure-sysext.service... Nov 5 15:05:50.116491 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 15:05:50.125400 kernel: MACsec IEEE 802.1AE Nov 5 15:05:50.130021 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:05:50.143259 systemd[1]: Reload requested from client PID 1798 ('systemctl') (unit ensure-sysext.service)... Nov 5 15:05:50.143272 systemd[1]: Reloading... Nov 5 15:05:50.180382 zram_generator::config[1833]: No configuration found. Nov 5 15:05:50.200832 systemd-tmpfiles[1799]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 5 15:05:50.200851 systemd-tmpfiles[1799]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 5 15:05:50.201982 systemd-tmpfiles[1799]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 5 15:05:50.202151 systemd-tmpfiles[1799]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Nov 5 15:05:50.203294 systemd-tmpfiles[1799]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 5 15:05:50.203535 systemd-tmpfiles[1799]: ACLs are not supported, ignoring. Nov 5 15:05:50.203582 systemd-tmpfiles[1799]: ACLs are not supported, ignoring. Nov 5 15:05:50.313005 systemd-tmpfiles[1799]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 15:05:50.313019 systemd-tmpfiles[1799]: Skipping /boot Nov 5 15:05:50.315732 systemd-networkd[1661]: lo: Link UP Nov 5 15:05:50.315739 systemd-networkd[1661]: lo: Gained carrier Nov 5 15:05:50.317200 systemd-networkd[1661]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:05:50.317513 systemd-networkd[1661]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 15:05:50.322605 systemd-tmpfiles[1799]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 15:05:50.322694 systemd-tmpfiles[1799]: Skipping /boot Nov 5 15:05:50.362375 kernel: mlx5_core 915c:00:02.0 enP37212s1: Link up Nov 5 15:05:50.379718 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Nov 5 15:05:50.390442 kernel: hv_netvsc 002248b4-0cfb-0022-48b4-0cfb002248b4 eth0: Data path switched to VF: enP37212s1 Nov 5 15:05:50.390258 systemd[1]: Reloading finished in 246 ms. Nov 5 15:05:50.390987 systemd-networkd[1661]: enP37212s1: Link UP Nov 5 15:05:50.391337 systemd-networkd[1661]: eth0: Link UP Nov 5 15:05:50.391416 systemd-networkd[1661]: eth0: Gained carrier Nov 5 15:05:50.391468 systemd-networkd[1661]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:05:50.396588 systemd-networkd[1661]: enP37212s1: Gained carrier Nov 5 15:05:50.397725 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:05:50.411433 systemd-networkd[1661]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Nov 5 15:05:50.411459 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:05:50.435375 systemd[1]: Reached target network.target - Network. Nov 5 15:05:50.440472 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 15:05:50.458989 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 5 15:05:50.463649 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:05:50.465192 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 5 15:05:50.470463 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:05:50.476534 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:05:50.486566 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 15:05:50.491602 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:05:50.492414 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
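The systemd-networkd lines above show eth0 matched by the catch-all /usr/lib/systemd/network/zz-default.network (hence the "potentially unpredictable interface name" note) and configured over DHCPv4 from the Azure wireserver at 168.63.129.16, while the Mellanox VF enP37212s1 is brought up only as the accelerated data path behind the synthetic hv_netvsc device. The catch-all file is essentially a match-everything DHCP policy, roughly as follows (paraphrased, not the literal file shipped in this image):

    # /usr/lib/systemd/network/zz-default.network (approximate paraphrase)
    [Match]
    Name=*

    [Network]
    DHCP=yes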
Nov 5 15:05:50.497461 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:05:50.506440 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 5 15:05:50.512561 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 5 15:05:50.518604 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 5 15:05:50.530556 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 5 15:05:50.537883 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:05:50.538097 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:05:50.543429 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:05:50.543656 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:05:50.552125 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:05:50.552386 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:05:50.561282 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:05:50.564726 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:05:50.571447 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 15:05:50.576839 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:05:50.586232 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 15:05:50.593602 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:05:50.593814 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:05:50.593919 systemd[1]: Reached target time-set.target - System Time Set. Nov 5 15:05:50.599044 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 5 15:05:50.604479 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:05:50.604601 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:05:50.609642 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 15:05:50.609777 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 15:05:50.614234 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:05:50.614353 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:05:50.619495 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:05:50.619629 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:05:50.627915 systemd[1]: Finished ensure-sysext.service. Nov 5 15:05:50.634281 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 5 15:05:50.640899 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
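The (sd-merge) lines further up record systemd-sysext overlaying the four extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-azure) onto /usr and /opt, followed by the reload and the ensure-sysext.service pass that completes here; kubernetes.raw is the image Ignition linked into /etc/extensions during the files stage. To be mergeable, each image has to carry an extension-release file whose ID matches the host's os-release (or ID=_any), along these lines (illustrative, not dumped from the actual image):

    # Illustrative layout; not dumped from the actual kubernetes.raw.
    # Path inside the image: usr/lib/extension-release.d/extension-release.kubernetes
    ID=flatcar
    SYSEXT_LEVEL=1.0

together with the /usr payload (the kubelet and related binaries) that becomes visible once the overlay is in place.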
Nov 5 15:05:50.640957 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:05:50.792280 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 5 15:05:50.831048 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 5 15:05:51.148749 augenrules[1992]: No rules Nov 5 15:05:51.150001 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 15:05:51.150186 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 15:05:51.799041 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:05:52.184506 systemd-networkd[1661]: eth0: Gained IPv6LL Nov 5 15:05:52.186569 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 15:05:52.192310 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 15:05:53.059348 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 5 15:05:53.065024 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 15:05:59.803393 ldconfig[1946]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 5 15:05:59.816227 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 5 15:05:59.822782 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 5 15:05:59.849989 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 5 15:05:59.854987 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:05:59.859443 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 5 15:05:59.864643 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 5 15:05:59.870095 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 15:05:59.874503 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 15:05:59.879694 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 5 15:05:59.884780 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 15:05:59.884815 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:05:59.888435 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:05:59.928792 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 15:05:59.934507 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 5 15:05:59.939781 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 15:05:59.945126 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 15:05:59.950191 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 15:05:59.956457 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 15:05:59.961581 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 5 15:05:59.966978 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Nov 5 15:05:59.971507 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:05:59.975460 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:05:59.979218 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:05:59.979242 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:05:59.996946 systemd[1]: Starting chronyd.service - NTP client/server... Nov 5 15:06:00.011478 systemd[1]: Starting containerd.service - containerd container runtime... Nov 5 15:06:00.016910 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 5 15:06:00.024515 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 15:06:00.029904 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 15:06:00.037690 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 5 15:06:00.045593 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 5 15:06:00.050544 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 5 15:06:00.052903 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Nov 5 15:06:00.057330 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Nov 5 15:06:00.058287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:06:00.064526 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 15:06:00.071659 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 15:06:00.075574 jq[2016]: false Nov 5 15:06:00.079398 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 15:06:00.086587 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 5 15:06:00.099307 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 5 15:06:00.105639 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 5 15:06:00.110186 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 5 15:06:00.111511 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 15:06:00.112923 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 15:06:00.118656 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 15:06:00.126309 KVP[2018]: KVP starting; pid is:2018 Nov 5 15:06:00.130783 extend-filesystems[2017]: Found /dev/sda6 Nov 5 15:06:00.139072 kernel: hv_utils: KVP IC version 4.0 Nov 5 15:06:00.137925 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 15:06:00.131564 chronyd[2008]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Nov 5 15:06:00.136849 KVP[2018]: KVP LIC Version: 3.1 Nov 5 15:06:00.144380 jq[2032]: true Nov 5 15:06:00.145886 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 15:06:00.146052 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Nov 5 15:06:00.148235 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 5 15:06:00.152064 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 5 15:06:00.154378 extend-filesystems[2017]: Found /dev/sda9 Nov 5 15:06:00.163753 extend-filesystems[2017]: Checking size of /dev/sda9 Nov 5 15:06:00.167204 chronyd[2008]: Timezone right/UTC failed leap second check, ignoring Nov 5 15:06:00.168187 systemd[1]: Started chronyd.service - NTP client/server. Nov 5 15:06:00.167379 chronyd[2008]: Loaded seccomp filter (level 2) Nov 5 15:06:00.188755 update_engine[2030]: I20251105 15:06:00.188464 2030 main.cc:92] Flatcar Update Engine starting Nov 5 15:06:00.193642 (ntainerd)[2056]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 15:06:00.198732 systemd[1]: motdgen.service: Deactivated successfully. Nov 5 15:06:00.199781 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 15:06:00.206894 jq[2050]: true Nov 5 15:06:00.208517 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 5 15:06:00.218396 extend-filesystems[2017]: Resized partition /dev/sda9 Nov 5 15:06:00.253405 extend-filesystems[2077]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 15:06:00.276559 kernel: EXT4-fs (sda9): resizing filesystem from 6359552 to 6376955 blocks Nov 5 15:06:00.276587 kernel: EXT4-fs (sda9): resized filesystem to 6376955 Nov 5 15:06:00.276597 tar[2047]: linux-arm64/LICENSE Nov 5 15:06:00.276597 tar[2047]: linux-arm64/helm Nov 5 15:06:00.284214 systemd-logind[2029]: New seat seat0. Nov 5 15:06:00.311073 systemd-logind[2029]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Nov 5 15:06:00.311329 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 15:06:00.320881 extend-filesystems[2077]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 5 15:06:00.320881 extend-filesystems[2077]: old_desc_blocks = 4, new_desc_blocks = 4 Nov 5 15:06:00.320881 extend-filesystems[2077]: The filesystem on /dev/sda9 is now 6376955 (4k) blocks long. Nov 5 15:06:00.391331 extend-filesystems[2017]: Resized filesystem in /dev/sda9 Nov 5 15:06:00.327950 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 15:06:00.403252 bash[2100]: Updated "/home/core/.ssh/authorized_keys" Nov 5 15:06:00.328157 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 5 15:06:00.397246 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 5 15:06:00.412155 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 5 15:06:00.523525 sshd_keygen[2048]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 15:06:00.554965 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 5 15:06:00.567246 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 5 15:06:00.575461 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Nov 5 15:06:00.597754 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 15:06:00.597927 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 15:06:00.606615 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 5 15:06:00.617130 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. 
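For scale, the online resize logged above is small: with 4 KiB blocks, 6376955 blocks is about 24.3 GiB, and the growth of 6376955 - 6359552 = 17403 blocks comes to roughly 68 MiB, so extend-filesystems only grew the root filesystem by the last sliver of the sda9 partition.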
Nov 5 15:06:00.628231 dbus-daemon[2011]: [system] SELinux support is enabled Nov 5 15:06:00.629323 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 15:06:00.637028 update_engine[2030]: I20251105 15:06:00.636842 2030 update_check_scheduler.cc:74] Next update check in 8m1s Nov 5 15:06:00.641045 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 5 15:06:00.647223 dbus-daemon[2011]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 5 15:06:00.641088 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 5 15:06:00.648150 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 15:06:00.648242 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 5 15:06:00.654253 systemd[1]: Started update-engine.service - Update Engine. Nov 5 15:06:00.662268 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 15:06:00.677271 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 5 15:06:00.691206 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 15:06:00.700020 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 5 15:06:00.708237 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 15:06:00.719205 coreos-metadata[2010]: Nov 05 15:06:00.719 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 5 15:06:00.724239 coreos-metadata[2010]: Nov 05 15:06:00.724 INFO Fetch successful Nov 5 15:06:00.725389 coreos-metadata[2010]: Nov 05 15:06:00.725 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Nov 5 15:06:00.730100 coreos-metadata[2010]: Nov 05 15:06:00.730 INFO Fetch successful Nov 5 15:06:00.730398 coreos-metadata[2010]: Nov 05 15:06:00.730 INFO Fetching http://168.63.129.16/machine/66cc9aa4-c847-4eaf-bf3e-4969199027e5/4784748d%2D088b%2D4350%2D87b1%2Deef74db3f8c8.%5Fci%2D4487.0.1%2Da%2D05c7a88322?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Nov 5 15:06:00.734425 coreos-metadata[2010]: Nov 05 15:06:00.734 INFO Fetch successful Nov 5 15:06:00.734591 coreos-metadata[2010]: Nov 05 15:06:00.734 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Nov 5 15:06:00.744078 coreos-metadata[2010]: Nov 05 15:06:00.744 INFO Fetch successful Nov 5 15:06:00.763446 tar[2047]: linux-arm64/README.md Nov 5 15:06:00.775296 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 5 15:06:00.780230 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 5 15:06:00.785202 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 15:06:00.877516 locksmithd[2185]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 15:06:01.027834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
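The tar[2047] member listing and the prepare-helm.service completion above come from the unit Ignition wrote during the files stage; its body is not reproduced in the journal. A oneshot unit of roughly this shape would match the observed behaviour, verbosely unpacking the previously downloaded tarball into /opt/bin (hypothetical; the real unit may differ in options and ordering):

    # Hypothetical reconstruction of prepare-helm.service; not read from the node.
    [Unit]
    Description=Unpack helm to /opt/bin
    ConditionPathExists=/opt/helm-v3.17.3-linux-arm64.tar.gz

    [Service]
    Type=oneshot
    RemainAfterExit=true
    ExecStartPre=/usr/bin/mkdir --parents /opt/bin
    ExecStart=/usr/bin/tar -v --strip-components=1 -xzf /opt/helm-v3.17.3-linux-arm64.tar.gz -C /opt/bin

    [Install]
    WantedBy=multi-user.target

With tar's -v flag the archive member names (linux-arm64/LICENSE, linux-arm64/helm, linux-arm64/README.md) are printed to the journal exactly as seen above, while --strip-components=1 drops the linux-arm64/ prefix on extraction.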
Nov 5 15:06:01.225175 containerd[2056]: time="2025-11-05T15:06:01Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 15:06:01.227278 containerd[2056]: time="2025-11-05T15:06:01.227241964Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 5 15:06:01.231596 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:06:01.237254 containerd[2056]: time="2025-11-05T15:06:01.236260316Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.832µs" Nov 5 15:06:01.237254 containerd[2056]: time="2025-11-05T15:06:01.236290068Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 15:06:01.237254 containerd[2056]: time="2025-11-05T15:06:01.236307060Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 15:06:01.237254 containerd[2056]: time="2025-11-05T15:06:01.236460932Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 15:06:01.237254 containerd[2056]: time="2025-11-05T15:06:01.236474580Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 15:06:01.237254 containerd[2056]: time="2025-11-05T15:06:01.236491972Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:06:01.237254 containerd[2056]: time="2025-11-05T15:06:01.236532108Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:06:01.237254 containerd[2056]: time="2025-11-05T15:06:01.236538652Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:06:01.237254 containerd[2056]: time="2025-11-05T15:06:01.236681812Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:06:01.237254 containerd[2056]: time="2025-11-05T15:06:01.236690676Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:06:01.237254 containerd[2056]: time="2025-11-05T15:06:01.236697668Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:06:01.237254 containerd[2056]: time="2025-11-05T15:06:01.236703132Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 15:06:01.237475 containerd[2056]: time="2025-11-05T15:06:01.236756676Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 15:06:01.237475 containerd[2056]: time="2025-11-05T15:06:01.236911628Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:06:01.237475 containerd[2056]: time="2025-11-05T15:06:01.236930876Z" level=info msg="skip 
loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:06:01.237475 containerd[2056]: time="2025-11-05T15:06:01.236937340Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 15:06:01.237475 containerd[2056]: time="2025-11-05T15:06:01.236967444Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 5 15:06:01.237475 containerd[2056]: time="2025-11-05T15:06:01.237100116Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 5 15:06:01.237475 containerd[2056]: time="2025-11-05T15:06:01.237147980Z" level=info msg="metadata content store policy set" policy=shared Nov 5 15:06:01.253845 containerd[2056]: time="2025-11-05T15:06:01.253792052Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 5 15:06:01.254005 containerd[2056]: time="2025-11-05T15:06:01.253988860Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 5 15:06:01.254068 containerd[2056]: time="2025-11-05T15:06:01.254054940Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 5 15:06:01.254114 containerd[2056]: time="2025-11-05T15:06:01.254103324Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 5 15:06:01.254184 containerd[2056]: time="2025-11-05T15:06:01.254154996Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 15:06:01.254298 containerd[2056]: time="2025-11-05T15:06:01.254284596Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 15:06:01.254411 containerd[2056]: time="2025-11-05T15:06:01.254394716Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 15:06:01.254703 containerd[2056]: time="2025-11-05T15:06:01.254680548Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 15:06:01.254790 containerd[2056]: time="2025-11-05T15:06:01.254776500Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 5 15:06:01.254840 containerd[2056]: time="2025-11-05T15:06:01.254828036Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 5 15:06:01.254969 containerd[2056]: time="2025-11-05T15:06:01.254952436Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 15:06:01.255215 containerd[2056]: time="2025-11-05T15:06:01.255195156Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 5 15:06:01.255427 containerd[2056]: time="2025-11-05T15:06:01.255408924Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 15:06:01.255529 containerd[2056]: time="2025-11-05T15:06:01.255515316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 15:06:01.255639 containerd[2056]: time="2025-11-05T15:06:01.255620972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 
5 15:06:01.255693 containerd[2056]: time="2025-11-05T15:06:01.255682732Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 5 15:06:01.255729 containerd[2056]: time="2025-11-05T15:06:01.255720548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 15:06:01.255775 containerd[2056]: time="2025-11-05T15:06:01.255765460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 15:06:01.255813 containerd[2056]: time="2025-11-05T15:06:01.255804780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 15:06:01.255862 containerd[2056]: time="2025-11-05T15:06:01.255852908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 5 15:06:01.255902 containerd[2056]: time="2025-11-05T15:06:01.255893908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 15:06:01.255965 containerd[2056]: time="2025-11-05T15:06:01.255953548Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 15:06:01.256014 containerd[2056]: time="2025-11-05T15:06:01.256003036Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 15:06:01.256199 containerd[2056]: time="2025-11-05T15:06:01.256106228Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 15:06:01.256266 containerd[2056]: time="2025-11-05T15:06:01.256255564Z" level=info msg="Start snapshots syncer" Nov 5 15:06:01.256321 containerd[2056]: time="2025-11-05T15:06:01.256311276Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 15:06:01.256788 containerd[2056]: time="2025-11-05T15:06:01.256737692Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 15:06:01.256897 containerd[2056]: time="2025-11-05T15:06:01.256803484Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 15:06:01.256981 containerd[2056]: time="2025-11-05T15:06:01.256959788Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 15:06:01.257115 containerd[2056]: time="2025-11-05T15:06:01.257096572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 15:06:01.257135 containerd[2056]: time="2025-11-05T15:06:01.257122452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 15:06:01.257148 containerd[2056]: time="2025-11-05T15:06:01.257135292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 15:06:01.257148 containerd[2056]: time="2025-11-05T15:06:01.257143836Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 15:06:01.257181 containerd[2056]: time="2025-11-05T15:06:01.257154684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 15:06:01.257181 containerd[2056]: time="2025-11-05T15:06:01.257166100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 15:06:01.257181 containerd[2056]: time="2025-11-05T15:06:01.257175884Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 15:06:01.257214 containerd[2056]: time="2025-11-05T15:06:01.257198436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 15:06:01.257214 containerd[2056]: 
time="2025-11-05T15:06:01.257208812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 15:06:01.257243 containerd[2056]: time="2025-11-05T15:06:01.257219580Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 15:06:01.257260 containerd[2056]: time="2025-11-05T15:06:01.257243236Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:06:01.257273 containerd[2056]: time="2025-11-05T15:06:01.257256996Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:06:01.257273 containerd[2056]: time="2025-11-05T15:06:01.257265100Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:06:01.257296 containerd[2056]: time="2025-11-05T15:06:01.257274276Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:06:01.257296 containerd[2056]: time="2025-11-05T15:06:01.257280068Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 15:06:01.257296 containerd[2056]: time="2025-11-05T15:06:01.257288540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 15:06:01.257333 containerd[2056]: time="2025-11-05T15:06:01.257298236Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 15:06:01.257333 containerd[2056]: time="2025-11-05T15:06:01.257313380Z" level=info msg="runtime interface created" Nov 5 15:06:01.257333 containerd[2056]: time="2025-11-05T15:06:01.257318948Z" level=info msg="created NRI interface" Nov 5 15:06:01.257333 containerd[2056]: time="2025-11-05T15:06:01.257324868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 15:06:01.257402 containerd[2056]: time="2025-11-05T15:06:01.257336060Z" level=info msg="Connect containerd service" Nov 5 15:06:01.258047 containerd[2056]: time="2025-11-05T15:06:01.257669940Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 15:06:01.258601 containerd[2056]: time="2025-11-05T15:06:01.258577252Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 15:06:01.537929 kubelet[2210]: E1105 15:06:01.537810 2210 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:06:01.539956 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:06:01.540065 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:06:01.540341 systemd[1]: kubelet.service: Consumed 497ms CPU time, 248.3M memory peak. 
Nov 5 15:06:01.710130 containerd[2056]: time="2025-11-05T15:06:01.709974612Z" level=info msg="Start subscribing containerd event" Nov 5 15:06:01.710130 containerd[2056]: time="2025-11-05T15:06:01.710047700Z" level=info msg="Start recovering state" Nov 5 15:06:01.710277 containerd[2056]: time="2025-11-05T15:06:01.710149364Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 15:06:01.710449 containerd[2056]: time="2025-11-05T15:06:01.710321628Z" level=info msg="Start event monitor" Nov 5 15:06:01.710449 containerd[2056]: time="2025-11-05T15:06:01.710342124Z" level=info msg="Start cni network conf syncer for default" Nov 5 15:06:01.710449 containerd[2056]: time="2025-11-05T15:06:01.710348244Z" level=info msg="Start streaming server" Nov 5 15:06:01.710449 containerd[2056]: time="2025-11-05T15:06:01.710375516Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 15:06:01.710449 containerd[2056]: time="2025-11-05T15:06:01.710382660Z" level=info msg="runtime interface starting up..." Nov 5 15:06:01.710449 containerd[2056]: time="2025-11-05T15:06:01.710386956Z" level=info msg="starting plugins..." Nov 5 15:06:01.710449 containerd[2056]: time="2025-11-05T15:06:01.710348700Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 15:06:01.710449 containerd[2056]: time="2025-11-05T15:06:01.710401628Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 15:06:01.710737 containerd[2056]: time="2025-11-05T15:06:01.710672868Z" level=info msg="containerd successfully booted in 0.485879s" Nov 5 15:06:01.710986 systemd[1]: Started containerd.service - containerd container runtime. Nov 5 15:06:01.717140 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 15:06:01.723809 systemd[1]: Startup finished in 3.274s (kernel) + 17.762s (initrd) + 21.046s (userspace) = 42.083s. Nov 5 15:06:02.434822 login[2188]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Nov 5 15:06:02.454061 login[2187]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:06:02.459348 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 15:06:02.460173 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 5 15:06:02.467569 systemd-logind[2029]: New session 2 of user core. Nov 5 15:06:02.490652 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 15:06:02.492823 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 5 15:06:02.516638 (systemd)[2239]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 15:06:02.518653 systemd-logind[2029]: New session c1 of user core. Nov 5 15:06:02.836959 systemd[2239]: Queued start job for default target default.target. Nov 5 15:06:02.845408 systemd[2239]: Created slice app.slice - User Application Slice. Nov 5 15:06:02.845432 systemd[2239]: Reached target paths.target - Paths. Nov 5 15:06:02.845461 systemd[2239]: Reached target timers.target - Timers. Nov 5 15:06:02.846770 systemd[2239]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 15:06:02.855711 systemd[2239]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 15:06:02.855992 systemd[2239]: Reached target sockets.target - Sockets. Nov 5 15:06:02.856187 systemd[2239]: Reached target basic.target - Basic System. Nov 5 15:06:02.856342 systemd[1]: Started user@500.service - User Manager for UID 500. 
Nov 5 15:06:02.857566 systemd[2239]: Reached target default.target - Main User Target. Nov 5 15:06:02.857686 systemd[2239]: Startup finished in 334ms. Nov 5 15:06:02.858130 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 5 15:06:02.928964 waagent[2182]: 2025-11-05T15:06:02.928896Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Nov 5 15:06:02.933539 waagent[2182]: 2025-11-05T15:06:02.933496Z INFO Daemon Daemon OS: flatcar 4487.0.1 Nov 5 15:06:02.937327 waagent[2182]: 2025-11-05T15:06:02.937293Z INFO Daemon Daemon Python: 3.11.13 Nov 5 15:06:02.942643 waagent[2182]: 2025-11-05T15:06:02.940655Z INFO Daemon Daemon Run daemon Nov 5 15:06:02.943906 waagent[2182]: 2025-11-05T15:06:02.943876Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4487.0.1' Nov 5 15:06:02.951017 waagent[2182]: 2025-11-05T15:06:02.950979Z INFO Daemon Daemon Using waagent for provisioning Nov 5 15:06:02.954953 waagent[2182]: 2025-11-05T15:06:02.954922Z INFO Daemon Daemon Activate resource disk Nov 5 15:06:02.958953 waagent[2182]: 2025-11-05T15:06:02.958486Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 5 15:06:02.966960 waagent[2182]: 2025-11-05T15:06:02.966923Z INFO Daemon Daemon Found device: None Nov 5 15:06:02.970211 waagent[2182]: 2025-11-05T15:06:02.970182Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Nov 5 15:06:02.976145 waagent[2182]: 2025-11-05T15:06:02.976121Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 5 15:06:02.985558 waagent[2182]: 2025-11-05T15:06:02.985458Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 5 15:06:02.989823 waagent[2182]: 2025-11-05T15:06:02.989789Z INFO Daemon Daemon Running default provisioning handler Nov 5 15:06:02.998917 waagent[2182]: 2025-11-05T15:06:02.998876Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Nov 5 15:06:03.008876 waagent[2182]: 2025-11-05T15:06:03.008835Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 5 15:06:03.016700 waagent[2182]: 2025-11-05T15:06:03.016660Z INFO Daemon Daemon cloud-init is enabled: False Nov 5 15:06:03.020578 waagent[2182]: 2025-11-05T15:06:03.020550Z INFO Daemon Daemon Copying ovf-env.xml Nov 5 15:06:03.169143 waagent[2182]: 2025-11-05T15:06:03.168919Z INFO Daemon Daemon Successfully mounted dvd Nov 5 15:06:03.198066 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Nov 5 15:06:03.200407 waagent[2182]: 2025-11-05T15:06:03.200319Z INFO Daemon Daemon Detect protocol endpoint Nov 5 15:06:03.204198 waagent[2182]: 2025-11-05T15:06:03.204167Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 5 15:06:03.208418 waagent[2182]: 2025-11-05T15:06:03.208393Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Nov 5 15:06:03.213308 waagent[2182]: 2025-11-05T15:06:03.213282Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 5 15:06:03.217239 waagent[2182]: 2025-11-05T15:06:03.217213Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 5 15:06:03.221088 waagent[2182]: 2025-11-05T15:06:03.221064Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 5 15:06:03.284449 waagent[2182]: 2025-11-05T15:06:03.284412Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 5 15:06:03.289406 waagent[2182]: 2025-11-05T15:06:03.289384Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 5 15:06:03.293408 waagent[2182]: 2025-11-05T15:06:03.293384Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 5 15:06:03.436030 login[2188]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:06:03.440917 systemd-logind[2029]: New session 1 of user core. Nov 5 15:06:03.445477 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 5 15:06:03.446871 waagent[2182]: 2025-11-05T15:06:03.446447Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 5 15:06:03.455239 waagent[2182]: 2025-11-05T15:06:03.451419Z INFO Daemon Daemon Forcing an update of the goal state. Nov 5 15:06:03.463000 waagent[2182]: 2025-11-05T15:06:03.462957Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 5 15:06:03.479203 waagent[2182]: 2025-11-05T15:06:03.479166Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Nov 5 15:06:03.483641 waagent[2182]: 2025-11-05T15:06:03.483607Z INFO Daemon Nov 5 15:06:03.485884 waagent[2182]: 2025-11-05T15:06:03.485856Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: ab2d41c6-3324-4f05-ae16-b23d477e79a4 eTag: 14703310518105149509 source: Fabric] Nov 5 15:06:03.495269 waagent[2182]: 2025-11-05T15:06:03.494741Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Nov 5 15:06:03.500028 waagent[2182]: 2025-11-05T15:06:03.499987Z INFO Daemon Nov 5 15:06:03.502971 waagent[2182]: 2025-11-05T15:06:03.502835Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Nov 5 15:06:03.511950 waagent[2182]: 2025-11-05T15:06:03.511918Z INFO Daemon Daemon Downloading artifacts profile blob Nov 5 15:06:03.572760 waagent[2182]: 2025-11-05T15:06:03.572697Z INFO Daemon Downloaded certificate {'thumbprint': '799696D86E52CD24A8F88E085D411B68F07950D6', 'hasPrivateKey': True} Nov 5 15:06:03.580532 waagent[2182]: 2025-11-05T15:06:03.580495Z INFO Daemon Fetch goal state completed Nov 5 15:06:03.590863 waagent[2182]: 2025-11-05T15:06:03.590826Z INFO Daemon Daemon Starting provisioning Nov 5 15:06:03.594825 waagent[2182]: 2025-11-05T15:06:03.594791Z INFO Daemon Daemon Handle ovf-env.xml. 
Nov 5 15:06:03.598406 waagent[2182]: 2025-11-05T15:06:03.598380Z INFO Daemon Daemon Set hostname [ci-4487.0.1-a-05c7a88322] Nov 5 15:06:03.692220 waagent[2182]: 2025-11-05T15:06:03.692155Z INFO Daemon Daemon Publish hostname [ci-4487.0.1-a-05c7a88322] Nov 5 15:06:03.697320 waagent[2182]: 2025-11-05T15:06:03.697256Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 5 15:06:03.702268 waagent[2182]: 2025-11-05T15:06:03.702231Z INFO Daemon Daemon Primary interface is [eth0] Nov 5 15:06:03.712223 systemd-networkd[1661]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:06:03.712230 systemd-networkd[1661]: eth0: Reconfiguring with /usr/lib/systemd/network/zz-default.network. Nov 5 15:06:03.712312 systemd-networkd[1661]: eth0: DHCP lease lost Nov 5 15:06:03.725470 waagent[2182]: 2025-11-05T15:06:03.725338Z INFO Daemon Daemon Create user account if not exists Nov 5 15:06:03.729970 waagent[2182]: 2025-11-05T15:06:03.729933Z INFO Daemon Daemon User core already exists, skip useradd Nov 5 15:06:03.736373 waagent[2182]: 2025-11-05T15:06:03.734440Z INFO Daemon Daemon Configure sudoer Nov 5 15:06:03.739402 systemd-networkd[1661]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Nov 5 15:06:03.744839 waagent[2182]: 2025-11-05T15:06:03.744794Z INFO Daemon Daemon Configure sshd Nov 5 15:06:03.761079 waagent[2182]: 2025-11-05T15:06:03.761034Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Nov 5 15:06:03.771184 waagent[2182]: 2025-11-05T15:06:03.771152Z INFO Daemon Daemon Deploy ssh public key. Nov 5 15:06:04.910905 waagent[2182]: 2025-11-05T15:06:04.907148Z INFO Daemon Daemon Provisioning complete Nov 5 15:06:04.920549 waagent[2182]: 2025-11-05T15:06:04.920347Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 5 15:06:04.925701 waagent[2182]: 2025-11-05T15:06:04.925663Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Nov 5 15:06:04.934180 waagent[2182]: 2025-11-05T15:06:04.934142Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Nov 5 15:06:05.029879 waagent[2288]: 2025-11-05T15:06:05.029821Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Nov 5 15:06:05.031253 waagent[2288]: 2025-11-05T15:06:05.030268Z INFO ExtHandler ExtHandler OS: flatcar 4487.0.1 Nov 5 15:06:05.031253 waagent[2288]: 2025-11-05T15:06:05.030321Z INFO ExtHandler ExtHandler Python: 3.11.13 Nov 5 15:06:05.031253 waagent[2288]: 2025-11-05T15:06:05.030385Z INFO ExtHandler ExtHandler CPU Arch: aarch64 Nov 5 15:06:05.115175 waagent[2288]: 2025-11-05T15:06:05.115113Z INFO ExtHandler ExtHandler Distro: flatcar-4487.0.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: aarch64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Nov 5 15:06:05.115519 waagent[2288]: 2025-11-05T15:06:05.115487Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 5 15:06:05.115648 waagent[2288]: 2025-11-05T15:06:05.115624Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 5 15:06:05.121269 waagent[2288]: 2025-11-05T15:06:05.121227Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 5 15:06:05.125974 waagent[2288]: 2025-11-05T15:06:05.125945Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Nov 5 15:06:05.126404 waagent[2288]: 2025-11-05T15:06:05.126376Z INFO ExtHandler Nov 5 15:06:05.126540 waagent[2288]: 2025-11-05T15:06:05.126516Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e15f319d-0172-4fd0-aa94-fc66f0fba5af eTag: 14703310518105149509 source: Fabric] Nov 5 15:06:05.126835 waagent[2288]: 2025-11-05T15:06:05.126808Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Nov 5 15:06:05.127312 waagent[2288]: 2025-11-05T15:06:05.127284Z INFO ExtHandler Nov 5 15:06:05.127447 waagent[2288]: 2025-11-05T15:06:05.127420Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 5 15:06:05.130900 waagent[2288]: 2025-11-05T15:06:05.130876Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 5 15:06:05.213686 waagent[2288]: 2025-11-05T15:06:05.213540Z INFO ExtHandler Downloaded certificate {'thumbprint': '799696D86E52CD24A8F88E085D411B68F07950D6', 'hasPrivateKey': True} Nov 5 15:06:05.214151 waagent[2288]: 2025-11-05T15:06:05.214104Z INFO ExtHandler Fetch goal state completed Nov 5 15:06:05.229020 waagent[2288]: 2025-11-05T15:06:05.228962Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Nov 5 15:06:05.232981 waagent[2288]: 2025-11-05T15:06:05.232933Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 2288 Nov 5 15:06:05.233085 waagent[2288]: 2025-11-05T15:06:05.233058Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Nov 5 15:06:05.233339 waagent[2288]: 2025-11-05T15:06:05.233310Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Nov 5 15:06:05.234450 waagent[2288]: 2025-11-05T15:06:05.234414Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4487.0.1', '', 'Flatcar Container Linux by Kinvolk'] Nov 5 15:06:05.234774 waagent[2288]: 2025-11-05T15:06:05.234743Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4487.0.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Nov 5 15:06:05.234889 waagent[2288]: 2025-11-05T15:06:05.234866Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Nov 5 15:06:05.235311 waagent[2288]: 2025-11-05T15:06:05.235278Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Nov 5 15:06:05.739083 waagent[2288]: 2025-11-05T15:06:05.738719Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Nov 5 15:06:05.739083 waagent[2288]: 2025-11-05T15:06:05.738898Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 5 15:06:05.744045 waagent[2288]: 2025-11-05T15:06:05.744025Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Nov 5 15:06:05.750038 systemd[1]: Reload requested from client PID 2304 ('systemctl') (unit waagent.service)... Nov 5 15:06:05.750054 systemd[1]: Reloading... Nov 5 15:06:05.839416 zram_generator::config[2350]: No configuration found. Nov 5 15:06:05.974390 systemd[1]: Reloading finished in 224 ms. Nov 5 15:06:05.996418 waagent[2288]: 2025-11-05T15:06:05.994928Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Nov 5 15:06:05.996418 waagent[2288]: 2025-11-05T15:06:05.995058Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Nov 5 15:06:06.542387 waagent[2288]: 2025-11-05T15:06:06.541761Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Nov 5 15:06:06.542387 waagent[2288]: 2025-11-05T15:06:06.542055Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. 
configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Nov 5 15:06:06.542720 waagent[2288]: 2025-11-05T15:06:06.542636Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 5 15:06:06.543026 waagent[2288]: 2025-11-05T15:06:06.542962Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 5 15:06:06.543074 waagent[2288]: 2025-11-05T15:06:06.543026Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 5 15:06:06.543268 waagent[2288]: 2025-11-05T15:06:06.543218Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 5 15:06:06.543389 waagent[2288]: 2025-11-05T15:06:06.543284Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 5 15:06:06.543389 waagent[2288]: 2025-11-05T15:06:06.543334Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 5 15:06:06.543570 waagent[2288]: 2025-11-05T15:06:06.543537Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Nov 5 15:06:06.544010 waagent[2288]: 2025-11-05T15:06:06.543972Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 5 15:06:06.544188 waagent[2288]: 2025-11-05T15:06:06.544009Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 5 15:06:06.544188 waagent[2288]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 5 15:06:06.544188 waagent[2288]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Nov 5 15:06:06.544188 waagent[2288]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 5 15:06:06.544188 waagent[2288]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 5 15:06:06.544188 waagent[2288]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 5 15:06:06.544188 waagent[2288]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 5 15:06:06.544188 waagent[2288]: 2025-11-05T15:06:06.544077Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Nov 5 15:06:06.544188 waagent[2288]: 2025-11-05T15:06:06.544128Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 5 15:06:06.544544 waagent[2288]: 2025-11-05T15:06:06.544504Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 5 15:06:06.545145 waagent[2288]: 2025-11-05T15:06:06.544999Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 5 15:06:06.545145 waagent[2288]: 2025-11-05T15:06:06.545103Z INFO EnvHandler ExtHandler Configure routes Nov 5 15:06:06.545920 waagent[2288]: 2025-11-05T15:06:06.545893Z INFO EnvHandler ExtHandler Gateway:None Nov 5 15:06:06.546030 waagent[2288]: 2025-11-05T15:06:06.546011Z INFO EnvHandler ExtHandler Routes:None Nov 5 15:06:06.549897 waagent[2288]: 2025-11-05T15:06:06.549863Z INFO ExtHandler ExtHandler Nov 5 15:06:06.550092 waagent[2288]: 2025-11-05T15:06:06.550068Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: aae9ee49-7119-4828-9124-58aac9be240d correlation 306f8ab9-85a7-4218-9f53-e8385b9020b3 created: 2025-11-05T15:04:43.435608Z] Nov 5 15:06:06.550679 waagent[2288]: 2025-11-05T15:06:06.550642Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Nov 5 15:06:06.551580 waagent[2288]: 2025-11-05T15:06:06.551547Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Nov 5 15:06:06.621397 waagent[2288]: 2025-11-05T15:06:06.621333Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Nov 5 15:06:06.621397 waagent[2288]: Try `iptables -h' or 'iptables --help' for more information.) Nov 5 15:06:06.621697 waagent[2288]: 2025-11-05T15:06:06.621665Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: F8DF4546-06F3-47BF-99D7-D2EB25368CF0;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Nov 5 15:06:06.705520 waagent[2288]: 2025-11-05T15:06:06.705346Z INFO MonitorHandler ExtHandler Network interfaces: Nov 5 15:06:06.705520 waagent[2288]: Executing ['ip', '-a', '-o', 'link']: Nov 5 15:06:06.705520 waagent[2288]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 5 15:06:06.705520 waagent[2288]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b4:0c:fb brd ff:ff:ff:ff:ff:ff\ altname enx002248b40cfb Nov 5 15:06:06.705520 waagent[2288]: 3: enP37212s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:b4:0c:fb brd ff:ff:ff:ff:ff:ff\ altname enP37212p0s2 Nov 5 15:06:06.705520 waagent[2288]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 5 15:06:06.705520 waagent[2288]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 5 15:06:06.705520 waagent[2288]: 2: eth0 inet 10.200.20.11/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 5 15:06:06.705520 waagent[2288]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 5 15:06:06.705520 waagent[2288]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Nov 5 15:06:06.705520 waagent[2288]: 2: eth0 inet6 fe80::222:48ff:feb4:cfb/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 5 15:06:06.738842 waagent[2288]: 2025-11-05T15:06:06.738669Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Nov 5 15:06:06.738842 waagent[2288]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 5 15:06:06.738842 waagent[2288]: pkts bytes target prot opt in out source destination Nov 5 15:06:06.738842 waagent[2288]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 5 15:06:06.738842 waagent[2288]: pkts bytes target prot opt in out source destination Nov 5 15:06:06.738842 waagent[2288]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 5 15:06:06.738842 waagent[2288]: pkts bytes target prot opt in out source destination Nov 5 15:06:06.738842 waagent[2288]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 5 15:06:06.738842 waagent[2288]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 5 15:06:06.738842 waagent[2288]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 5 15:06:06.741813 waagent[2288]: 2025-11-05T15:06:06.741768Z INFO EnvHandler ExtHandler Current Firewall rules: Nov 5 15:06:06.741813 waagent[2288]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 5 15:06:06.741813 waagent[2288]: pkts bytes target prot opt in out source destination Nov 5 
15:06:06.741813 waagent[2288]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 5 15:06:06.741813 waagent[2288]: pkts bytes target prot opt in out source destination Nov 5 15:06:06.741813 waagent[2288]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 5 15:06:06.741813 waagent[2288]: pkts bytes target prot opt in out source destination Nov 5 15:06:06.741813 waagent[2288]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 5 15:06:06.741813 waagent[2288]: 3 364 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 5 15:06:06.741813 waagent[2288]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 5 15:06:06.741986 waagent[2288]: 2025-11-05T15:06:06.741961Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Nov 5 15:06:11.791599 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 15:06:11.792898 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:06:11.898661 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:06:11.904785 (kubelet)[2439]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:06:12.000151 kubelet[2439]: E1105 15:06:12.000078 2439 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:06:12.002673 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:06:12.002788 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:06:12.004444 systemd[1]: kubelet.service: Consumed 115ms CPU time, 106M memory peak. Nov 5 15:06:22.235612 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 5 15:06:22.236965 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:06:22.378588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:06:22.381269 (kubelet)[2454]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:06:22.405902 kubelet[2454]: E1105 15:06:22.405837 2454 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:06:22.407951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:06:22.408164 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:06:22.408661 systemd[1]: kubelet.service: Consumed 101ms CPU time, 105.2M memory peak. Nov 5 15:06:23.972312 chronyd[2008]: Selected source PHC0 Nov 5 15:06:30.174307 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 15:06:30.175107 systemd[1]: Started sshd@0-10.200.20.11:22-10.200.16.10:39028.service - OpenSSH per-connection server daemon (10.200.16.10:39028). 
Nov 5 15:06:30.856747 sshd[2462]: Accepted publickey for core from 10.200.16.10 port 39028 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:06:30.857758 sshd-session[2462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:06:30.862238 systemd-logind[2029]: New session 3 of user core. Nov 5 15:06:30.872494 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 5 15:06:31.257884 systemd[1]: Started sshd@1-10.200.20.11:22-10.200.16.10:39032.service - OpenSSH per-connection server daemon (10.200.16.10:39032). Nov 5 15:06:31.712738 sshd[2468]: Accepted publickey for core from 10.200.16.10 port 39032 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:06:31.713835 sshd-session[2468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:06:31.717295 systemd-logind[2029]: New session 4 of user core. Nov 5 15:06:31.724537 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 15:06:32.039873 sshd[2471]: Connection closed by 10.200.16.10 port 39032 Nov 5 15:06:32.040467 sshd-session[2468]: pam_unix(sshd:session): session closed for user core Nov 5 15:06:32.043949 systemd[1]: sshd@1-10.200.20.11:22-10.200.16.10:39032.service: Deactivated successfully. Nov 5 15:06:32.045824 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 15:06:32.046420 systemd-logind[2029]: Session 4 logged out. Waiting for processes to exit. Nov 5 15:06:32.047543 systemd-logind[2029]: Removed session 4. Nov 5 15:06:32.139113 systemd[1]: Started sshd@2-10.200.20.11:22-10.200.16.10:39040.service - OpenSSH per-connection server daemon (10.200.16.10:39040). Nov 5 15:06:32.485317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 5 15:06:32.487463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:06:32.579036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:06:32.587697 (kubelet)[2488]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:06:32.633799 sshd[2477]: Accepted publickey for core from 10.200.16.10 port 39040 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:06:32.634957 sshd-session[2477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:06:32.639047 systemd-logind[2029]: New session 5 of user core. Nov 5 15:06:32.657498 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 5 15:06:32.690199 kubelet[2488]: E1105 15:06:32.690134 2488 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:06:32.692291 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:06:32.692530 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:06:32.692943 systemd[1]: kubelet.service: Consumed 105ms CPU time, 107M memory peak. Nov 5 15:06:32.979074 sshd[2494]: Connection closed by 10.200.16.10 port 39040 Nov 5 15:06:32.978370 sshd-session[2477]: pam_unix(sshd:session): session closed for user core Nov 5 15:06:32.981870 systemd[1]: sshd@2-10.200.20.11:22-10.200.16.10:39040.service: Deactivated successfully. 
Nov 5 15:06:32.983306 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 15:06:32.983983 systemd-logind[2029]: Session 5 logged out. Waiting for processes to exit. Nov 5 15:06:32.985159 systemd-logind[2029]: Removed session 5. Nov 5 15:06:33.052791 systemd[1]: Started sshd@3-10.200.20.11:22-10.200.16.10:39048.service - OpenSSH per-connection server daemon (10.200.16.10:39048). Nov 5 15:06:33.506843 sshd[2501]: Accepted publickey for core from 10.200.16.10 port 39048 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:06:33.507884 sshd-session[2501]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:06:33.512915 systemd-logind[2029]: New session 6 of user core. Nov 5 15:06:33.518492 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 5 15:06:33.829506 sshd[2504]: Connection closed by 10.200.16.10 port 39048 Nov 5 15:06:33.829992 sshd-session[2501]: pam_unix(sshd:session): session closed for user core Nov 5 15:06:33.833908 systemd[1]: sshd@3-10.200.20.11:22-10.200.16.10:39048.service: Deactivated successfully. Nov 5 15:06:33.835407 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 15:06:33.836026 systemd-logind[2029]: Session 6 logged out. Waiting for processes to exit. Nov 5 15:06:33.837183 systemd-logind[2029]: Removed session 6. Nov 5 15:06:33.907826 systemd[1]: Started sshd@4-10.200.20.11:22-10.200.16.10:39058.service - OpenSSH per-connection server daemon (10.200.16.10:39058). Nov 5 15:06:34.324657 sshd[2510]: Accepted publickey for core from 10.200.16.10 port 39058 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:06:34.325640 sshd-session[2510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:06:34.329404 systemd-logind[2029]: New session 7 of user core. Nov 5 15:06:34.335497 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 5 15:06:34.725462 sudo[2514]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 15:06:34.725686 sudo[2514]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:06:34.754640 sudo[2514]: pam_unix(sudo:session): session closed for user root Nov 5 15:06:34.820638 sshd[2513]: Connection closed by 10.200.16.10 port 39058 Nov 5 15:06:34.821217 sshd-session[2510]: pam_unix(sshd:session): session closed for user core Nov 5 15:06:34.825308 systemd[1]: sshd@4-10.200.20.11:22-10.200.16.10:39058.service: Deactivated successfully. Nov 5 15:06:34.827214 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 15:06:34.828120 systemd-logind[2029]: Session 7 logged out. Waiting for processes to exit. Nov 5 15:06:34.829594 systemd-logind[2029]: Removed session 7. Nov 5 15:06:34.902586 systemd[1]: Started sshd@5-10.200.20.11:22-10.200.16.10:39066.service - OpenSSH per-connection server daemon (10.200.16.10:39066). Nov 5 15:06:35.357768 sshd[2520]: Accepted publickey for core from 10.200.16.10 port 39066 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:06:35.358813 sshd-session[2520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:06:35.363228 systemd-logind[2029]: New session 8 of user core. Nov 5 15:06:35.368494 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 5 15:06:35.613564 sudo[2525]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 15:06:35.613924 sudo[2525]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:06:35.620137 sudo[2525]: pam_unix(sudo:session): session closed for user root Nov 5 15:06:35.624342 sudo[2524]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 15:06:35.624664 sudo[2524]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:06:35.631745 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 15:06:35.660147 augenrules[2547]: No rules Nov 5 15:06:35.661315 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 15:06:35.663464 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 15:06:35.664812 sudo[2524]: pam_unix(sudo:session): session closed for user root Nov 5 15:06:35.736056 sshd[2523]: Connection closed by 10.200.16.10 port 39066 Nov 5 15:06:35.736049 sshd-session[2520]: pam_unix(sshd:session): session closed for user core Nov 5 15:06:35.739736 systemd-logind[2029]: Session 8 logged out. Waiting for processes to exit. Nov 5 15:06:35.739955 systemd[1]: sshd@5-10.200.20.11:22-10.200.16.10:39066.service: Deactivated successfully. Nov 5 15:06:35.741290 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 15:06:35.742807 systemd-logind[2029]: Removed session 8. Nov 5 15:06:35.819672 systemd[1]: Started sshd@6-10.200.20.11:22-10.200.16.10:39082.service - OpenSSH per-connection server daemon (10.200.16.10:39082). Nov 5 15:06:36.276722 sshd[2556]: Accepted publickey for core from 10.200.16.10 port 39082 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:06:36.277744 sshd-session[2556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:06:36.282181 systemd-logind[2029]: New session 9 of user core. Nov 5 15:06:36.288649 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 15:06:36.530995 sudo[2560]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 15:06:36.531194 sudo[2560]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:06:38.125510 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Nov 5 15:06:38.258660 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 5 15:06:38.266576 (dockerd)[2578]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 15:06:39.334110 dockerd[2578]: time="2025-11-05T15:06:39.334057451Z" level=info msg="Starting up" Nov 5 15:06:39.334711 dockerd[2578]: time="2025-11-05T15:06:39.334679792Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 15:06:39.343333 dockerd[2578]: time="2025-11-05T15:06:39.343302619Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 15:06:39.394899 dockerd[2578]: time="2025-11-05T15:06:39.394860827Z" level=info msg="Loading containers: start." Nov 5 15:06:39.468384 kernel: Initializing XFRM netlink socket Nov 5 15:06:40.028606 systemd-networkd[1661]: docker0: Link UP Nov 5 15:06:40.185748 dockerd[2578]: time="2025-11-05T15:06:40.185697024Z" level=info msg="Loading containers: done." 
Nov 5 15:06:40.195245 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck930982814-merged.mount: Deactivated successfully. Nov 5 15:06:40.206337 dockerd[2578]: time="2025-11-05T15:06:40.206306214Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 15:06:40.206401 dockerd[2578]: time="2025-11-05T15:06:40.206384032Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 15:06:40.206475 dockerd[2578]: time="2025-11-05T15:06:40.206460018Z" level=info msg="Initializing buildkit" Nov 5 15:06:40.246974 dockerd[2578]: time="2025-11-05T15:06:40.246942187Z" level=info msg="Completed buildkit initialization" Nov 5 15:06:40.252232 dockerd[2578]: time="2025-11-05T15:06:40.252189657Z" level=info msg="Daemon has completed initialization" Nov 5 15:06:40.252555 dockerd[2578]: time="2025-11-05T15:06:40.252382637Z" level=info msg="API listen on /run/docker.sock" Nov 5 15:06:40.252502 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 5 15:06:40.768559 containerd[2056]: time="2025-11-05T15:06:40.768487761Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 5 15:06:41.606883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount591773370.mount: Deactivated successfully. Nov 5 15:06:42.735278 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 5 15:06:42.737514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:06:42.825370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:06:42.834526 (kubelet)[2851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:06:42.859225 kubelet[2851]: E1105 15:06:42.859185 2851 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:06:42.861191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:06:42.861403 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:06:42.861853 systemd[1]: kubelet.service: Consumed 101ms CPU time, 106M memory peak. 
Nov 5 15:06:43.295190 containerd[2056]: time="2025-11-05T15:06:43.295134732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:43.298026 containerd[2056]: time="2025-11-05T15:06:43.297872005Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=24574510" Nov 5 15:06:43.300820 containerd[2056]: time="2025-11-05T15:06:43.300796348Z" level=info msg="ImageCreate event name:\"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:43.304696 containerd[2056]: time="2025-11-05T15:06:43.304655864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:43.305295 containerd[2056]: time="2025-11-05T15:06:43.305162180Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"24571109\" in 2.536207047s" Nov 5 15:06:43.305295 containerd[2056]: time="2025-11-05T15:06:43.305189501Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\"" Nov 5 15:06:43.305656 containerd[2056]: time="2025-11-05T15:06:43.305632671Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 5 15:06:44.394972 containerd[2056]: time="2025-11-05T15:06:44.394915718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:44.397655 containerd[2056]: time="2025-11-05T15:06:44.397624567Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=19132143" Nov 5 15:06:44.400576 containerd[2056]: time="2025-11-05T15:06:44.400547205Z" level=info msg="ImageCreate event name:\"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:44.404312 containerd[2056]: time="2025-11-05T15:06:44.404277646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:44.404785 containerd[2056]: time="2025-11-05T15:06:44.404746025Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"20720058\" in 1.099087881s" Nov 5 15:06:44.404785 containerd[2056]: time="2025-11-05T15:06:44.404775970Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\"" Nov 5 15:06:44.405269 containerd[2056]: 
time="2025-11-05T15:06:44.405245317Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 5 15:06:45.351389 containerd[2056]: time="2025-11-05T15:06:45.350838707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:45.354373 containerd[2056]: time="2025-11-05T15:06:45.354346783Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=14191884" Nov 5 15:06:45.357493 containerd[2056]: time="2025-11-05T15:06:45.357474698Z" level=info msg="ImageCreate event name:\"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:45.362511 containerd[2056]: time="2025-11-05T15:06:45.362478002Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"15779817\" in 957.206396ms" Nov 5 15:06:45.362511 containerd[2056]: time="2025-11-05T15:06:45.362509379Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\"" Nov 5 15:06:45.363077 containerd[2056]: time="2025-11-05T15:06:45.363057832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:45.363404 containerd[2056]: time="2025-11-05T15:06:45.363218188Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 5 15:06:45.641499 update_engine[2030]: I20251105 15:06:45.641391 2030 update_attempter.cc:509] Updating boot flags... Nov 5 15:06:46.408160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount18513471.mount: Deactivated successfully. 
Nov 5 15:06:46.586854 containerd[2056]: time="2025-11-05T15:06:46.586800185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:46.590159 containerd[2056]: time="2025-11-05T15:06:46.590133961Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=22789028" Nov 5 15:06:46.593097 containerd[2056]: time="2025-11-05T15:06:46.593071592Z" level=info msg="ImageCreate event name:\"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:46.597027 containerd[2056]: time="2025-11-05T15:06:46.596997285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:46.597567 containerd[2056]: time="2025-11-05T15:06:46.597228963Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"22788047\" in 1.233848868s" Nov 5 15:06:46.597567 containerd[2056]: time="2025-11-05T15:06:46.597251724Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\"" Nov 5 15:06:46.597706 containerd[2056]: time="2025-11-05T15:06:46.597680862Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 5 15:06:47.519908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2762853397.mount: Deactivated successfully. 
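The repo digests containerd records for each pull (for example the kube-proxy digest sha256:913cc8… above) can be listed afterwards from the image namespace the kubelet uses; a sketch, assuming the ctr binary from the same containerd installation is on the path:

    # list images and digests in the CRI namespace (k8s.io) used for these pulls
    ctr -n k8s.io images ls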
Nov 5 15:06:48.589077 containerd[2056]: time="2025-11-05T15:06:48.589014919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:48.591977 containerd[2056]: time="2025-11-05T15:06:48.591951889Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406" Nov 5 15:06:48.595031 containerd[2056]: time="2025-11-05T15:06:48.594994679Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:48.600445 containerd[2056]: time="2025-11-05T15:06:48.599850623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:48.600445 containerd[2056]: time="2025-11-05T15:06:48.600338785Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 2.002632355s" Nov 5 15:06:48.600445 containerd[2056]: time="2025-11-05T15:06:48.600372826Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Nov 5 15:06:48.601140 containerd[2056]: time="2025-11-05T15:06:48.601117925Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 5 15:06:49.141636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3114239000.mount: Deactivated successfully. 
Nov 5 15:06:49.160243 containerd[2056]: time="2025-11-05T15:06:49.160205441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:49.163587 containerd[2056]: time="2025-11-05T15:06:49.163561826Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Nov 5 15:06:49.170774 containerd[2056]: time="2025-11-05T15:06:49.170748830Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:49.174793 containerd[2056]: time="2025-11-05T15:06:49.174763960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:49.175318 containerd[2056]: time="2025-11-05T15:06:49.175018417Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 573.875451ms" Nov 5 15:06:49.175318 containerd[2056]: time="2025-11-05T15:06:49.175040922Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Nov 5 15:06:49.175564 containerd[2056]: time="2025-11-05T15:06:49.175546188Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 5 15:06:52.121472 containerd[2056]: time="2025-11-05T15:06:52.121412535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:52.126106 containerd[2056]: time="2025-11-05T15:06:52.126071112Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=97410766" Nov 5 15:06:52.143104 containerd[2056]: time="2025-11-05T15:06:52.143053831Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:52.148373 containerd[2056]: time="2025-11-05T15:06:52.148316573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:06:52.149535 containerd[2056]: time="2025-11-05T15:06:52.149507585Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 2.973886569s" Nov 5 15:06:52.149535 containerd[2056]: time="2025-11-05T15:06:52.149536522Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Nov 5 15:06:52.985507 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 5 15:06:52.988518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
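"Scheduled restart job, restart counter is at 5" above means kubelet.service has already failed and been restarted several times. On a systemd host like this one, the usual way to see the failure reason is the unit status and its recent journal entries:

    # inspect why kubelet.service keeps restarting
    systemctl status kubelet.service
    journalctl -u kubelet.service -n 50 --no-pager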
Nov 5 15:06:53.096518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:06:53.103648 (kubelet)[3063]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:06:53.132384 kubelet[3063]: E1105 15:06:53.131335 3063 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:06:53.134257 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:06:53.134354 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:06:53.134897 systemd[1]: kubelet.service: Consumed 103ms CPU time, 106.4M memory peak. Nov 5 15:06:55.698880 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:06:55.699180 systemd[1]: kubelet.service: Consumed 103ms CPU time, 106.4M memory peak. Nov 5 15:06:55.701160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:06:55.721449 systemd[1]: Reload requested from client PID 3077 ('systemctl') (unit session-9.scope)... Nov 5 15:06:55.721462 systemd[1]: Reloading... Nov 5 15:06:55.810393 zram_generator::config[3124]: No configuration found. Nov 5 15:06:55.963907 systemd[1]: Reloading finished in 242 ms. Nov 5 15:06:56.005701 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 15:06:56.005758 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 5 15:06:56.005940 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:06:56.005974 systemd[1]: kubelet.service: Consumed 73ms CPU time, 95M memory peak. Nov 5 15:06:56.007079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:06:56.249516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:06:56.258623 (kubelet)[3191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:06:56.284131 kubelet[3191]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:06:56.284131 kubelet[3191]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:06:56.381415 kubelet[3191]: I1105 15:06:56.380460 3191 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:06:56.966166 kubelet[3191]: I1105 15:06:56.966026 3191 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 5 15:06:56.966166 kubelet[3191]: I1105 15:06:56.966055 3191 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:06:56.967339 kubelet[3191]: I1105 15:06:56.967269 3191 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 5 15:06:56.967339 kubelet[3191]: I1105 15:06:56.967292 3191 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
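The failure above is explicit: /var/lib/kubelet/config.yaml does not exist yet; it is normally written during cluster bootstrap (for example by kubeadm init/join). A minimal sketch of such a file, using only settings that later appear in this log (systemd cgroup driver, static pods read from /etc/kubernetes/manifests); the real file on this node is not shown:

    # /var/lib/kubelet/config.yaml (illustrative only, not the node's actual file)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests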
Nov 5 15:06:56.967890 kubelet[3191]: I1105 15:06:56.967734 3191 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 15:06:56.977144 kubelet[3191]: E1105 15:06:56.977113 3191 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 15:06:56.978143 kubelet[3191]: I1105 15:06:56.978013 3191 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:06:56.981076 kubelet[3191]: I1105 15:06:56.981038 3191 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:06:56.983648 kubelet[3191]: I1105 15:06:56.983578 3191 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 5 15:06:56.983879 kubelet[3191]: I1105 15:06:56.983859 3191 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:06:56.984046 kubelet[3191]: I1105 15:06:56.983930 3191 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.1-a-05c7a88322","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:06:56.984160 kubelet[3191]: I1105 15:06:56.984149 3191 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 15:06:56.984207 kubelet[3191]: I1105 15:06:56.984200 3191 container_manager_linux.go:306] "Creating device plugin manager" Nov 5 15:06:56.984334 kubelet[3191]: I1105 15:06:56.984322 3191 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 5 15:06:56.989921 kubelet[3191]: I1105 15:06:56.989853 3191 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:06:56.991085 kubelet[3191]: I1105 15:06:56.991002 3191 kubelet.go:475] "Attempting to sync node with API server" Nov 5 
15:06:56.991085 kubelet[3191]: I1105 15:06:56.991026 3191 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:06:56.991645 kubelet[3191]: E1105 15:06:56.991527 3191 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.1-a-05c7a88322&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 15:06:56.991739 kubelet[3191]: I1105 15:06:56.991726 3191 kubelet.go:387] "Adding apiserver pod source" Nov 5 15:06:56.992568 kubelet[3191]: I1105 15:06:56.992549 3191 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:06:56.993370 kubelet[3191]: I1105 15:06:56.993295 3191 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:06:56.993770 kubelet[3191]: I1105 15:06:56.993722 3191 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 15:06:56.993770 kubelet[3191]: I1105 15:06:56.993746 3191 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 5 15:06:56.993770 kubelet[3191]: W1105 15:06:56.993774 3191 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 5 15:06:56.994725 kubelet[3191]: E1105 15:06:56.994679 3191 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 15:06:56.995849 kubelet[3191]: I1105 15:06:56.995786 3191 server.go:1262] "Started kubelet" Nov 5 15:06:56.996745 kubelet[3191]: I1105 15:06:56.996717 3191 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:06:56.999382 kubelet[3191]: I1105 15:06:56.998976 3191 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:06:56.999382 kubelet[3191]: I1105 15:06:56.999046 3191 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 5 15:06:56.999382 kubelet[3191]: I1105 15:06:56.999315 3191 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:06:57.000300 kubelet[3191]: I1105 15:06:57.000286 3191 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:06:57.001985 kubelet[3191]: I1105 15:06:57.001962 3191 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:06:57.002155 kubelet[3191]: I1105 15:06:57.002145 3191 server.go:310] "Adding debug handlers to kubelet server" Nov 5 15:06:57.003609 kubelet[3191]: I1105 15:06:57.003587 3191 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 5 15:06:57.004079 kubelet[3191]: E1105 15:06:57.004046 3191 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4487.0.1-a-05c7a88322\" not found" Nov 5 15:06:57.004976 kubelet[3191]: I1105 15:06:57.004621 3191 
desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 5 15:06:57.004976 kubelet[3191]: I1105 15:06:57.004667 3191 reconciler.go:29] "Reconciler: start to sync state" Nov 5 15:06:57.007336 kubelet[3191]: E1105 15:06:57.006485 3191 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487.0.1-a-05c7a88322.187524bb8516c72a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.1-a-05c7a88322,UID:ci-4487.0.1-a-05c7a88322,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.1-a-05c7a88322,},FirstTimestamp:2025-11-05 15:06:56.99576401 +0000 UTC m=+0.734688391,LastTimestamp:2025-11-05 15:06:56.99576401 +0000 UTC m=+0.734688391,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.1-a-05c7a88322,}" Nov 5 15:06:57.007731 kubelet[3191]: I1105 15:06:57.007713 3191 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:06:57.008417 kubelet[3191]: E1105 15:06:57.008393 3191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-a-05c7a88322?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="200ms" Nov 5 15:06:57.008600 kubelet[3191]: E1105 15:06:57.008586 3191 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 15:06:57.008813 kubelet[3191]: I1105 15:06:57.008792 3191 factory.go:223] Registration of the containerd container factory successfully Nov 5 15:06:57.008882 kubelet[3191]: I1105 15:06:57.008868 3191 factory.go:223] Registration of the systemd container factory successfully Nov 5 15:06:57.010837 kubelet[3191]: E1105 15:06:57.010816 3191 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 15:06:57.016844 kubelet[3191]: I1105 15:06:57.016816 3191 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:06:57.016844 kubelet[3191]: I1105 15:06:57.016827 3191 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:06:57.017277 kubelet[3191]: I1105 15:06:57.017088 3191 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:06:57.022094 kubelet[3191]: I1105 15:06:57.022075 3191 policy_none.go:49] "None policy: Start" Nov 5 15:06:57.022172 kubelet[3191]: I1105 15:06:57.022164 3191 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 5 15:06:57.022237 kubelet[3191]: I1105 15:06:57.022231 3191 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 5 15:06:57.026510 kubelet[3191]: I1105 15:06:57.026492 3191 policy_none.go:47] "Start" Nov 5 15:06:57.029930 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 15:06:57.038281 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 5 15:06:57.041301 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 5 15:06:57.050959 kubelet[3191]: E1105 15:06:57.050934 3191 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 15:06:57.051250 kubelet[3191]: I1105 15:06:57.051233 3191 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:06:57.051779 kubelet[3191]: I1105 15:06:57.051325 3191 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:06:57.051779 kubelet[3191]: I1105 15:06:57.051694 3191 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:06:57.054720 kubelet[3191]: E1105 15:06:57.054706 3191 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 15:06:57.054952 kubelet[3191]: E1105 15:06:57.054940 3191 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4487.0.1-a-05c7a88322\" not found" Nov 5 15:06:57.117176 kubelet[3191]: I1105 15:06:57.117144 3191 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 5 15:06:57.118136 kubelet[3191]: I1105 15:06:57.118119 3191 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 5 15:06:57.118228 kubelet[3191]: I1105 15:06:57.118219 3191 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 5 15:06:57.118299 kubelet[3191]: I1105 15:06:57.118290 3191 kubelet.go:2427] "Starting kubelet main sync loop" Nov 5 15:06:57.118646 kubelet[3191]: E1105 15:06:57.118417 3191 kubelet.go:2451] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Nov 5 15:06:57.119789 kubelet[3191]: E1105 15:06:57.119766 3191 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 15:06:57.156026 kubelet[3191]: I1105 15:06:57.155968 3191 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:06:57.156566 kubelet[3191]: E1105 15:06:57.156545 3191 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:06:57.209216 kubelet[3191]: E1105 15:06:57.209172 3191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-a-05c7a88322?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="400ms" Nov 5 15:06:57.231661 systemd[1]: Created slice kubepods-burstable-pod837c83f27dfa9ff989489c845854a1dd.slice - libcontainer container kubepods-burstable-pod837c83f27dfa9ff989489c845854a1dd.slice. Nov 5 15:06:57.238051 kubelet[3191]: E1105 15:06:57.238023 3191 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-a-05c7a88322\" not found" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:06:57.242839 systemd[1]: Created slice kubepods-burstable-pod928c24367b9b3a5ddae86347b223fd29.slice - libcontainer container kubepods-burstable-pod928c24367b9b3a5ddae86347b223fd29.slice. Nov 5 15:06:57.251378 kubelet[3191]: E1105 15:06:57.251317 3191 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-a-05c7a88322\" not found" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:06:57.253743 systemd[1]: Created slice kubepods-burstable-podab6bf2bdaec3f57cfbf8a812e5d13f12.slice - libcontainer container kubepods-burstable-podab6bf2bdaec3f57cfbf8a812e5d13f12.slice. 
Nov 5 15:06:57.255283 kubelet[3191]: E1105 15:06:57.255263 3191 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-a-05c7a88322\" not found" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:06:57.306662 kubelet[3191]: I1105 15:06:57.306622 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/837c83f27dfa9ff989489c845854a1dd-k8s-certs\") pod \"kube-apiserver-ci-4487.0.1-a-05c7a88322\" (UID: \"837c83f27dfa9ff989489c845854a1dd\") " pod="kube-system/kube-apiserver-ci-4487.0.1-a-05c7a88322" Nov 5 15:06:57.307090 kubelet[3191]: I1105 15:06:57.306705 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/928c24367b9b3a5ddae86347b223fd29-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.1-a-05c7a88322\" (UID: \"928c24367b9b3a5ddae86347b223fd29\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-a-05c7a88322" Nov 5 15:06:57.307090 kubelet[3191]: I1105 15:06:57.306720 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/928c24367b9b3a5ddae86347b223fd29-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.1-a-05c7a88322\" (UID: \"928c24367b9b3a5ddae86347b223fd29\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-a-05c7a88322" Nov 5 15:06:57.307090 kubelet[3191]: I1105 15:06:57.306731 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/928c24367b9b3a5ddae86347b223fd29-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.1-a-05c7a88322\" (UID: \"928c24367b9b3a5ddae86347b223fd29\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-a-05c7a88322" Nov 5 15:06:57.307090 kubelet[3191]: I1105 15:06:57.306740 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab6bf2bdaec3f57cfbf8a812e5d13f12-kubeconfig\") pod \"kube-scheduler-ci-4487.0.1-a-05c7a88322\" (UID: \"ab6bf2bdaec3f57cfbf8a812e5d13f12\") " pod="kube-system/kube-scheduler-ci-4487.0.1-a-05c7a88322" Nov 5 15:06:57.307323 kubelet[3191]: I1105 15:06:57.306749 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/837c83f27dfa9ff989489c845854a1dd-ca-certs\") pod \"kube-apiserver-ci-4487.0.1-a-05c7a88322\" (UID: \"837c83f27dfa9ff989489c845854a1dd\") " pod="kube-system/kube-apiserver-ci-4487.0.1-a-05c7a88322" Nov 5 15:06:57.307323 kubelet[3191]: I1105 15:06:57.307251 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/837c83f27dfa9ff989489c845854a1dd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.1-a-05c7a88322\" (UID: \"837c83f27dfa9ff989489c845854a1dd\") " pod="kube-system/kube-apiserver-ci-4487.0.1-a-05c7a88322" Nov 5 15:06:57.307323 kubelet[3191]: I1105 15:06:57.307262 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/928c24367b9b3a5ddae86347b223fd29-ca-certs\") pod \"kube-controller-manager-ci-4487.0.1-a-05c7a88322\" (UID: 
\"928c24367b9b3a5ddae86347b223fd29\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-a-05c7a88322" Nov 5 15:06:57.307323 kubelet[3191]: I1105 15:06:57.307301 3191 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/928c24367b9b3a5ddae86347b223fd29-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.1-a-05c7a88322\" (UID: \"928c24367b9b3a5ddae86347b223fd29\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-a-05c7a88322" Nov 5 15:06:57.359101 kubelet[3191]: I1105 15:06:57.359069 3191 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:06:57.359610 kubelet[3191]: E1105 15:06:57.359588 3191 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:06:57.610180 kubelet[3191]: E1105 15:06:57.610138 3191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-a-05c7a88322?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="800ms" Nov 5 15:06:57.761194 kubelet[3191]: I1105 15:06:57.761151 3191 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:06:57.761663 kubelet[3191]: E1105 15:06:57.761640 3191 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:06:57.902523 kubelet[3191]: E1105 15:06:57.902400 3191 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 15:06:58.488836 kubelet[3191]: E1105 15:06:58.365849 3191 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.1-a-05c7a88322&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 15:06:58.488836 kubelet[3191]: E1105 15:06:58.411332 3191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-a-05c7a88322?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="1.6s" Nov 5 15:06:58.537011 containerd[2056]: time="2025-11-05T15:06:58.536962318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.1-a-05c7a88322,Uid:837c83f27dfa9ff989489c845854a1dd,Namespace:kube-system,Attempt:0,}" Nov 5 15:06:58.545169 containerd[2056]: time="2025-11-05T15:06:58.545124609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.1-a-05c7a88322,Uid:928c24367b9b3a5ddae86347b223fd29,Namespace:kube-system,Attempt:0,}" Nov 5 15:06:58.551611 containerd[2056]: time="2025-11-05T15:06:58.551529019Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.1-a-05c7a88322,Uid:ab6bf2bdaec3f57cfbf8a812e5d13f12,Namespace:kube-system,Attempt:0,}" Nov 5 15:06:58.563131 kubelet[3191]: I1105 15:06:58.562886 3191 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:06:58.563220 kubelet[3191]: E1105 15:06:58.563191 3191 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:06:58.580836 kubelet[3191]: E1105 15:06:58.580794 3191 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.20.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 15:06:58.604853 kubelet[3191]: E1105 15:06:58.604813 3191 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 15:06:59.062158 kubelet[3191]: E1105 15:06:59.062113 3191 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 15:06:59.241552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3721029835.mount: Deactivated successfully. 
Nov 5 15:06:59.261394 containerd[2056]: time="2025-11-05T15:06:59.260918028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:06:59.269642 containerd[2056]: time="2025-11-05T15:06:59.269610324Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Nov 5 15:06:59.275435 containerd[2056]: time="2025-11-05T15:06:59.275401039Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:06:59.281762 containerd[2056]: time="2025-11-05T15:06:59.281727022Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:06:59.284861 containerd[2056]: time="2025-11-05T15:06:59.284831672Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 5 15:06:59.287979 containerd[2056]: time="2025-11-05T15:06:59.287629980Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:06:59.291390 containerd[2056]: time="2025-11-05T15:06:59.291347508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:06:59.291787 containerd[2056]: time="2025-11-05T15:06:59.291764446Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 737.372863ms" Nov 5 15:06:59.294402 containerd[2056]: time="2025-11-05T15:06:59.294376365Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 5 15:06:59.295349 containerd[2056]: time="2025-11-05T15:06:59.295324644Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 728.423729ms" Nov 5 15:06:59.295977 containerd[2056]: time="2025-11-05T15:06:59.295951931Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 750.139281ms" Nov 5 15:06:59.375522 containerd[2056]: time="2025-11-05T15:06:59.375349680Z" level=info msg="connecting to shim 35b05aca33cf05939b795b3349327d7718d390a2d760db1c170b39480ecd498d" address="unix:///run/containerd/s/897bd8a0aeb7138e2182099f40be69ff90d4f2897ec7a435e6498e886308a73e" namespace=k8s.io protocol=ttrpc version=3 Nov 5 
15:06:59.379092 containerd[2056]: time="2025-11-05T15:06:59.379049344Z" level=info msg="connecting to shim d9c331a9a0765cc5e7dd40f2c8209aa211c89dac1fa22361e31b2e6da060dc71" address="unix:///run/containerd/s/0f20a28762cf00faf520b3029c54cc4c447e7513a088e652982788cc9974f322" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:06:59.394498 systemd[1]: Started cri-containerd-35b05aca33cf05939b795b3349327d7718d390a2d760db1c170b39480ecd498d.scope - libcontainer container 35b05aca33cf05939b795b3349327d7718d390a2d760db1c170b39480ecd498d. Nov 5 15:06:59.401626 systemd[1]: Started cri-containerd-d9c331a9a0765cc5e7dd40f2c8209aa211c89dac1fa22361e31b2e6da060dc71.scope - libcontainer container d9c331a9a0765cc5e7dd40f2c8209aa211c89dac1fa22361e31b2e6da060dc71. Nov 5 15:06:59.434722 containerd[2056]: time="2025-11-05T15:06:59.434394733Z" level=info msg="connecting to shim 9b760132430bf47e8f0ad91a0998bdd53bd83065e2597f25cd847f0a11c4480a" address="unix:///run/containerd/s/e1851505dd0644e52b75dd3c571a0f1fb6463f06e093ff5d66eb8c2d9ae22e3a" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:06:59.440687 containerd[2056]: time="2025-11-05T15:06:59.440660691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.1-a-05c7a88322,Uid:837c83f27dfa9ff989489c845854a1dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9c331a9a0765cc5e7dd40f2c8209aa211c89dac1fa22361e31b2e6da060dc71\"" Nov 5 15:06:59.446576 containerd[2056]: time="2025-11-05T15:06:59.446434062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.1-a-05c7a88322,Uid:928c24367b9b3a5ddae86347b223fd29,Namespace:kube-system,Attempt:0,} returns sandbox id \"35b05aca33cf05939b795b3349327d7718d390a2d760db1c170b39480ecd498d\"" Nov 5 15:06:59.449549 containerd[2056]: time="2025-11-05T15:06:59.449525800Z" level=info msg="CreateContainer within sandbox \"d9c331a9a0765cc5e7dd40f2c8209aa211c89dac1fa22361e31b2e6da060dc71\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 15:06:59.458508 systemd[1]: Started cri-containerd-9b760132430bf47e8f0ad91a0998bdd53bd83065e2597f25cd847f0a11c4480a.scope - libcontainer container 9b760132430bf47e8f0ad91a0998bdd53bd83065e2597f25cd847f0a11c4480a. 
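With the shims connected and the sandbox ids returned above, the same state is visible through the CRI; a quick check, assuming crictl is configured for this containerd instance:

    crictl pods     # the control-plane pod sandboxes just created
    crictl ps -a    # containers created (or about to be created) inside them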
Nov 5 15:06:59.463385 containerd[2056]: time="2025-11-05T15:06:59.462193751Z" level=info msg="CreateContainer within sandbox \"35b05aca33cf05939b795b3349327d7718d390a2d760db1c170b39480ecd498d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 15:06:59.489163 containerd[2056]: time="2025-11-05T15:06:59.488491949Z" level=info msg="Container ec557f66788424dd7fc977a84aa97429515a58dd2eace097e89156e0336e4dec: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:06:59.495173 containerd[2056]: time="2025-11-05T15:06:59.495136300Z" level=info msg="Container bb1de01a5c1a33141cbde55d540a4b02c47cf6173135f29c6c511675e08efc76: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:06:59.507591 containerd[2056]: time="2025-11-05T15:06:59.507560941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.1-a-05c7a88322,Uid:ab6bf2bdaec3f57cfbf8a812e5d13f12,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b760132430bf47e8f0ad91a0998bdd53bd83065e2597f25cd847f0a11c4480a\"" Nov 5 15:06:59.508294 containerd[2056]: time="2025-11-05T15:06:59.508266982Z" level=info msg="CreateContainer within sandbox \"d9c331a9a0765cc5e7dd40f2c8209aa211c89dac1fa22361e31b2e6da060dc71\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ec557f66788424dd7fc977a84aa97429515a58dd2eace097e89156e0336e4dec\"" Nov 5 15:06:59.509224 containerd[2056]: time="2025-11-05T15:06:59.509190076Z" level=info msg="StartContainer for \"ec557f66788424dd7fc977a84aa97429515a58dd2eace097e89156e0336e4dec\"" Nov 5 15:06:59.510776 containerd[2056]: time="2025-11-05T15:06:59.510750034Z" level=info msg="connecting to shim ec557f66788424dd7fc977a84aa97429515a58dd2eace097e89156e0336e4dec" address="unix:///run/containerd/s/0f20a28762cf00faf520b3029c54cc4c447e7513a088e652982788cc9974f322" protocol=ttrpc version=3 Nov 5 15:06:59.517172 containerd[2056]: time="2025-11-05T15:06:59.517139347Z" level=info msg="CreateContainer within sandbox \"9b760132430bf47e8f0ad91a0998bdd53bd83065e2597f25cd847f0a11c4480a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 15:06:59.525658 containerd[2056]: time="2025-11-05T15:06:59.525608541Z" level=info msg="CreateContainer within sandbox \"35b05aca33cf05939b795b3349327d7718d390a2d760db1c170b39480ecd498d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bb1de01a5c1a33141cbde55d540a4b02c47cf6173135f29c6c511675e08efc76\"" Nov 5 15:06:59.528496 systemd[1]: Started cri-containerd-ec557f66788424dd7fc977a84aa97429515a58dd2eace097e89156e0336e4dec.scope - libcontainer container ec557f66788424dd7fc977a84aa97429515a58dd2eace097e89156e0336e4dec. 
Nov 5 15:06:59.529684 containerd[2056]: time="2025-11-05T15:06:59.528602533Z" level=info msg="StartContainer for \"bb1de01a5c1a33141cbde55d540a4b02c47cf6173135f29c6c511675e08efc76\"" Nov 5 15:06:59.530970 containerd[2056]: time="2025-11-05T15:06:59.530942053Z" level=info msg="connecting to shim bb1de01a5c1a33141cbde55d540a4b02c47cf6173135f29c6c511675e08efc76" address="unix:///run/containerd/s/897bd8a0aeb7138e2182099f40be69ff90d4f2897ec7a435e6498e886308a73e" protocol=ttrpc version=3 Nov 5 15:06:59.547883 containerd[2056]: time="2025-11-05T15:06:59.547850106Z" level=info msg="Container ac14a454136cd3a7c6921a58ae03d3c6aa53a55f2a1f94508ee0007f0396bb07: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:06:59.551524 systemd[1]: Started cri-containerd-bb1de01a5c1a33141cbde55d540a4b02c47cf6173135f29c6c511675e08efc76.scope - libcontainer container bb1de01a5c1a33141cbde55d540a4b02c47cf6173135f29c6c511675e08efc76. Nov 5 15:06:59.563316 containerd[2056]: time="2025-11-05T15:06:59.563198233Z" level=info msg="CreateContainer within sandbox \"9b760132430bf47e8f0ad91a0998bdd53bd83065e2597f25cd847f0a11c4480a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ac14a454136cd3a7c6921a58ae03d3c6aa53a55f2a1f94508ee0007f0396bb07\"" Nov 5 15:06:59.564105 containerd[2056]: time="2025-11-05T15:06:59.564070574Z" level=info msg="StartContainer for \"ac14a454136cd3a7c6921a58ae03d3c6aa53a55f2a1f94508ee0007f0396bb07\"" Nov 5 15:06:59.565941 containerd[2056]: time="2025-11-05T15:06:59.565915226Z" level=info msg="connecting to shim ac14a454136cd3a7c6921a58ae03d3c6aa53a55f2a1f94508ee0007f0396bb07" address="unix:///run/containerd/s/e1851505dd0644e52b75dd3c571a0f1fb6463f06e093ff5d66eb8c2d9ae22e3a" protocol=ttrpc version=3 Nov 5 15:06:59.579167 containerd[2056]: time="2025-11-05T15:06:59.578889881Z" level=info msg="StartContainer for \"ec557f66788424dd7fc977a84aa97429515a58dd2eace097e89156e0336e4dec\" returns successfully" Nov 5 15:06:59.585000 kubelet[3191]: E1105 15:06:59.584954 3191 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 15:06:59.588519 systemd[1]: Started cri-containerd-ac14a454136cd3a7c6921a58ae03d3c6aa53a55f2a1f94508ee0007f0396bb07.scope - libcontainer container ac14a454136cd3a7c6921a58ae03d3c6aa53a55f2a1f94508ee0007f0396bb07. 
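Once StartContainer returns, each control-plane container runs under its cri-containerd-….scope unit shown above and is tracked by the kubelet's sync loop. Its output can be followed by container id taken straight from the log, assuming the container is still present:

    # follow the kube-apiserver container started above
    crictl logs ec557f66788424dd7fc977a84aa97429515a58dd2eace097e89156e0336e4dec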
Nov 5 15:06:59.621285 containerd[2056]: time="2025-11-05T15:06:59.621245271Z" level=info msg="StartContainer for \"bb1de01a5c1a33141cbde55d540a4b02c47cf6173135f29c6c511675e08efc76\" returns successfully" Nov 5 15:06:59.659350 containerd[2056]: time="2025-11-05T15:06:59.659241877Z" level=info msg="StartContainer for \"ac14a454136cd3a7c6921a58ae03d3c6aa53a55f2a1f94508ee0007f0396bb07\" returns successfully" Nov 5 15:07:00.131296 kubelet[3191]: E1105 15:07:00.131262 3191 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-a-05c7a88322\" not found" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:00.135616 kubelet[3191]: E1105 15:07:00.135509 3191 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-a-05c7a88322\" not found" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:00.136761 kubelet[3191]: E1105 15:07:00.136742 3191 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-a-05c7a88322\" not found" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:00.165412 kubelet[3191]: I1105 15:07:00.165386 3191 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:01.143731 kubelet[3191]: E1105 15:07:01.143649 3191 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-a-05c7a88322\" not found" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:01.144331 kubelet[3191]: E1105 15:07:01.144305 3191 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-a-05c7a88322\" not found" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:01.183704 kubelet[3191]: E1105 15:07:01.183665 3191 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4487.0.1-a-05c7a88322\" not found" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:01.264822 kubelet[3191]: I1105 15:07:01.264750 3191 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:01.264822 kubelet[3191]: E1105 15:07:01.264822 3191 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4487.0.1-a-05c7a88322\": node \"ci-4487.0.1-a-05c7a88322\" not found" Nov 5 15:07:01.305278 kubelet[3191]: I1105 15:07:01.305229 3191 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:01.364282 kubelet[3191]: E1105 15:07:01.363901 3191 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.1-a-05c7a88322\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:01.364282 kubelet[3191]: I1105 15:07:01.363926 3191 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:01.366541 kubelet[3191]: E1105 15:07:01.366466 3191 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.1-a-05c7a88322\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:01.366541 kubelet[3191]: I1105 15:07:01.366486 3191 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.1-a-05c7a88322" 
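The repeated "no PriorityClass with name system-node-critical was found" errors above resolve themselves: system-node-critical and system-cluster-critical are built-in priority classes that the API server recreates shortly after it starts serving, after which the kubelet can create the mirror pods. Once a working kubeconfig is available this can be confirmed with:

    kubectl get priorityclasses
    # expected to include system-cluster-critical and system-node-critical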
Nov 5 15:07:01.368089 kubelet[3191]: E1105 15:07:01.368051 3191 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.1-a-05c7a88322\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:01.996102 kubelet[3191]: I1105 15:07:01.995846 3191 apiserver.go:52] "Watching apiserver" Nov 5 15:07:02.005087 kubelet[3191]: I1105 15:07:02.005065 3191 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 5 15:07:03.861049 kubelet[3191]: I1105 15:07:03.860884 3191 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:03.907120 kubelet[3191]: I1105 15:07:03.906969 3191 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 15:07:03.935436 kubelet[3191]: I1105 15:07:03.935405 3191 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:03.953165 kubelet[3191]: I1105 15:07:03.953127 3191 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 15:07:05.817905 systemd[1]: Reload requested from client PID 3477 ('systemctl') (unit session-9.scope)... Nov 5 15:07:05.817921 systemd[1]: Reloading... Nov 5 15:07:05.900390 zram_generator::config[3528]: No configuration found. Nov 5 15:07:05.903009 kubelet[3191]: I1105 15:07:05.902978 3191 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:05.911661 kubelet[3191]: I1105 15:07:05.911627 3191 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 15:07:06.080664 systemd[1]: Reloading finished in 262 ms. Nov 5 15:07:06.102107 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:07:06.109652 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 15:07:06.110044 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:07:06.110100 systemd[1]: kubelet.service: Consumed 953ms CPU time, 121.7M memory peak. Nov 5 15:07:06.112568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:07:06.217279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:07:06.226091 (kubelet)[3589]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:07:06.260012 kubelet[3589]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:07:06.260802 kubelet[3589]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
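The deprecation warnings repeated above (--pod-infra-container-image, --volume-plugin-dir) point at moving these settings off the kubelet command line: volumePluginDir is the corresponding KubeletConfiguration field, while the sandbox ("pause") image is now taken from the CRI runtime's own configuration rather than from the kubelet flag. An illustrative fragment, reusing the plugin directory the kubelet itself logs above:

    # addition to /var/lib/kubelet/config.yaml (illustrative only)
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/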
Nov 5 15:07:06.260802 kubelet[3589]: I1105 15:07:06.260415 3589 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:07:06.266380 kubelet[3589]: I1105 15:07:06.266342 3589 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 5 15:07:06.266460 kubelet[3589]: I1105 15:07:06.266451 3589 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:07:06.266535 kubelet[3589]: I1105 15:07:06.266527 3589 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 5 15:07:06.266586 kubelet[3589]: I1105 15:07:06.266575 3589 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 15:07:06.266805 kubelet[3589]: I1105 15:07:06.266787 3589 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 15:07:06.268016 kubelet[3589]: I1105 15:07:06.267701 3589 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 5 15:07:06.269903 kubelet[3589]: I1105 15:07:06.269881 3589 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:07:06.275236 kubelet[3589]: I1105 15:07:06.275192 3589 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:07:06.278322 kubelet[3589]: I1105 15:07:06.278258 3589 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 5 15:07:06.278575 kubelet[3589]: I1105 15:07:06.278553 3589 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:07:06.278753 kubelet[3589]: I1105 15:07:06.278630 3589 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.1-a-05c7a88322","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:07:06.278856 kubelet[3589]: I1105 15:07:06.278845 3589 topology_manager.go:138] "Creating topology manager with none 
policy" Nov 5 15:07:06.278894 kubelet[3589]: I1105 15:07:06.278888 3589 container_manager_linux.go:306] "Creating device plugin manager" Nov 5 15:07:06.278952 kubelet[3589]: I1105 15:07:06.278945 3589 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 5 15:07:06.279629 kubelet[3589]: I1105 15:07:06.279613 3589 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:07:06.279835 kubelet[3589]: I1105 15:07:06.279825 3589 kubelet.go:475] "Attempting to sync node with API server" Nov 5 15:07:06.279905 kubelet[3589]: I1105 15:07:06.279895 3589 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:07:06.279965 kubelet[3589]: I1105 15:07:06.279958 3589 kubelet.go:387] "Adding apiserver pod source" Nov 5 15:07:06.280014 kubelet[3589]: I1105 15:07:06.280006 3589 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:07:06.283685 kubelet[3589]: I1105 15:07:06.283623 3589 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:07:06.284115 kubelet[3589]: I1105 15:07:06.284103 3589 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 15:07:06.284301 kubelet[3589]: I1105 15:07:06.284206 3589 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 5 15:07:06.286501 kubelet[3589]: I1105 15:07:06.286487 3589 server.go:1262] "Started kubelet" Nov 5 15:07:06.287101 kubelet[3589]: I1105 15:07:06.287062 3589 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:07:06.287757 kubelet[3589]: I1105 15:07:06.287712 3589 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:07:06.287998 kubelet[3589]: I1105 15:07:06.287976 3589 server.go:310] "Adding debug handlers to kubelet server" Nov 5 15:07:06.291294 kubelet[3589]: I1105 15:07:06.291248 3589 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:07:06.291431 kubelet[3589]: I1105 15:07:06.291419 3589 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 5 15:07:06.291688 kubelet[3589]: I1105 15:07:06.291646 3589 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:07:06.292976 kubelet[3589]: I1105 15:07:06.292958 3589 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:07:06.300190 kubelet[3589]: I1105 15:07:06.300157 3589 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 5 15:07:06.300377 kubelet[3589]: I1105 15:07:06.300305 3589 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 5 15:07:06.300563 kubelet[3589]: I1105 15:07:06.300552 3589 reconciler.go:29] "Reconciler: start to sync state" Nov 5 15:07:06.300873 kubelet[3589]: E1105 15:07:06.300735 3589 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 15:07:06.301282 kubelet[3589]: I1105 15:07:06.301266 3589 factory.go:223] Registration of the systemd container factory successfully Nov 5 15:07:06.301487 kubelet[3589]: I1105 15:07:06.301470 3589 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:07:06.302720 kubelet[3589]: I1105 15:07:06.302622 3589 factory.go:223] Registration of the containerd container factory successfully Nov 5 15:07:06.316721 kubelet[3589]: I1105 15:07:06.316691 3589 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 5 15:07:06.320930 kubelet[3589]: I1105 15:07:06.320841 3589 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 5 15:07:06.320930 kubelet[3589]: I1105 15:07:06.320859 3589 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 5 15:07:06.320930 kubelet[3589]: I1105 15:07:06.320878 3589 kubelet.go:2427] "Starting kubelet main sync loop" Nov 5 15:07:06.320930 kubelet[3589]: E1105 15:07:06.320911 3589 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:07:06.349686 kubelet[3589]: I1105 15:07:06.349468 3589 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:07:06.349686 kubelet[3589]: I1105 15:07:06.349478 3589 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:07:06.349686 kubelet[3589]: I1105 15:07:06.349494 3589 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:07:06.349686 kubelet[3589]: I1105 15:07:06.349577 3589 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 15:07:06.349686 kubelet[3589]: I1105 15:07:06.349584 3589 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 15:07:06.349686 kubelet[3589]: I1105 15:07:06.349595 3589 policy_none.go:49] "None policy: Start" Nov 5 15:07:06.349686 kubelet[3589]: I1105 15:07:06.349600 3589 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 5 15:07:06.349686 kubelet[3589]: I1105 15:07:06.349614 3589 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 5 15:07:06.349686 kubelet[3589]: I1105 15:07:06.349674 3589 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 5 15:07:06.349686 kubelet[3589]: I1105 15:07:06.349679 3589 policy_none.go:47] "Start" Nov 5 15:07:06.354014 kubelet[3589]: E1105 15:07:06.353992 3589 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 15:07:06.354155 kubelet[3589]: I1105 15:07:06.354143 3589 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:07:06.354195 kubelet[3589]: I1105 15:07:06.354154 3589 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:07:06.355584 kubelet[3589]: I1105 15:07:06.355468 3589 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:07:06.360342 kubelet[3589]: E1105 15:07:06.359653 3589 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 5 15:07:06.422424 kubelet[3589]: I1105 15:07:06.422382 3589 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:06.422633 kubelet[3589]: I1105 15:07:06.422392 3589 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:06.422753 kubelet[3589]: I1105 15:07:06.422733 3589 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:06.430763 kubelet[3589]: I1105 15:07:06.430540 3589 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 15:07:06.431031 kubelet[3589]: E1105 15:07:06.430988 3589 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.1-a-05c7a88322\" already exists" pod="kube-system/kube-scheduler-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:06.431286 kubelet[3589]: I1105 15:07:06.431273 3589 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 15:07:06.431477 kubelet[3589]: E1105 15:07:06.431409 3589 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.1-a-05c7a88322\" already exists" pod="kube-system/kube-controller-manager-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:06.431625 kubelet[3589]: I1105 15:07:06.431567 3589 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 15:07:06.431725 kubelet[3589]: E1105 15:07:06.431714 3589 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.1-a-05c7a88322\" already exists" pod="kube-system/kube-apiserver-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:06.459967 kubelet[3589]: I1105 15:07:06.459923 3589 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:06.468408 kubelet[3589]: I1105 15:07:06.468387 3589 kubelet_node_status.go:124] "Node was previously registered" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:06.468483 kubelet[3589]: I1105 15:07:06.468443 3589 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:06.601660 kubelet[3589]: I1105 15:07:06.601563 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/928c24367b9b3a5ddae86347b223fd29-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.1-a-05c7a88322\" (UID: \"928c24367b9b3a5ddae86347b223fd29\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:06.601660 kubelet[3589]: I1105 15:07:06.601593 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab6bf2bdaec3f57cfbf8a812e5d13f12-kubeconfig\") pod \"kube-scheduler-ci-4487.0.1-a-05c7a88322\" (UID: \"ab6bf2bdaec3f57cfbf8a812e5d13f12\") " pod="kube-system/kube-scheduler-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:06.601660 kubelet[3589]: I1105 15:07:06.601608 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/928c24367b9b3a5ddae86347b223fd29-ca-certs\") pod \"kube-controller-manager-ci-4487.0.1-a-05c7a88322\" (UID: \"928c24367b9b3a5ddae86347b223fd29\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:06.601660 kubelet[3589]: I1105 15:07:06.601618 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/928c24367b9b3a5ddae86347b223fd29-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.1-a-05c7a88322\" (UID: \"928c24367b9b3a5ddae86347b223fd29\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:06.601660 kubelet[3589]: I1105 15:07:06.601630 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/928c24367b9b3a5ddae86347b223fd29-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.1-a-05c7a88322\" (UID: \"928c24367b9b3a5ddae86347b223fd29\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:06.602086 kubelet[3589]: I1105 15:07:06.601640 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/837c83f27dfa9ff989489c845854a1dd-ca-certs\") pod \"kube-apiserver-ci-4487.0.1-a-05c7a88322\" (UID: \"837c83f27dfa9ff989489c845854a1dd\") " pod="kube-system/kube-apiserver-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:06.602086 kubelet[3589]: I1105 15:07:06.601655 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/837c83f27dfa9ff989489c845854a1dd-k8s-certs\") pod \"kube-apiserver-ci-4487.0.1-a-05c7a88322\" (UID: \"837c83f27dfa9ff989489c845854a1dd\") " pod="kube-system/kube-apiserver-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:06.602086 kubelet[3589]: I1105 15:07:06.601663 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/837c83f27dfa9ff989489c845854a1dd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.1-a-05c7a88322\" (UID: \"837c83f27dfa9ff989489c845854a1dd\") " pod="kube-system/kube-apiserver-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:06.602086 kubelet[3589]: I1105 15:07:06.601678 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/928c24367b9b3a5ddae86347b223fd29-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.1-a-05c7a88322\" (UID: \"928c24367b9b3a5ddae86347b223fd29\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:07.283260 kubelet[3589]: I1105 15:07:07.283221 3589 apiserver.go:52] "Watching apiserver" Nov 5 15:07:07.300535 kubelet[3589]: I1105 15:07:07.300502 3589 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 5 15:07:07.346568 kubelet[3589]: I1105 15:07:07.346542 3589 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:07.353374 kubelet[3589]: I1105 15:07:07.353320 3589 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 15:07:07.353722 kubelet[3589]: E1105 
15:07:07.353557 3589 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.1-a-05c7a88322\" already exists" pod="kube-system/kube-apiserver-ci-4487.0.1-a-05c7a88322" Nov 5 15:07:07.374649 kubelet[3589]: I1105 15:07:07.374603 3589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4487.0.1-a-05c7a88322" podStartSLOduration=4.374585735 podStartE2EDuration="4.374585735s" podCreationTimestamp="2025-11-05 15:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:07:07.365701405 +0000 UTC m=+1.135648658" watchObservedRunningTime="2025-11-05 15:07:07.374585735 +0000 UTC m=+1.144532980" Nov 5 15:07:07.384559 kubelet[3589]: I1105 15:07:07.384453 3589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4487.0.1-a-05c7a88322" podStartSLOduration=2.384444993 podStartE2EDuration="2.384444993s" podCreationTimestamp="2025-11-05 15:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:07:07.375450668 +0000 UTC m=+1.145397913" watchObservedRunningTime="2025-11-05 15:07:07.384444993 +0000 UTC m=+1.154392238" Nov 5 15:07:07.392905 kubelet[3589]: I1105 15:07:07.392867 3589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4487.0.1-a-05c7a88322" podStartSLOduration=4.392857735 podStartE2EDuration="4.392857735s" podCreationTimestamp="2025-11-05 15:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:07:07.38465227 +0000 UTC m=+1.154599539" watchObservedRunningTime="2025-11-05 15:07:07.392857735 +0000 UTC m=+1.162804980" Nov 5 15:07:10.995940 kubelet[3589]: I1105 15:07:10.995629 3589 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 15:07:10.996518 containerd[2056]: time="2025-11-05T15:07:10.995924478Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 5 15:07:10.996874 kubelet[3589]: I1105 15:07:10.996630 3589 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 15:07:11.789133 systemd[1]: Created slice kubepods-besteffort-podacd30655_e0a8_4da7_ad0a_67d15860f8d4.slice - libcontainer container kubepods-besteffort-podacd30655_e0a8_4da7_ad0a_67d15860f8d4.slice. 
Nov 5 15:07:11.830981 kubelet[3589]: I1105 15:07:11.830952 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/acd30655-e0a8-4da7-ad0a-67d15860f8d4-kube-proxy\") pod \"kube-proxy-wj4bk\" (UID: \"acd30655-e0a8-4da7-ad0a-67d15860f8d4\") " pod="kube-system/kube-proxy-wj4bk" Nov 5 15:07:11.830981 kubelet[3589]: I1105 15:07:11.830977 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acd30655-e0a8-4da7-ad0a-67d15860f8d4-xtables-lock\") pod \"kube-proxy-wj4bk\" (UID: \"acd30655-e0a8-4da7-ad0a-67d15860f8d4\") " pod="kube-system/kube-proxy-wj4bk" Nov 5 15:07:11.830981 kubelet[3589]: I1105 15:07:11.830990 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acd30655-e0a8-4da7-ad0a-67d15860f8d4-lib-modules\") pod \"kube-proxy-wj4bk\" (UID: \"acd30655-e0a8-4da7-ad0a-67d15860f8d4\") " pod="kube-system/kube-proxy-wj4bk" Nov 5 15:07:11.831142 kubelet[3589]: I1105 15:07:11.831011 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f6k4\" (UniqueName: \"kubernetes.io/projected/acd30655-e0a8-4da7-ad0a-67d15860f8d4-kube-api-access-4f6k4\") pod \"kube-proxy-wj4bk\" (UID: \"acd30655-e0a8-4da7-ad0a-67d15860f8d4\") " pod="kube-system/kube-proxy-wj4bk" Nov 5 15:07:12.104817 containerd[2056]: time="2025-11-05T15:07:12.104640839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wj4bk,Uid:acd30655-e0a8-4da7-ad0a-67d15860f8d4,Namespace:kube-system,Attempt:0,}" Nov 5 15:07:12.145452 containerd[2056]: time="2025-11-05T15:07:12.145332052Z" level=info msg="connecting to shim 4f04acd3a6ee96a120d0abe088c14f7bd403fb70c6df37bd1ef553563ab799f7" address="unix:///run/containerd/s/e86609e17e760697c4cc906f7a16c9c8b0e046e83a30c7d2368f8540b971eadf" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:07:12.167584 systemd[1]: Started cri-containerd-4f04acd3a6ee96a120d0abe088c14f7bd403fb70c6df37bd1ef553563ab799f7.scope - libcontainer container 4f04acd3a6ee96a120d0abe088c14f7bd403fb70c6df37bd1ef553563ab799f7. Nov 5 15:07:12.207679 containerd[2056]: time="2025-11-05T15:07:12.207642029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wj4bk,Uid:acd30655-e0a8-4da7-ad0a-67d15860f8d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f04acd3a6ee96a120d0abe088c14f7bd403fb70c6df37bd1ef553563ab799f7\"" Nov 5 15:07:12.211518 systemd[1]: Created slice kubepods-besteffort-pod4843eb85_e9cd_40d1_8319_13b9e06ba34e.slice - libcontainer container kubepods-besteffort-pod4843eb85_e9cd_40d1_8319_13b9e06ba34e.slice. 
Nov 5 15:07:12.218933 containerd[2056]: time="2025-11-05T15:07:12.218882708Z" level=info msg="CreateContainer within sandbox \"4f04acd3a6ee96a120d0abe088c14f7bd403fb70c6df37bd1ef553563ab799f7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 15:07:12.234068 kubelet[3589]: I1105 15:07:12.234046 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4843eb85-e9cd-40d1-8319-13b9e06ba34e-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-7ztmf\" (UID: \"4843eb85-e9cd-40d1-8319-13b9e06ba34e\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-7ztmf" Nov 5 15:07:12.235410 kubelet[3589]: I1105 15:07:12.234288 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxwhk\" (UniqueName: \"kubernetes.io/projected/4843eb85-e9cd-40d1-8319-13b9e06ba34e-kube-api-access-mxwhk\") pod \"tigera-operator-65cdcdfd6d-7ztmf\" (UID: \"4843eb85-e9cd-40d1-8319-13b9e06ba34e\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-7ztmf" Nov 5 15:07:12.239374 containerd[2056]: time="2025-11-05T15:07:12.239105395Z" level=info msg="Container a4fce5b3fb76415cc70538d4080be8f6a573dd3498115806c93e8e1ee147d8fc: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:07:12.256821 containerd[2056]: time="2025-11-05T15:07:12.256780018Z" level=info msg="CreateContainer within sandbox \"4f04acd3a6ee96a120d0abe088c14f7bd403fb70c6df37bd1ef553563ab799f7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a4fce5b3fb76415cc70538d4080be8f6a573dd3498115806c93e8e1ee147d8fc\"" Nov 5 15:07:12.257263 containerd[2056]: time="2025-11-05T15:07:12.257245769Z" level=info msg="StartContainer for \"a4fce5b3fb76415cc70538d4080be8f6a573dd3498115806c93e8e1ee147d8fc\"" Nov 5 15:07:12.258353 containerd[2056]: time="2025-11-05T15:07:12.258307666Z" level=info msg="connecting to shim a4fce5b3fb76415cc70538d4080be8f6a573dd3498115806c93e8e1ee147d8fc" address="unix:///run/containerd/s/e86609e17e760697c4cc906f7a16c9c8b0e046e83a30c7d2368f8540b971eadf" protocol=ttrpc version=3 Nov 5 15:07:12.275472 systemd[1]: Started cri-containerd-a4fce5b3fb76415cc70538d4080be8f6a573dd3498115806c93e8e1ee147d8fc.scope - libcontainer container a4fce5b3fb76415cc70538d4080be8f6a573dd3498115806c93e8e1ee147d8fc. Nov 5 15:07:12.305192 containerd[2056]: time="2025-11-05T15:07:12.305164208Z" level=info msg="StartContainer for \"a4fce5b3fb76415cc70538d4080be8f6a573dd3498115806c93e8e1ee147d8fc\" returns successfully" Nov 5 15:07:12.519885 containerd[2056]: time="2025-11-05T15:07:12.519809802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-7ztmf,Uid:4843eb85-e9cd-40d1-8319-13b9e06ba34e,Namespace:tigera-operator,Attempt:0,}" Nov 5 15:07:12.562533 containerd[2056]: time="2025-11-05T15:07:12.562495559Z" level=info msg="connecting to shim c8ef54f7f04499e3fd3c392f2e8cb13efffc651a7e048011f31c091c2c3c5a55" address="unix:///run/containerd/s/8c3ef9c894eaced54523d40e3dca53fcd0ad7e2bacdcb3376cb12a4cf9c76328" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:07:12.582482 systemd[1]: Started cri-containerd-c8ef54f7f04499e3fd3c392f2e8cb13efffc651a7e048011f31c091c2c3c5a55.scope - libcontainer container c8ef54f7f04499e3fd3c392f2e8cb13efffc651a7e048011f31c091c2c3c5a55. 
Nov 5 15:07:12.608865 containerd[2056]: time="2025-11-05T15:07:12.608826180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-7ztmf,Uid:4843eb85-e9cd-40d1-8319-13b9e06ba34e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c8ef54f7f04499e3fd3c392f2e8cb13efffc651a7e048011f31c091c2c3c5a55\"" Nov 5 15:07:12.610424 containerd[2056]: time="2025-11-05T15:07:12.610400790Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 5 15:07:14.176238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2145747183.mount: Deactivated successfully. Nov 5 15:07:14.672338 containerd[2056]: time="2025-11-05T15:07:14.672284317Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:07:14.675708 containerd[2056]: time="2025-11-05T15:07:14.675681423Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 5 15:07:14.682010 containerd[2056]: time="2025-11-05T15:07:14.681965988Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:07:14.686170 containerd[2056]: time="2025-11-05T15:07:14.686132494Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:07:14.686699 containerd[2056]: time="2025-11-05T15:07:14.686430799Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.076006009s" Nov 5 15:07:14.686699 containerd[2056]: time="2025-11-05T15:07:14.686457816Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 5 15:07:14.694774 containerd[2056]: time="2025-11-05T15:07:14.694745250Z" level=info msg="CreateContainer within sandbox \"c8ef54f7f04499e3fd3c392f2e8cb13efffc651a7e048011f31c091c2c3c5a55\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 5 15:07:14.714077 containerd[2056]: time="2025-11-05T15:07:14.713709738Z" level=info msg="Container b232f1bdfe03db2cfb89b0afea7c4f88e8a7ad359af386896946012670aa58ba: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:07:14.715601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2211686740.mount: Deactivated successfully. 
Nov 5 15:07:14.727268 containerd[2056]: time="2025-11-05T15:07:14.727237232Z" level=info msg="CreateContainer within sandbox \"c8ef54f7f04499e3fd3c392f2e8cb13efffc651a7e048011f31c091c2c3c5a55\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b232f1bdfe03db2cfb89b0afea7c4f88e8a7ad359af386896946012670aa58ba\"" Nov 5 15:07:14.728547 containerd[2056]: time="2025-11-05T15:07:14.728514968Z" level=info msg="StartContainer for \"b232f1bdfe03db2cfb89b0afea7c4f88e8a7ad359af386896946012670aa58ba\"" Nov 5 15:07:14.730227 containerd[2056]: time="2025-11-05T15:07:14.730186724Z" level=info msg="connecting to shim b232f1bdfe03db2cfb89b0afea7c4f88e8a7ad359af386896946012670aa58ba" address="unix:///run/containerd/s/8c3ef9c894eaced54523d40e3dca53fcd0ad7e2bacdcb3376cb12a4cf9c76328" protocol=ttrpc version=3 Nov 5 15:07:14.744490 systemd[1]: Started cri-containerd-b232f1bdfe03db2cfb89b0afea7c4f88e8a7ad359af386896946012670aa58ba.scope - libcontainer container b232f1bdfe03db2cfb89b0afea7c4f88e8a7ad359af386896946012670aa58ba. Nov 5 15:07:14.769547 containerd[2056]: time="2025-11-05T15:07:14.769508527Z" level=info msg="StartContainer for \"b232f1bdfe03db2cfb89b0afea7c4f88e8a7ad359af386896946012670aa58ba\" returns successfully" Nov 5 15:07:15.268204 kubelet[3589]: I1105 15:07:15.268000 3589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wj4bk" podStartSLOduration=4.267987499 podStartE2EDuration="4.267987499s" podCreationTimestamp="2025-11-05 15:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:07:12.368713543 +0000 UTC m=+6.138660788" watchObservedRunningTime="2025-11-05 15:07:15.267987499 +0000 UTC m=+9.037934744" Nov 5 15:07:15.375577 kubelet[3589]: I1105 15:07:15.375501 3589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-7ztmf" podStartSLOduration=1.298332106 podStartE2EDuration="3.375473134s" podCreationTimestamp="2025-11-05 15:07:12 +0000 UTC" firstStartedPulling="2025-11-05 15:07:12.609973704 +0000 UTC m=+6.379920957" lastFinishedPulling="2025-11-05 15:07:14.68711474 +0000 UTC m=+8.457061985" observedRunningTime="2025-11-05 15:07:15.374848522 +0000 UTC m=+9.144795815" watchObservedRunningTime="2025-11-05 15:07:15.375473134 +0000 UTC m=+9.145420379" Nov 5 15:07:19.851754 sudo[2560]: pam_unix(sudo:session): session closed for user root Nov 5 15:07:19.924377 sshd[2559]: Connection closed by 10.200.16.10 port 39082 Nov 5 15:07:19.925574 sshd-session[2556]: pam_unix(sshd:session): session closed for user core Nov 5 15:07:19.929766 systemd[1]: sshd@6-10.200.20.11:22-10.200.16.10:39082.service: Deactivated successfully. Nov 5 15:07:19.933392 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 15:07:19.933657 systemd[1]: session-9.scope: Consumed 4.387s CPU time, 222M memory peak. Nov 5 15:07:19.935513 systemd-logind[2029]: Session 9 logged out. Waiting for processes to exit. Nov 5 15:07:19.936696 systemd-logind[2029]: Removed session 9. 
Nov 5 15:07:26.599335 kubelet[3589]: E1105 15:07:26.599281 3589 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4487.0.1-a-05c7a88322\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4487.0.1-a-05c7a88322' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"typha-certs\"" type="*v1.Secret" Nov 5 15:07:26.604672 systemd[1]: Created slice kubepods-besteffort-pod9a1934ee_c6a7_4c17_b8b2_8d0b2296aa78.slice - libcontainer container kubepods-besteffort-pod9a1934ee_c6a7_4c17_b8b2_8d0b2296aa78.slice. Nov 5 15:07:26.627315 kubelet[3589]: I1105 15:07:26.627262 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a1934ee-c6a7-4c17-b8b2-8d0b2296aa78-tigera-ca-bundle\") pod \"calico-typha-7dc759f956-khj5v\" (UID: \"9a1934ee-c6a7-4c17-b8b2-8d0b2296aa78\") " pod="calico-system/calico-typha-7dc759f956-khj5v" Nov 5 15:07:26.627315 kubelet[3589]: I1105 15:07:26.627293 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp7xj\" (UniqueName: \"kubernetes.io/projected/9a1934ee-c6a7-4c17-b8b2-8d0b2296aa78-kube-api-access-kp7xj\") pod \"calico-typha-7dc759f956-khj5v\" (UID: \"9a1934ee-c6a7-4c17-b8b2-8d0b2296aa78\") " pod="calico-system/calico-typha-7dc759f956-khj5v" Nov 5 15:07:26.627662 kubelet[3589]: I1105 15:07:26.627392 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9a1934ee-c6a7-4c17-b8b2-8d0b2296aa78-typha-certs\") pod \"calico-typha-7dc759f956-khj5v\" (UID: \"9a1934ee-c6a7-4c17-b8b2-8d0b2296aa78\") " pod="calico-system/calico-typha-7dc759f956-khj5v" Nov 5 15:07:26.725616 systemd[1]: Created slice kubepods-besteffort-pod4f00e1f1_d81e_417e_b7a7_8dc3256d1d09.slice - libcontainer container kubepods-besteffort-pod4f00e1f1_d81e_417e_b7a7_8dc3256d1d09.slice. 
Nov 5 15:07:26.829123 kubelet[3589]: I1105 15:07:26.828885 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4f00e1f1-d81e-417e-b7a7-8dc3256d1d09-cni-bin-dir\") pod \"calico-node-sq28p\" (UID: \"4f00e1f1-d81e-417e-b7a7-8dc3256d1d09\") " pod="calico-system/calico-node-sq28p" Nov 5 15:07:26.829123 kubelet[3589]: I1105 15:07:26.828931 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2r2t\" (UniqueName: \"kubernetes.io/projected/4f00e1f1-d81e-417e-b7a7-8dc3256d1d09-kube-api-access-m2r2t\") pod \"calico-node-sq28p\" (UID: \"4f00e1f1-d81e-417e-b7a7-8dc3256d1d09\") " pod="calico-system/calico-node-sq28p" Nov 5 15:07:26.829123 kubelet[3589]: I1105 15:07:26.828946 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f00e1f1-d81e-417e-b7a7-8dc3256d1d09-lib-modules\") pod \"calico-node-sq28p\" (UID: \"4f00e1f1-d81e-417e-b7a7-8dc3256d1d09\") " pod="calico-system/calico-node-sq28p" Nov 5 15:07:26.829123 kubelet[3589]: I1105 15:07:26.828956 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4f00e1f1-d81e-417e-b7a7-8dc3256d1d09-node-certs\") pod \"calico-node-sq28p\" (UID: \"4f00e1f1-d81e-417e-b7a7-8dc3256d1d09\") " pod="calico-system/calico-node-sq28p" Nov 5 15:07:26.829123 kubelet[3589]: I1105 15:07:26.828965 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4f00e1f1-d81e-417e-b7a7-8dc3256d1d09-policysync\") pod \"calico-node-sq28p\" (UID: \"4f00e1f1-d81e-417e-b7a7-8dc3256d1d09\") " pod="calico-system/calico-node-sq28p" Nov 5 15:07:26.829385 kubelet[3589]: I1105 15:07:26.828972 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f00e1f1-d81e-417e-b7a7-8dc3256d1d09-tigera-ca-bundle\") pod \"calico-node-sq28p\" (UID: \"4f00e1f1-d81e-417e-b7a7-8dc3256d1d09\") " pod="calico-system/calico-node-sq28p" Nov 5 15:07:26.829385 kubelet[3589]: I1105 15:07:26.828981 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4f00e1f1-d81e-417e-b7a7-8dc3256d1d09-var-lib-calico\") pod \"calico-node-sq28p\" (UID: \"4f00e1f1-d81e-417e-b7a7-8dc3256d1d09\") " pod="calico-system/calico-node-sq28p" Nov 5 15:07:26.829385 kubelet[3589]: I1105 15:07:26.828990 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4f00e1f1-d81e-417e-b7a7-8dc3256d1d09-flexvol-driver-host\") pod \"calico-node-sq28p\" (UID: \"4f00e1f1-d81e-417e-b7a7-8dc3256d1d09\") " pod="calico-system/calico-node-sq28p" Nov 5 15:07:26.829385 kubelet[3589]: I1105 15:07:26.828998 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f00e1f1-d81e-417e-b7a7-8dc3256d1d09-xtables-lock\") pod \"calico-node-sq28p\" (UID: \"4f00e1f1-d81e-417e-b7a7-8dc3256d1d09\") " pod="calico-system/calico-node-sq28p" Nov 5 15:07:26.829385 kubelet[3589]: I1105 15:07:26.829008 3589 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4f00e1f1-d81e-417e-b7a7-8dc3256d1d09-cni-log-dir\") pod \"calico-node-sq28p\" (UID: \"4f00e1f1-d81e-417e-b7a7-8dc3256d1d09\") " pod="calico-system/calico-node-sq28p" Nov 5 15:07:26.829466 kubelet[3589]: I1105 15:07:26.829017 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4f00e1f1-d81e-417e-b7a7-8dc3256d1d09-var-run-calico\") pod \"calico-node-sq28p\" (UID: \"4f00e1f1-d81e-417e-b7a7-8dc3256d1d09\") " pod="calico-system/calico-node-sq28p" Nov 5 15:07:26.829466 kubelet[3589]: I1105 15:07:26.829026 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4f00e1f1-d81e-417e-b7a7-8dc3256d1d09-cni-net-dir\") pod \"calico-node-sq28p\" (UID: \"4f00e1f1-d81e-417e-b7a7-8dc3256d1d09\") " pod="calico-system/calico-node-sq28p" Nov 5 15:07:26.912079 kubelet[3589]: E1105 15:07:26.911268 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nr498" podUID="5175072d-c97d-4e97-bbe9-4eb6c98f1e6a" Nov 5 15:07:26.930195 kubelet[3589]: I1105 15:07:26.929631 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5175072d-c97d-4e97-bbe9-4eb6c98f1e6a-registration-dir\") pod \"csi-node-driver-nr498\" (UID: \"5175072d-c97d-4e97-bbe9-4eb6c98f1e6a\") " pod="calico-system/csi-node-driver-nr498" Nov 5 15:07:26.930195 kubelet[3589]: I1105 15:07:26.929662 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5175072d-c97d-4e97-bbe9-4eb6c98f1e6a-varrun\") pod \"csi-node-driver-nr498\" (UID: \"5175072d-c97d-4e97-bbe9-4eb6c98f1e6a\") " pod="calico-system/csi-node-driver-nr498" Nov 5 15:07:26.930195 kubelet[3589]: I1105 15:07:26.929683 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7zvd\" (UniqueName: \"kubernetes.io/projected/5175072d-c97d-4e97-bbe9-4eb6c98f1e6a-kube-api-access-b7zvd\") pod \"csi-node-driver-nr498\" (UID: \"5175072d-c97d-4e97-bbe9-4eb6c98f1e6a\") " pod="calico-system/csi-node-driver-nr498" Nov 5 15:07:26.930195 kubelet[3589]: I1105 15:07:26.929727 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5175072d-c97d-4e97-bbe9-4eb6c98f1e6a-kubelet-dir\") pod \"csi-node-driver-nr498\" (UID: \"5175072d-c97d-4e97-bbe9-4eb6c98f1e6a\") " pod="calico-system/csi-node-driver-nr498" Nov 5 15:07:26.930195 kubelet[3589]: I1105 15:07:26.929743 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5175072d-c97d-4e97-bbe9-4eb6c98f1e6a-socket-dir\") pod \"csi-node-driver-nr498\" (UID: \"5175072d-c97d-4e97-bbe9-4eb6c98f1e6a\") " pod="calico-system/csi-node-driver-nr498" Nov 5 15:07:26.932885 kubelet[3589]: E1105 15:07:26.932834 3589 driver-call.go:262] Failed to unmarshal output for command: init, 
output: "", error: unexpected end of JSON input Nov 5 15:07:26.933068 kubelet[3589]: W1105 15:07:26.933052 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:26.933283 kubelet[3589]: E1105 15:07:26.933268 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:26.934496 kubelet[3589]: E1105 15:07:26.934479 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:26.934764 kubelet[3589]: W1105 15:07:26.934692 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:26.934764 kubelet[3589]: E1105 15:07:26.934714 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:26.935513 kubelet[3589]: E1105 15:07:26.935195 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:26.935605 kubelet[3589]: W1105 15:07:26.935589 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:26.935660 kubelet[3589]: E1105 15:07:26.935649 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:26.936433 kubelet[3589]: E1105 15:07:26.936417 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:26.938279 kubelet[3589]: W1105 15:07:26.938259 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:26.939370 kubelet[3589]: E1105 15:07:26.938374 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:26.941256 kubelet[3589]: E1105 15:07:26.941087 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:26.941256 kubelet[3589]: W1105 15:07:26.941103 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:26.941256 kubelet[3589]: E1105 15:07:26.941117 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:07:26.946693 kubelet[3589]: E1105 15:07:26.946674 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:26.947350 kubelet[3589]: W1105 15:07:26.947328 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:26.947494 kubelet[3589]: E1105 15:07:26.947481 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.031023 kubelet[3589]: E1105 15:07:27.030881 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.031023 kubelet[3589]: W1105 15:07:27.030905 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.031023 kubelet[3589]: E1105 15:07:27.030924 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.031317 kubelet[3589]: E1105 15:07:27.031302 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.031396 kubelet[3589]: W1105 15:07:27.031384 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.033157 kubelet[3589]: E1105 15:07:27.031424 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.033157 kubelet[3589]: E1105 15:07:27.031609 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.033157 kubelet[3589]: W1105 15:07:27.031618 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.033157 kubelet[3589]: E1105 15:07:27.031626 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.033157 kubelet[3589]: E1105 15:07:27.031780 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.033157 kubelet[3589]: W1105 15:07:27.031787 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.033157 kubelet[3589]: E1105 15:07:27.031794 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:07:27.033157 kubelet[3589]: E1105 15:07:27.031915 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.033157 kubelet[3589]: W1105 15:07:27.031921 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.033157 kubelet[3589]: E1105 15:07:27.031927 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.033336 kubelet[3589]: E1105 15:07:27.032077 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.033336 kubelet[3589]: W1105 15:07:27.032084 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.033336 kubelet[3589]: E1105 15:07:27.032090 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.033336 kubelet[3589]: E1105 15:07:27.032194 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.033336 kubelet[3589]: W1105 15:07:27.032200 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.033336 kubelet[3589]: E1105 15:07:27.032205 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.033336 kubelet[3589]: E1105 15:07:27.032303 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.033336 kubelet[3589]: W1105 15:07:27.032308 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.033336 kubelet[3589]: E1105 15:07:27.032313 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.033863 kubelet[3589]: E1105 15:07:27.033742 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.033863 kubelet[3589]: W1105 15:07:27.033758 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.033863 kubelet[3589]: E1105 15:07:27.033768 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:07:27.034021 kubelet[3589]: E1105 15:07:27.034009 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.034064 kubelet[3589]: W1105 15:07:27.034055 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.034101 kubelet[3589]: E1105 15:07:27.034092 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.034270 kubelet[3589]: E1105 15:07:27.034260 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.034440 kubelet[3589]: W1105 15:07:27.034326 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.034440 kubelet[3589]: E1105 15:07:27.034341 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.034901 kubelet[3589]: E1105 15:07:27.034890 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.034969 kubelet[3589]: W1105 15:07:27.034947 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.035042 kubelet[3589]: E1105 15:07:27.035032 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.035314 containerd[2056]: time="2025-11-05T15:07:27.035279551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sq28p,Uid:4f00e1f1-d81e-417e-b7a7-8dc3256d1d09,Namespace:calico-system,Attempt:0,}" Nov 5 15:07:27.035689 kubelet[3589]: E1105 15:07:27.035411 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.035689 kubelet[3589]: W1105 15:07:27.035421 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.035689 kubelet[3589]: E1105 15:07:27.035432 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:07:27.035845 kubelet[3589]: E1105 15:07:27.035827 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.035873 kubelet[3589]: W1105 15:07:27.035843 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.035873 kubelet[3589]: E1105 15:07:27.035854 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.036040 kubelet[3589]: E1105 15:07:27.036026 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.036040 kubelet[3589]: W1105 15:07:27.036037 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.036135 kubelet[3589]: E1105 15:07:27.036045 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.036204 kubelet[3589]: E1105 15:07:27.036178 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.036204 kubelet[3589]: W1105 15:07:27.036200 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.036244 kubelet[3589]: E1105 15:07:27.036208 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.036422 kubelet[3589]: E1105 15:07:27.036335 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.036422 kubelet[3589]: W1105 15:07:27.036354 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.036422 kubelet[3589]: E1105 15:07:27.036376 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.036525 kubelet[3589]: E1105 15:07:27.036511 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.036525 kubelet[3589]: W1105 15:07:27.036520 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.036659 kubelet[3589]: E1105 15:07:27.036527 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:07:27.036729 kubelet[3589]: E1105 15:07:27.036716 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.036758 kubelet[3589]: W1105 15:07:27.036725 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.036758 kubelet[3589]: E1105 15:07:27.036739 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.037235 kubelet[3589]: E1105 15:07:27.037201 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.037235 kubelet[3589]: W1105 15:07:27.037214 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.037235 kubelet[3589]: E1105 15:07:27.037225 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.038509 kubelet[3589]: E1105 15:07:27.038444 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.038509 kubelet[3589]: W1105 15:07:27.038458 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.038509 kubelet[3589]: E1105 15:07:27.038469 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.039143 kubelet[3589]: E1105 15:07:27.039123 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.039265 kubelet[3589]: W1105 15:07:27.039207 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.039410 kubelet[3589]: E1105 15:07:27.039353 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.040716 kubelet[3589]: E1105 15:07:27.040700 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.040916 kubelet[3589]: W1105 15:07:27.040783 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.040916 kubelet[3589]: E1105 15:07:27.040798 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:07:27.041194 kubelet[3589]: E1105 15:07:27.041183 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.041333 kubelet[3589]: W1105 15:07:27.041259 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.041333 kubelet[3589]: E1105 15:07:27.041273 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.041638 kubelet[3589]: E1105 15:07:27.041626 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.041757 kubelet[3589]: W1105 15:07:27.041720 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.041757 kubelet[3589]: E1105 15:07:27.041737 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.046903 kubelet[3589]: E1105 15:07:27.046847 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.046903 kubelet[3589]: W1105 15:07:27.046863 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.046903 kubelet[3589]: E1105 15:07:27.046876 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.076792 containerd[2056]: time="2025-11-05T15:07:27.076731468Z" level=info msg="connecting to shim 3790b4f496f0883ca10da141d52b935ce61b8363f69dab327517dede66be7e24" address="unix:///run/containerd/s/1a92ca590d7deef81841d03094d13c598b64ac8ed3d6c993c8e9e28aaf1a68f1" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:07:27.092502 systemd[1]: Started cri-containerd-3790b4f496f0883ca10da141d52b935ce61b8363f69dab327517dede66be7e24.scope - libcontainer container 3790b4f496f0883ca10da141d52b935ce61b8363f69dab327517dede66be7e24. Nov 5 15:07:27.117055 containerd[2056]: time="2025-11-05T15:07:27.116995181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sq28p,Uid:4f00e1f1-d81e-417e-b7a7-8dc3256d1d09,Namespace:calico-system,Attempt:0,} returns sandbox id \"3790b4f496f0883ca10da141d52b935ce61b8363f69dab327517dede66be7e24\"" Nov 5 15:07:27.119007 containerd[2056]: time="2025-11-05T15:07:27.118861162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 15:07:27.728438 kubelet[3589]: E1105 15:07:27.728409 3589 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Nov 5 15:07:27.728877 kubelet[3589]: E1105 15:07:27.728490 3589 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a1934ee-c6a7-4c17-b8b2-8d0b2296aa78-typha-certs podName:9a1934ee-c6a7-4c17-b8b2-8d0b2296aa78 nodeName:}" failed. 
No retries permitted until 2025-11-05 15:07:28.228469706 +0000 UTC m=+21.998416951 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/9a1934ee-c6a7-4c17-b8b2-8d0b2296aa78-typha-certs") pod "calico-typha-7dc759f956-khj5v" (UID: "9a1934ee-c6a7-4c17-b8b2-8d0b2296aa78") : failed to sync secret cache: timed out waiting for the condition Nov 5 15:07:27.736704 kubelet[3589]: E1105 15:07:27.736676 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.736704 kubelet[3589]: W1105 15:07:27.736695 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.736837 kubelet[3589]: E1105 15:07:27.736713 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.837477 kubelet[3589]: E1105 15:07:27.837388 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.837477 kubelet[3589]: W1105 15:07:27.837411 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.837477 kubelet[3589]: E1105 15:07:27.837431 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:27.938496 kubelet[3589]: E1105 15:07:27.938399 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:27.938496 kubelet[3589]: W1105 15:07:27.938423 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:27.938496 kubelet[3589]: E1105 15:07:27.938446 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:28.039919 kubelet[3589]: E1105 15:07:28.039835 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:28.039919 kubelet[3589]: W1105 15:07:28.039855 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:28.039919 kubelet[3589]: E1105 15:07:28.039871 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:07:28.141322 kubelet[3589]: E1105 15:07:28.141284 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:28.141322 kubelet[3589]: W1105 15:07:28.141304 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:28.141322 kubelet[3589]: E1105 15:07:28.141320 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:28.242130 kubelet[3589]: E1105 15:07:28.242049 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:28.242130 kubelet[3589]: W1105 15:07:28.242070 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:28.242130 kubelet[3589]: E1105 15:07:28.242088 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:28.242315 kubelet[3589]: E1105 15:07:28.242281 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:28.242315 kubelet[3589]: W1105 15:07:28.242288 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:28.242315 kubelet[3589]: E1105 15:07:28.242295 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:28.242425 kubelet[3589]: E1105 15:07:28.242413 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:28.242425 kubelet[3589]: W1105 15:07:28.242422 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:28.242479 kubelet[3589]: E1105 15:07:28.242427 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:28.242527 kubelet[3589]: E1105 15:07:28.242514 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:28.242527 kubelet[3589]: W1105 15:07:28.242522 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:28.242527 kubelet[3589]: E1105 15:07:28.242527 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:07:28.242649 kubelet[3589]: E1105 15:07:28.242638 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:28.242649 kubelet[3589]: W1105 15:07:28.242647 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:28.242692 kubelet[3589]: E1105 15:07:28.242652 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:28.247457 kubelet[3589]: E1105 15:07:28.247435 3589 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:07:28.247457 kubelet[3589]: W1105 15:07:28.247452 3589 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:07:28.247554 kubelet[3589]: E1105 15:07:28.247465 3589 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:07:28.391774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount782422706.mount: Deactivated successfully. Nov 5 15:07:28.420270 containerd[2056]: time="2025-11-05T15:07:28.420214142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7dc759f956-khj5v,Uid:9a1934ee-c6a7-4c17-b8b2-8d0b2296aa78,Namespace:calico-system,Attempt:0,}" Nov 5 15:07:28.474953 containerd[2056]: time="2025-11-05T15:07:28.474899572Z" level=info msg="connecting to shim 7ad835cb996d0c4aeefd0a3e9be3852aacfac2d81905a2901704e650b751cbb3" address="unix:///run/containerd/s/4ba593ff3123c7d00ba5e630175454fad894bf94e28d37665da17bd4476f2510" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:07:28.506531 systemd[1]: Started cri-containerd-7ad835cb996d0c4aeefd0a3e9be3852aacfac2d81905a2901704e650b751cbb3.scope - libcontainer container 7ad835cb996d0c4aeefd0a3e9be3852aacfac2d81905a2901704e650b751cbb3. 
Nov 5 15:07:28.542851 containerd[2056]: time="2025-11-05T15:07:28.542189298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:07:28.546689 containerd[2056]: time="2025-11-05T15:07:28.545677924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5636570" Nov 5 15:07:28.548671 containerd[2056]: time="2025-11-05T15:07:28.548281289Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:07:28.552173 containerd[2056]: time="2025-11-05T15:07:28.552137851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:07:28.553405 containerd[2056]: time="2025-11-05T15:07:28.553285246Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.434396548s" Nov 5 15:07:28.553405 containerd[2056]: time="2025-11-05T15:07:28.553314719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 5 15:07:28.562475 containerd[2056]: time="2025-11-05T15:07:28.562441502Z" level=info msg="CreateContainer within sandbox \"3790b4f496f0883ca10da141d52b935ce61b8363f69dab327517dede66be7e24\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 15:07:28.564045 containerd[2056]: time="2025-11-05T15:07:28.564018379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7dc759f956-khj5v,Uid:9a1934ee-c6a7-4c17-b8b2-8d0b2296aa78,Namespace:calico-system,Attempt:0,} returns sandbox id \"7ad835cb996d0c4aeefd0a3e9be3852aacfac2d81905a2901704e650b751cbb3\"" Nov 5 15:07:28.566383 containerd[2056]: time="2025-11-05T15:07:28.566349265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 15:07:28.586903 containerd[2056]: time="2025-11-05T15:07:28.586867044Z" level=info msg="Container f6f74958853beb7ba59b83d78f5b3def0629e6be946717ca89ac7aebb5ddd7e9: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:07:28.603378 containerd[2056]: time="2025-11-05T15:07:28.603181747Z" level=info msg="CreateContainer within sandbox \"3790b4f496f0883ca10da141d52b935ce61b8363f69dab327517dede66be7e24\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f6f74958853beb7ba59b83d78f5b3def0629e6be946717ca89ac7aebb5ddd7e9\"" Nov 5 15:07:28.606526 containerd[2056]: time="2025-11-05T15:07:28.606447936Z" level=info msg="StartContainer for \"f6f74958853beb7ba59b83d78f5b3def0629e6be946717ca89ac7aebb5ddd7e9\"" Nov 5 15:07:28.608021 containerd[2056]: time="2025-11-05T15:07:28.607990692Z" level=info msg="connecting to shim f6f74958853beb7ba59b83d78f5b3def0629e6be946717ca89ac7aebb5ddd7e9" address="unix:///run/containerd/s/1a92ca590d7deef81841d03094d13c598b64ac8ed3d6c993c8e9e28aaf1a68f1" protocol=ttrpc version=3 Nov 5 15:07:28.625488 systemd[1]: Started 
cri-containerd-f6f74958853beb7ba59b83d78f5b3def0629e6be946717ca89ac7aebb5ddd7e9.scope - libcontainer container f6f74958853beb7ba59b83d78f5b3def0629e6be946717ca89ac7aebb5ddd7e9. Nov 5 15:07:28.661171 containerd[2056]: time="2025-11-05T15:07:28.659896968Z" level=info msg="StartContainer for \"f6f74958853beb7ba59b83d78f5b3def0629e6be946717ca89ac7aebb5ddd7e9\" returns successfully" Nov 5 15:07:28.669010 systemd[1]: cri-containerd-f6f74958853beb7ba59b83d78f5b3def0629e6be946717ca89ac7aebb5ddd7e9.scope: Deactivated successfully. Nov 5 15:07:28.672258 containerd[2056]: time="2025-11-05T15:07:28.672169329Z" level=info msg="received exit event container_id:\"f6f74958853beb7ba59b83d78f5b3def0629e6be946717ca89ac7aebb5ddd7e9\" id:\"f6f74958853beb7ba59b83d78f5b3def0629e6be946717ca89ac7aebb5ddd7e9\" pid:4150 exited_at:{seconds:1762355248 nanos:671681733}" Nov 5 15:07:28.672866 containerd[2056]: time="2025-11-05T15:07:28.672845713Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f6f74958853beb7ba59b83d78f5b3def0629e6be946717ca89ac7aebb5ddd7e9\" id:\"f6f74958853beb7ba59b83d78f5b3def0629e6be946717ca89ac7aebb5ddd7e9\" pid:4150 exited_at:{seconds:1762355248 nanos:671681733}" Nov 5 15:07:29.321764 kubelet[3589]: E1105 15:07:29.321710 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nr498" podUID="5175072d-c97d-4e97-bbe9-4eb6c98f1e6a" Nov 5 15:07:30.707781 containerd[2056]: time="2025-11-05T15:07:30.707731289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:07:30.710451 containerd[2056]: time="2025-11-05T15:07:30.710353447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31720858" Nov 5 15:07:30.713154 containerd[2056]: time="2025-11-05T15:07:30.713112783Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:07:30.716839 containerd[2056]: time="2025-11-05T15:07:30.716803950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:07:30.717327 containerd[2056]: time="2025-11-05T15:07:30.717299274Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.150913272s" Nov 5 15:07:30.717445 containerd[2056]: time="2025-11-05T15:07:30.717429221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 5 15:07:30.718724 containerd[2056]: time="2025-11-05T15:07:30.718634393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 15:07:30.736594 containerd[2056]: time="2025-11-05T15:07:30.736562911Z" level=info msg="CreateContainer within sandbox \"7ad835cb996d0c4aeefd0a3e9be3852aacfac2d81905a2901704e650b751cbb3\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 15:07:30.755017 containerd[2056]: time="2025-11-05T15:07:30.754506340Z" level=info msg="Container acc0422e40c1a1b49eda31d16e550f8421a240aaa7bdb0d3e75d4f87a1914dc7: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:07:30.771891 containerd[2056]: time="2025-11-05T15:07:30.771853972Z" level=info msg="CreateContainer within sandbox \"7ad835cb996d0c4aeefd0a3e9be3852aacfac2d81905a2901704e650b751cbb3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"acc0422e40c1a1b49eda31d16e550f8421a240aaa7bdb0d3e75d4f87a1914dc7\"" Nov 5 15:07:30.772497 containerd[2056]: time="2025-11-05T15:07:30.772473195Z" level=info msg="StartContainer for \"acc0422e40c1a1b49eda31d16e550f8421a240aaa7bdb0d3e75d4f87a1914dc7\"" Nov 5 15:07:30.773179 containerd[2056]: time="2025-11-05T15:07:30.773149891Z" level=info msg="connecting to shim acc0422e40c1a1b49eda31d16e550f8421a240aaa7bdb0d3e75d4f87a1914dc7" address="unix:///run/containerd/s/4ba593ff3123c7d00ba5e630175454fad894bf94e28d37665da17bd4476f2510" protocol=ttrpc version=3 Nov 5 15:07:30.792487 systemd[1]: Started cri-containerd-acc0422e40c1a1b49eda31d16e550f8421a240aaa7bdb0d3e75d4f87a1914dc7.scope - libcontainer container acc0422e40c1a1b49eda31d16e550f8421a240aaa7bdb0d3e75d4f87a1914dc7. Nov 5 15:07:30.826614 containerd[2056]: time="2025-11-05T15:07:30.826578859Z" level=info msg="StartContainer for \"acc0422e40c1a1b49eda31d16e550f8421a240aaa7bdb0d3e75d4f87a1914dc7\" returns successfully" Nov 5 15:07:31.321954 kubelet[3589]: E1105 15:07:31.321901 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nr498" podUID="5175072d-c97d-4e97-bbe9-4eb6c98f1e6a" Nov 5 15:07:31.423381 kubelet[3589]: I1105 15:07:31.422859 3589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7dc759f956-khj5v" podStartSLOduration=3.270759151 podStartE2EDuration="5.422845642s" podCreationTimestamp="2025-11-05 15:07:26 +0000 UTC" firstStartedPulling="2025-11-05 15:07:28.565879782 +0000 UTC m=+22.335827027" lastFinishedPulling="2025-11-05 15:07:30.717966273 +0000 UTC m=+24.487913518" observedRunningTime="2025-11-05 15:07:31.422235396 +0000 UTC m=+25.192182641" watchObservedRunningTime="2025-11-05 15:07:31.422845642 +0000 UTC m=+25.192792887" Nov 5 15:07:32.979637 containerd[2056]: time="2025-11-05T15:07:32.979153705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:07:32.981740 containerd[2056]: time="2025-11-05T15:07:32.981717269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 5 15:07:32.984626 containerd[2056]: time="2025-11-05T15:07:32.984598953Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:07:32.988253 containerd[2056]: time="2025-11-05T15:07:32.988208550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:07:32.988656 containerd[2056]: time="2025-11-05T15:07:32.988519021Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.269844555s" Nov 5 15:07:32.988656 containerd[2056]: time="2025-11-05T15:07:32.988545422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 5 15:07:32.997388 containerd[2056]: time="2025-11-05T15:07:32.996731942Z" level=info msg="CreateContainer within sandbox \"3790b4f496f0883ca10da141d52b935ce61b8363f69dab327517dede66be7e24\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 15:07:33.016055 containerd[2056]: time="2025-11-05T15:07:33.016019803Z" level=info msg="Container c69645b563414c5d18ef37ff0ed533bb403a22f388c4cda5ab6e521b83b1077e: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:07:33.032806 containerd[2056]: time="2025-11-05T15:07:33.032766853Z" level=info msg="CreateContainer within sandbox \"3790b4f496f0883ca10da141d52b935ce61b8363f69dab327517dede66be7e24\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c69645b563414c5d18ef37ff0ed533bb403a22f388c4cda5ab6e521b83b1077e\"" Nov 5 15:07:33.033641 containerd[2056]: time="2025-11-05T15:07:33.033616753Z" level=info msg="StartContainer for \"c69645b563414c5d18ef37ff0ed533bb403a22f388c4cda5ab6e521b83b1077e\"" Nov 5 15:07:33.035748 containerd[2056]: time="2025-11-05T15:07:33.035718818Z" level=info msg="connecting to shim c69645b563414c5d18ef37ff0ed533bb403a22f388c4cda5ab6e521b83b1077e" address="unix:///run/containerd/s/1a92ca590d7deef81841d03094d13c598b64ac8ed3d6c993c8e9e28aaf1a68f1" protocol=ttrpc version=3 Nov 5 15:07:33.052496 systemd[1]: Started cri-containerd-c69645b563414c5d18ef37ff0ed533bb403a22f388c4cda5ab6e521b83b1077e.scope - libcontainer container c69645b563414c5d18ef37ff0ed533bb403a22f388c4cda5ab6e521b83b1077e. Nov 5 15:07:33.085113 containerd[2056]: time="2025-11-05T15:07:33.085073915Z" level=info msg="StartContainer for \"c69645b563414c5d18ef37ff0ed533bb403a22f388c4cda5ab6e521b83b1077e\" returns successfully" Nov 5 15:07:33.321796 kubelet[3589]: E1105 15:07:33.321739 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nr498" podUID="5175072d-c97d-4e97-bbe9-4eb6c98f1e6a" Nov 5 15:07:34.157538 systemd[1]: cri-containerd-c69645b563414c5d18ef37ff0ed533bb403a22f388c4cda5ab6e521b83b1077e.scope: Deactivated successfully. 
Nov 5 15:07:34.159054 containerd[2056]: time="2025-11-05T15:07:34.158374824Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c69645b563414c5d18ef37ff0ed533bb403a22f388c4cda5ab6e521b83b1077e\" id:\"c69645b563414c5d18ef37ff0ed533bb403a22f388c4cda5ab6e521b83b1077e\" pid:4253 exited_at:{seconds:1762355254 nanos:158051136}" Nov 5 15:07:34.159054 containerd[2056]: time="2025-11-05T15:07:34.158491154Z" level=info msg="received exit event container_id:\"c69645b563414c5d18ef37ff0ed533bb403a22f388c4cda5ab6e521b83b1077e\" id:\"c69645b563414c5d18ef37ff0ed533bb403a22f388c4cda5ab6e521b83b1077e\" pid:4253 exited_at:{seconds:1762355254 nanos:158051136}" Nov 5 15:07:34.158456 systemd[1]: cri-containerd-c69645b563414c5d18ef37ff0ed533bb403a22f388c4cda5ab6e521b83b1077e.scope: Consumed 318ms CPU time, 189M memory peak, 165.9M written to disk. Nov 5 15:07:34.177705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c69645b563414c5d18ef37ff0ed533bb403a22f388c4cda5ab6e521b83b1077e-rootfs.mount: Deactivated successfully. Nov 5 15:07:34.212736 kubelet[3589]: I1105 15:07:34.212700 3589 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 5 15:07:35.412063 systemd[1]: Created slice kubepods-burstable-pod397fff19_ce83_432c_bb32_ac5bd613b927.slice - libcontainer container kubepods-burstable-pod397fff19_ce83_432c_bb32_ac5bd613b927.slice. Nov 5 15:07:35.418451 systemd[1]: Created slice kubepods-besteffort-pod5175072d_c97d_4e97_bbe9_4eb6c98f1e6a.slice - libcontainer container kubepods-besteffort-pod5175072d_c97d_4e97_bbe9_4eb6c98f1e6a.slice. Nov 5 15:07:35.461045 containerd[2056]: time="2025-11-05T15:07:35.461002082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nr498,Uid:5175072d-c97d-4e97-bbe9-4eb6c98f1e6a,Namespace:calico-system,Attempt:0,}" Nov 5 15:07:35.472586 systemd[1]: Created slice kubepods-besteffort-pod460d4776_c8b3_4dec_911d_f1ebdf0cfa3b.slice - libcontainer container kubepods-besteffort-pod460d4776_c8b3_4dec_911d_f1ebdf0cfa3b.slice. Nov 5 15:07:35.484014 systemd[1]: Created slice kubepods-besteffort-pod9ae68c23_7222_4e65_9b08_63622fb7b33b.slice - libcontainer container kubepods-besteffort-pod9ae68c23_7222_4e65_9b08_63622fb7b33b.slice. 
Nov 5 15:07:35.490394 kubelet[3589]: I1105 15:07:35.489163 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwlpq\" (UniqueName: \"kubernetes.io/projected/9ae68c23-7222-4e65-9b08-63622fb7b33b-kube-api-access-pwlpq\") pod \"whisker-547ffb4db4-98mg5\" (UID: \"9ae68c23-7222-4e65-9b08-63622fb7b33b\") " pod="calico-system/whisker-547ffb4db4-98mg5" Nov 5 15:07:35.490394 kubelet[3589]: I1105 15:07:35.489197 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/397fff19-ce83-432c-bb32-ac5bd613b927-config-volume\") pod \"coredns-66bc5c9577-lzrs9\" (UID: \"397fff19-ce83-432c-bb32-ac5bd613b927\") " pod="kube-system/coredns-66bc5c9577-lzrs9" Nov 5 15:07:35.490394 kubelet[3589]: I1105 15:07:35.489212 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgflv\" (UniqueName: \"kubernetes.io/projected/397fff19-ce83-432c-bb32-ac5bd613b927-kube-api-access-sgflv\") pod \"coredns-66bc5c9577-lzrs9\" (UID: \"397fff19-ce83-432c-bb32-ac5bd613b927\") " pod="kube-system/coredns-66bc5c9577-lzrs9" Nov 5 15:07:35.490394 kubelet[3589]: I1105 15:07:35.489221 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/460d4776-c8b3-4dec-911d-f1ebdf0cfa3b-config\") pod \"goldmane-7c778bb748-lg4lc\" (UID: \"460d4776-c8b3-4dec-911d-f1ebdf0cfa3b\") " pod="calico-system/goldmane-7c778bb748-lg4lc" Nov 5 15:07:35.490394 kubelet[3589]: I1105 15:07:35.489229 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/460d4776-c8b3-4dec-911d-f1ebdf0cfa3b-goldmane-key-pair\") pod \"goldmane-7c778bb748-lg4lc\" (UID: \"460d4776-c8b3-4dec-911d-f1ebdf0cfa3b\") " pod="calico-system/goldmane-7c778bb748-lg4lc" Nov 5 15:07:35.492763 kubelet[3589]: I1105 15:07:35.489241 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ae68c23-7222-4e65-9b08-63622fb7b33b-whisker-backend-key-pair\") pod \"whisker-547ffb4db4-98mg5\" (UID: \"9ae68c23-7222-4e65-9b08-63622fb7b33b\") " pod="calico-system/whisker-547ffb4db4-98mg5" Nov 5 15:07:35.492763 kubelet[3589]: I1105 15:07:35.489259 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ae68c23-7222-4e65-9b08-63622fb7b33b-whisker-ca-bundle\") pod \"whisker-547ffb4db4-98mg5\" (UID: \"9ae68c23-7222-4e65-9b08-63622fb7b33b\") " pod="calico-system/whisker-547ffb4db4-98mg5" Nov 5 15:07:35.492763 kubelet[3589]: I1105 15:07:35.489271 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7w9s\" (UniqueName: \"kubernetes.io/projected/460d4776-c8b3-4dec-911d-f1ebdf0cfa3b-kube-api-access-s7w9s\") pod \"goldmane-7c778bb748-lg4lc\" (UID: \"460d4776-c8b3-4dec-911d-f1ebdf0cfa3b\") " pod="calico-system/goldmane-7c778bb748-lg4lc" Nov 5 15:07:35.492763 kubelet[3589]: I1105 15:07:35.489281 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5f8444f4-0b9f-4af6-a67a-d71e5d4f1309-calico-apiserver-certs\") pod 
\"calico-apiserver-7c94bc65c5-b4vch\" (UID: \"5f8444f4-0b9f-4af6-a67a-d71e5d4f1309\") " pod="calico-apiserver/calico-apiserver-7c94bc65c5-b4vch" Nov 5 15:07:35.492763 kubelet[3589]: I1105 15:07:35.489291 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/460d4776-c8b3-4dec-911d-f1ebdf0cfa3b-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-lg4lc\" (UID: \"460d4776-c8b3-4dec-911d-f1ebdf0cfa3b\") " pod="calico-system/goldmane-7c778bb748-lg4lc" Nov 5 15:07:35.492088 systemd[1]: Created slice kubepods-besteffort-pod5f8444f4_0b9f_4af6_a67a_d71e5d4f1309.slice - libcontainer container kubepods-besteffort-pod5f8444f4_0b9f_4af6_a67a_d71e5d4f1309.slice. Nov 5 15:07:35.492896 kubelet[3589]: I1105 15:07:35.489301 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsmgk\" (UniqueName: \"kubernetes.io/projected/5f8444f4-0b9f-4af6-a67a-d71e5d4f1309-kube-api-access-bsmgk\") pod \"calico-apiserver-7c94bc65c5-b4vch\" (UID: \"5f8444f4-0b9f-4af6-a67a-d71e5d4f1309\") " pod="calico-apiserver/calico-apiserver-7c94bc65c5-b4vch" Nov 5 15:07:35.499067 systemd[1]: Created slice kubepods-burstable-pod67043604_b63e_4022_91e3_c77c7a05a34a.slice - libcontainer container kubepods-burstable-pod67043604_b63e_4022_91e3_c77c7a05a34a.slice. Nov 5 15:07:35.511817 systemd[1]: Created slice kubepods-besteffort-podc1f198dc_261d_4ac5_8860_91734a6c009d.slice - libcontainer container kubepods-besteffort-podc1f198dc_261d_4ac5_8860_91734a6c009d.slice. Nov 5 15:07:35.517494 systemd[1]: Created slice kubepods-besteffort-pod918f7e6e_ae2a_455d_8758_01b9af03afc5.slice - libcontainer container kubepods-besteffort-pod918f7e6e_ae2a_455d_8758_01b9af03afc5.slice. Nov 5 15:07:35.526271 systemd[1]: Created slice kubepods-besteffort-pod38f05b09_e539_4b0f_aa00_1af242dcf380.slice - libcontainer container kubepods-besteffort-pod38f05b09_e539_4b0f_aa00_1af242dcf380.slice. Nov 5 15:07:35.549441 containerd[2056]: time="2025-11-05T15:07:35.549002935Z" level=error msg="Failed to destroy network for sandbox \"4087c8f51fb4cf04f422b3dda216bbd27b7cea9719c9f89cf675a7177a687ed2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.551193 systemd[1]: run-netns-cni\x2d9050d1cf\x2d2a2b\x2db656\x2d0a07\x2da288a0889d2e.mount: Deactivated successfully. 
Nov 5 15:07:35.554249 containerd[2056]: time="2025-11-05T15:07:35.554174241Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nr498,Uid:5175072d-c97d-4e97-bbe9-4eb6c98f1e6a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4087c8f51fb4cf04f422b3dda216bbd27b7cea9719c9f89cf675a7177a687ed2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.554577 kubelet[3589]: E1105 15:07:35.554546 3589 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4087c8f51fb4cf04f422b3dda216bbd27b7cea9719c9f89cf675a7177a687ed2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.554731 kubelet[3589]: E1105 15:07:35.554688 3589 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4087c8f51fb4cf04f422b3dda216bbd27b7cea9719c9f89cf675a7177a687ed2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nr498" Nov 5 15:07:35.554731 kubelet[3589]: E1105 15:07:35.554713 3589 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4087c8f51fb4cf04f422b3dda216bbd27b7cea9719c9f89cf675a7177a687ed2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nr498" Nov 5 15:07:35.554881 kubelet[3589]: E1105 15:07:35.554851 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nr498_calico-system(5175072d-c97d-4e97-bbe9-4eb6c98f1e6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nr498_calico-system(5175072d-c97d-4e97-bbe9-4eb6c98f1e6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4087c8f51fb4cf04f422b3dda216bbd27b7cea9719c9f89cf675a7177a687ed2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nr498" podUID="5175072d-c97d-4e97-bbe9-4eb6c98f1e6a" Nov 5 15:07:35.590384 kubelet[3589]: I1105 15:07:35.590166 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf2wr\" (UniqueName: \"kubernetes.io/projected/38f05b09-e539-4b0f-aa00-1af242dcf380-kube-api-access-mf2wr\") pod \"calico-apiserver-7c94bc65c5-xbp7r\" (UID: \"38f05b09-e539-4b0f-aa00-1af242dcf380\") " pod="calico-apiserver/calico-apiserver-7c94bc65c5-xbp7r" Nov 5 15:07:35.590384 kubelet[3589]: I1105 15:07:35.590223 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvv8z\" (UniqueName: \"kubernetes.io/projected/918f7e6e-ae2a-455d-8758-01b9af03afc5-kube-api-access-tvv8z\") pod 
\"calico-kube-controllers-84577cbdbb-g8xvq\" (UID: \"918f7e6e-ae2a-455d-8758-01b9af03afc5\") " pod="calico-system/calico-kube-controllers-84577cbdbb-g8xvq" Nov 5 15:07:35.590384 kubelet[3589]: I1105 15:07:35.590242 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67043604-b63e-4022-91e3-c77c7a05a34a-config-volume\") pod \"coredns-66bc5c9577-72tws\" (UID: \"67043604-b63e-4022-91e3-c77c7a05a34a\") " pod="kube-system/coredns-66bc5c9577-72tws" Nov 5 15:07:35.590384 kubelet[3589]: I1105 15:07:35.590253 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mvq9\" (UniqueName: \"kubernetes.io/projected/67043604-b63e-4022-91e3-c77c7a05a34a-kube-api-access-4mvq9\") pod \"coredns-66bc5c9577-72tws\" (UID: \"67043604-b63e-4022-91e3-c77c7a05a34a\") " pod="kube-system/coredns-66bc5c9577-72tws" Nov 5 15:07:35.590384 kubelet[3589]: I1105 15:07:35.590269 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckpmd\" (UniqueName: \"kubernetes.io/projected/c1f198dc-261d-4ac5-8860-91734a6c009d-kube-api-access-ckpmd\") pod \"calico-apiserver-65d654bb8-d6fmx\" (UID: \"c1f198dc-261d-4ac5-8860-91734a6c009d\") " pod="calico-apiserver/calico-apiserver-65d654bb8-d6fmx" Nov 5 15:07:35.590812 kubelet[3589]: I1105 15:07:35.590308 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/38f05b09-e539-4b0f-aa00-1af242dcf380-calico-apiserver-certs\") pod \"calico-apiserver-7c94bc65c5-xbp7r\" (UID: \"38f05b09-e539-4b0f-aa00-1af242dcf380\") " pod="calico-apiserver/calico-apiserver-7c94bc65c5-xbp7r" Nov 5 15:07:35.590812 kubelet[3589]: I1105 15:07:35.590320 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/918f7e6e-ae2a-455d-8758-01b9af03afc5-tigera-ca-bundle\") pod \"calico-kube-controllers-84577cbdbb-g8xvq\" (UID: \"918f7e6e-ae2a-455d-8758-01b9af03afc5\") " pod="calico-system/calico-kube-controllers-84577cbdbb-g8xvq" Nov 5 15:07:35.590812 kubelet[3589]: I1105 15:07:35.590330 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c1f198dc-261d-4ac5-8860-91734a6c009d-calico-apiserver-certs\") pod \"calico-apiserver-65d654bb8-d6fmx\" (UID: \"c1f198dc-261d-4ac5-8860-91734a6c009d\") " pod="calico-apiserver/calico-apiserver-65d654bb8-d6fmx" Nov 5 15:07:35.722592 containerd[2056]: time="2025-11-05T15:07:35.722477549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lzrs9,Uid:397fff19-ce83-432c-bb32-ac5bd613b927,Namespace:kube-system,Attempt:0,}" Nov 5 15:07:35.763009 containerd[2056]: time="2025-11-05T15:07:35.762957404Z" level=error msg="Failed to destroy network for sandbox \"d0a1c7bc1066511f7261b5cc0c2a0999b800e12a1c6a43298ef9736c02418594\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.766368 containerd[2056]: time="2025-11-05T15:07:35.766330067Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-lzrs9,Uid:397fff19-ce83-432c-bb32-ac5bd613b927,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a1c7bc1066511f7261b5cc0c2a0999b800e12a1c6a43298ef9736c02418594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.767377 kubelet[3589]: E1105 15:07:35.766577 3589 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a1c7bc1066511f7261b5cc0c2a0999b800e12a1c6a43298ef9736c02418594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.767377 kubelet[3589]: E1105 15:07:35.766655 3589 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a1c7bc1066511f7261b5cc0c2a0999b800e12a1c6a43298ef9736c02418594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lzrs9" Nov 5 15:07:35.767377 kubelet[3589]: E1105 15:07:35.766670 3589 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0a1c7bc1066511f7261b5cc0c2a0999b800e12a1c6a43298ef9736c02418594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lzrs9" Nov 5 15:07:35.767517 kubelet[3589]: E1105 15:07:35.766718 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-lzrs9_kube-system(397fff19-ce83-432c-bb32-ac5bd613b927)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-lzrs9_kube-system(397fff19-ce83-432c-bb32-ac5bd613b927)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0a1c7bc1066511f7261b5cc0c2a0999b800e12a1c6a43298ef9736c02418594\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-lzrs9" podUID="397fff19-ce83-432c-bb32-ac5bd613b927" Nov 5 15:07:35.788005 containerd[2056]: time="2025-11-05T15:07:35.787973376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-lg4lc,Uid:460d4776-c8b3-4dec-911d-f1ebdf0cfa3b,Namespace:calico-system,Attempt:0,}" Nov 5 15:07:35.803174 containerd[2056]: time="2025-11-05T15:07:35.803140981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c94bc65c5-b4vch,Uid:5f8444f4-0b9f-4af6-a67a-d71e5d4f1309,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:07:35.803530 containerd[2056]: time="2025-11-05T15:07:35.803477396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-547ffb4db4-98mg5,Uid:9ae68c23-7222-4e65-9b08-63622fb7b33b,Namespace:calico-system,Attempt:0,}" Nov 5 15:07:35.810009 containerd[2056]: time="2025-11-05T15:07:35.809908804Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-72tws,Uid:67043604-b63e-4022-91e3-c77c7a05a34a,Namespace:kube-system,Attempt:0,}" Nov 5 15:07:35.822643 containerd[2056]: time="2025-11-05T15:07:35.822613550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65d654bb8-d6fmx,Uid:c1f198dc-261d-4ac5-8860-91734a6c009d,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:07:35.829626 containerd[2056]: time="2025-11-05T15:07:35.829312252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84577cbdbb-g8xvq,Uid:918f7e6e-ae2a-455d-8758-01b9af03afc5,Namespace:calico-system,Attempt:0,}" Nov 5 15:07:35.835003 containerd[2056]: time="2025-11-05T15:07:35.834710627Z" level=error msg="Failed to destroy network for sandbox \"7685a4b736de1397e4a36ff61cb80af6473c004c99b9af260350494e19f50b55\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.835192 containerd[2056]: time="2025-11-05T15:07:35.834795021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c94bc65c5-xbp7r,Uid:38f05b09-e539-4b0f-aa00-1af242dcf380,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:07:35.857740 containerd[2056]: time="2025-11-05T15:07:35.857696463Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-lg4lc,Uid:460d4776-c8b3-4dec-911d-f1ebdf0cfa3b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7685a4b736de1397e4a36ff61cb80af6473c004c99b9af260350494e19f50b55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.859664 kubelet[3589]: E1105 15:07:35.857991 3589 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7685a4b736de1397e4a36ff61cb80af6473c004c99b9af260350494e19f50b55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.859664 kubelet[3589]: E1105 15:07:35.858036 3589 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7685a4b736de1397e4a36ff61cb80af6473c004c99b9af260350494e19f50b55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-lg4lc" Nov 5 15:07:35.859664 kubelet[3589]: E1105 15:07:35.858050 3589 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7685a4b736de1397e4a36ff61cb80af6473c004c99b9af260350494e19f50b55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-lg4lc" Nov 5 15:07:35.859759 kubelet[3589]: E1105 15:07:35.858089 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-lg4lc_calico-system(460d4776-c8b3-4dec-911d-f1ebdf0cfa3b)\" with CreatePodSandboxError: \"Failed 
to create sandbox for pod \\\"goldmane-7c778bb748-lg4lc_calico-system(460d4776-c8b3-4dec-911d-f1ebdf0cfa3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7685a4b736de1397e4a36ff61cb80af6473c004c99b9af260350494e19f50b55\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-lg4lc" podUID="460d4776-c8b3-4dec-911d-f1ebdf0cfa3b" Nov 5 15:07:35.903649 containerd[2056]: time="2025-11-05T15:07:35.903544157Z" level=error msg="Failed to destroy network for sandbox \"de1e269927a1c9cc81921d8188753dc3341dc435c5985e2699fdd74009851eea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.905238 containerd[2056]: time="2025-11-05T15:07:35.905160619Z" level=error msg="Failed to destroy network for sandbox \"206dd0f3d5c4eee574acf1d6c7d25d73b9fa04f3c9d76ebfcc784556e9dbebca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.909878 containerd[2056]: time="2025-11-05T15:07:35.909844441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-72tws,Uid:67043604-b63e-4022-91e3-c77c7a05a34a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"de1e269927a1c9cc81921d8188753dc3341dc435c5985e2699fdd74009851eea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.910481 kubelet[3589]: E1105 15:07:35.910268 3589 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de1e269927a1c9cc81921d8188753dc3341dc435c5985e2699fdd74009851eea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.910481 kubelet[3589]: E1105 15:07:35.910340 3589 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de1e269927a1c9cc81921d8188753dc3341dc435c5985e2699fdd74009851eea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-72tws" Nov 5 15:07:35.910481 kubelet[3589]: E1105 15:07:35.910364 3589 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de1e269927a1c9cc81921d8188753dc3341dc435c5985e2699fdd74009851eea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-72tws" Nov 5 15:07:35.910590 kubelet[3589]: E1105 15:07:35.910445 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-72tws_kube-system(67043604-b63e-4022-91e3-c77c7a05a34a)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-72tws_kube-system(67043604-b63e-4022-91e3-c77c7a05a34a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de1e269927a1c9cc81921d8188753dc3341dc435c5985e2699fdd74009851eea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-72tws" podUID="67043604-b63e-4022-91e3-c77c7a05a34a" Nov 5 15:07:35.914231 containerd[2056]: time="2025-11-05T15:07:35.914146734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-547ffb4db4-98mg5,Uid:9ae68c23-7222-4e65-9b08-63622fb7b33b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"206dd0f3d5c4eee574acf1d6c7d25d73b9fa04f3c9d76ebfcc784556e9dbebca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.914607 kubelet[3589]: E1105 15:07:35.914397 3589 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"206dd0f3d5c4eee574acf1d6c7d25d73b9fa04f3c9d76ebfcc784556e9dbebca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.914607 kubelet[3589]: E1105 15:07:35.914442 3589 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"206dd0f3d5c4eee574acf1d6c7d25d73b9fa04f3c9d76ebfcc784556e9dbebca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-547ffb4db4-98mg5" Nov 5 15:07:35.914607 kubelet[3589]: E1105 15:07:35.914467 3589 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"206dd0f3d5c4eee574acf1d6c7d25d73b9fa04f3c9d76ebfcc784556e9dbebca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-547ffb4db4-98mg5" Nov 5 15:07:35.914962 kubelet[3589]: E1105 15:07:35.914506 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-547ffb4db4-98mg5_calico-system(9ae68c23-7222-4e65-9b08-63622fb7b33b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-547ffb4db4-98mg5_calico-system(9ae68c23-7222-4e65-9b08-63622fb7b33b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"206dd0f3d5c4eee574acf1d6c7d25d73b9fa04f3c9d76ebfcc784556e9dbebca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-547ffb4db4-98mg5" podUID="9ae68c23-7222-4e65-9b08-63622fb7b33b" Nov 5 15:07:35.933461 containerd[2056]: time="2025-11-05T15:07:35.933369666Z" level=error msg="Failed to destroy network for sandbox 
\"8d0679bfdb1eb81e63fbdfd1194a687d5593f44fd475a264a202f2d60654b7af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.936425 containerd[2056]: time="2025-11-05T15:07:35.936394113Z" level=error msg="Failed to destroy network for sandbox \"fddacbdc77723c21338ab4f5ce5b06440fbe38c164bda9b13941acfe41192d75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.937834 containerd[2056]: time="2025-11-05T15:07:35.937616686Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c94bc65c5-b4vch,Uid:5f8444f4-0b9f-4af6-a67a-d71e5d4f1309,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d0679bfdb1eb81e63fbdfd1194a687d5593f44fd475a264a202f2d60654b7af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.938115 kubelet[3589]: E1105 15:07:35.937815 3589 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d0679bfdb1eb81e63fbdfd1194a687d5593f44fd475a264a202f2d60654b7af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.938115 kubelet[3589]: E1105 15:07:35.937979 3589 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d0679bfdb1eb81e63fbdfd1194a687d5593f44fd475a264a202f2d60654b7af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c94bc65c5-b4vch" Nov 5 15:07:35.938115 kubelet[3589]: E1105 15:07:35.937994 3589 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d0679bfdb1eb81e63fbdfd1194a687d5593f44fd475a264a202f2d60654b7af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c94bc65c5-b4vch" Nov 5 15:07:35.939316 kubelet[3589]: E1105 15:07:35.938051 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c94bc65c5-b4vch_calico-apiserver(5f8444f4-0b9f-4af6-a67a-d71e5d4f1309)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c94bc65c5-b4vch_calico-apiserver(5f8444f4-0b9f-4af6-a67a-d71e5d4f1309)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d0679bfdb1eb81e63fbdfd1194a687d5593f44fd475a264a202f2d60654b7af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-b4vch" podUID="5f8444f4-0b9f-4af6-a67a-d71e5d4f1309" Nov 5 15:07:35.942458 
containerd[2056]: time="2025-11-05T15:07:35.942394094Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65d654bb8-d6fmx,Uid:c1f198dc-261d-4ac5-8860-91734a6c009d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fddacbdc77723c21338ab4f5ce5b06440fbe38c164bda9b13941acfe41192d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.942759 kubelet[3589]: E1105 15:07:35.942621 3589 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fddacbdc77723c21338ab4f5ce5b06440fbe38c164bda9b13941acfe41192d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.942759 kubelet[3589]: E1105 15:07:35.942662 3589 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fddacbdc77723c21338ab4f5ce5b06440fbe38c164bda9b13941acfe41192d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65d654bb8-d6fmx" Nov 5 15:07:35.942759 kubelet[3589]: E1105 15:07:35.942675 3589 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fddacbdc77723c21338ab4f5ce5b06440fbe38c164bda9b13941acfe41192d75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65d654bb8-d6fmx" Nov 5 15:07:35.942848 kubelet[3589]: E1105 15:07:35.942707 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65d654bb8-d6fmx_calico-apiserver(c1f198dc-261d-4ac5-8860-91734a6c009d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65d654bb8-d6fmx_calico-apiserver(c1f198dc-261d-4ac5-8860-91734a6c009d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fddacbdc77723c21338ab4f5ce5b06440fbe38c164bda9b13941acfe41192d75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65d654bb8-d6fmx" podUID="c1f198dc-261d-4ac5-8860-91734a6c009d" Nov 5 15:07:35.948237 containerd[2056]: time="2025-11-05T15:07:35.948175622Z" level=error msg="Failed to destroy network for sandbox \"00d314aa70af3037729dd3b2b9c59cfd6000271527ecf0799a03561468b03215\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.951453 containerd[2056]: time="2025-11-05T15:07:35.951410170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84577cbdbb-g8xvq,Uid:918f7e6e-ae2a-455d-8758-01b9af03afc5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"00d314aa70af3037729dd3b2b9c59cfd6000271527ecf0799a03561468b03215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.952449 kubelet[3589]: E1105 15:07:35.951586 3589 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d314aa70af3037729dd3b2b9c59cfd6000271527ecf0799a03561468b03215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.952449 kubelet[3589]: E1105 15:07:35.951623 3589 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d314aa70af3037729dd3b2b9c59cfd6000271527ecf0799a03561468b03215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84577cbdbb-g8xvq" Nov 5 15:07:35.952449 kubelet[3589]: E1105 15:07:35.951638 3589 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d314aa70af3037729dd3b2b9c59cfd6000271527ecf0799a03561468b03215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84577cbdbb-g8xvq" Nov 5 15:07:35.952531 kubelet[3589]: E1105 15:07:35.951675 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84577cbdbb-g8xvq_calico-system(918f7e6e-ae2a-455d-8758-01b9af03afc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84577cbdbb-g8xvq_calico-system(918f7e6e-ae2a-455d-8758-01b9af03afc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00d314aa70af3037729dd3b2b9c59cfd6000271527ecf0799a03561468b03215\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84577cbdbb-g8xvq" podUID="918f7e6e-ae2a-455d-8758-01b9af03afc5" Nov 5 15:07:35.957523 containerd[2056]: time="2025-11-05T15:07:35.957497401Z" level=error msg="Failed to destroy network for sandbox \"c6463954101c3b756828e1f39770d0850f7dbf87692569df1febbb9f56b90e1d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.960555 containerd[2056]: time="2025-11-05T15:07:35.960519008Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c94bc65c5-xbp7r,Uid:38f05b09-e539-4b0f-aa00-1af242dcf380,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6463954101c3b756828e1f39770d0850f7dbf87692569df1febbb9f56b90e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 5 15:07:35.960837 kubelet[3589]: E1105 15:07:35.960806 3589 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6463954101c3b756828e1f39770d0850f7dbf87692569df1febbb9f56b90e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:07:35.960890 kubelet[3589]: E1105 15:07:35.960845 3589 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6463954101c3b756828e1f39770d0850f7dbf87692569df1febbb9f56b90e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c94bc65c5-xbp7r" Nov 5 15:07:35.960890 kubelet[3589]: E1105 15:07:35.960858 3589 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6463954101c3b756828e1f39770d0850f7dbf87692569df1febbb9f56b90e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c94bc65c5-xbp7r" Nov 5 15:07:35.960968 kubelet[3589]: E1105 15:07:35.960891 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c94bc65c5-xbp7r_calico-apiserver(38f05b09-e539-4b0f-aa00-1af242dcf380)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c94bc65c5-xbp7r_calico-apiserver(38f05b09-e539-4b0f-aa00-1af242dcf380)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6463954101c3b756828e1f39770d0850f7dbf87692569df1febbb9f56b90e1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-xbp7r" podUID="38f05b09-e539-4b0f-aa00-1af242dcf380" Nov 5 15:07:36.428741 containerd[2056]: time="2025-11-05T15:07:36.428637749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 15:07:40.273954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3532124765.mount: Deactivated successfully. 
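The sandbox failures above all share one root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, the file calico/node writes on startup to record which Calico node object this host maps to. Until calico-node is actually running (its image only finishes pulling and the container only starts at 15:07:40–41 below), every pod-network add and delete fails with this same message. A minimal sketch of the check the plugin is tripping over, assuming only what the error text itself states (the nodeName helper below is illustrative, not Calico's source):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the path named in the log; calico/node populates it on
// startup, and the CNI plugin reads it to learn which node to attach
// workload endpoints to. Until the file exists, every ADD/DEL fails.
const nodenameFile = "/var/lib/calico/nodename"

func nodeName() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Roughly mirrors the hint in the log: calico/node is not running
		// yet, or /var/lib/calico/ is not mounted into it.
		return "", fmt.Errorf("%s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodeName()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico node name:", name)
}
```

Once calico-node has started and written the file, the same RunPodSandbox calls begin to succeed, as the whisker pod setup at 15:07:43 below shows.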
Nov 5 15:07:40.878585 containerd[2056]: time="2025-11-05T15:07:40.878447357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:07:40.881117 containerd[2056]: time="2025-11-05T15:07:40.881066509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 5 15:07:40.884229 containerd[2056]: time="2025-11-05T15:07:40.884179825Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:07:40.888405 containerd[2056]: time="2025-11-05T15:07:40.888174007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:07:40.888574 containerd[2056]: time="2025-11-05T15:07:40.888548479Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.458867243s" Nov 5 15:07:40.888626 containerd[2056]: time="2025-11-05T15:07:40.888577343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 5 15:07:40.904090 containerd[2056]: time="2025-11-05T15:07:40.904035708Z" level=info msg="CreateContainer within sandbox \"3790b4f496f0883ca10da141d52b935ce61b8363f69dab327517dede66be7e24\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 15:07:40.923987 containerd[2056]: time="2025-11-05T15:07:40.923747061Z" level=info msg="Container 20607295be6c34d0e78530a06ddff2fadef7a4d6779a246f9622052b0d6bb9b9: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:07:40.945853 containerd[2056]: time="2025-11-05T15:07:40.945780192Z" level=info msg="CreateContainer within sandbox \"3790b4f496f0883ca10da141d52b935ce61b8363f69dab327517dede66be7e24\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"20607295be6c34d0e78530a06ddff2fadef7a4d6779a246f9622052b0d6bb9b9\"" Nov 5 15:07:40.947079 containerd[2056]: time="2025-11-05T15:07:40.946803846Z" level=info msg="StartContainer for \"20607295be6c34d0e78530a06ddff2fadef7a4d6779a246f9622052b0d6bb9b9\"" Nov 5 15:07:40.948164 containerd[2056]: time="2025-11-05T15:07:40.948130787Z" level=info msg="connecting to shim 20607295be6c34d0e78530a06ddff2fadef7a4d6779a246f9622052b0d6bb9b9" address="unix:///run/containerd/s/1a92ca590d7deef81841d03094d13c598b64ac8ed3d6c993c8e9e28aaf1a68f1" protocol=ttrpc version=3 Nov 5 15:07:40.967513 systemd[1]: Started cri-containerd-20607295be6c34d0e78530a06ddff2fadef7a4d6779a246f9622052b0d6bb9b9.scope - libcontainer container 20607295be6c34d0e78530a06ddff2fadef7a4d6779a246f9622052b0d6bb9b9. Nov 5 15:07:41.036571 containerd[2056]: time="2025-11-05T15:07:41.036472690Z" level=info msg="StartContainer for \"20607295be6c34d0e78530a06ddff2fadef7a4d6779a246f9622052b0d6bb9b9\" returns successfully" Nov 5 15:07:41.446550 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 15:07:41.447295 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Nov 5 15:07:41.547289 containerd[2056]: time="2025-11-05T15:07:41.547250673Z" level=info msg="TaskExit event in podsandbox handler container_id:\"20607295be6c34d0e78530a06ddff2fadef7a4d6779a246f9622052b0d6bb9b9\" id:\"c4e13067a5b951a4154cc176deefe823f6c425122df714461688701489312234\" pid:4597 exit_status:1 exited_at:{seconds:1762355261 nanos:546104672}" Nov 5 15:07:41.567483 kubelet[3589]: I1105 15:07:41.567212 3589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sq28p" podStartSLOduration=1.7958142449999999 podStartE2EDuration="15.567192087s" podCreationTimestamp="2025-11-05 15:07:26 +0000 UTC" firstStartedPulling="2025-11-05 15:07:27.118589843 +0000 UTC m=+20.888537088" lastFinishedPulling="2025-11-05 15:07:40.889967685 +0000 UTC m=+34.659914930" observedRunningTime="2025-11-05 15:07:41.46616011 +0000 UTC m=+35.236107363" watchObservedRunningTime="2025-11-05 15:07:41.567192087 +0000 UTC m=+35.337139332" Nov 5 15:07:41.632435 kubelet[3589]: I1105 15:07:41.632388 3589 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ae68c23-7222-4e65-9b08-63622fb7b33b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9ae68c23-7222-4e65-9b08-63622fb7b33b" (UID: "9ae68c23-7222-4e65-9b08-63622fb7b33b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 15:07:41.633548 kubelet[3589]: I1105 15:07:41.633524 3589 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ae68c23-7222-4e65-9b08-63622fb7b33b-whisker-ca-bundle\") pod \"9ae68c23-7222-4e65-9b08-63622fb7b33b\" (UID: \"9ae68c23-7222-4e65-9b08-63622fb7b33b\") " Nov 5 15:07:41.633701 kubelet[3589]: I1105 15:07:41.633640 3589 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ae68c23-7222-4e65-9b08-63622fb7b33b-whisker-backend-key-pair\") pod \"9ae68c23-7222-4e65-9b08-63622fb7b33b\" (UID: \"9ae68c23-7222-4e65-9b08-63622fb7b33b\") " Nov 5 15:07:41.633816 kubelet[3589]: I1105 15:07:41.633667 3589 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwlpq\" (UniqueName: \"kubernetes.io/projected/9ae68c23-7222-4e65-9b08-63622fb7b33b-kube-api-access-pwlpq\") pod \"9ae68c23-7222-4e65-9b08-63622fb7b33b\" (UID: \"9ae68c23-7222-4e65-9b08-63622fb7b33b\") " Nov 5 15:07:41.633891 kubelet[3589]: I1105 15:07:41.633879 3589 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ae68c23-7222-4e65-9b08-63622fb7b33b-whisker-ca-bundle\") on node \"ci-4487.0.1-a-05c7a88322\" DevicePath \"\"" Nov 5 15:07:41.639301 systemd[1]: var-lib-kubelet-pods-9ae68c23\x2d7222\x2d4e65\x2d9b08\x2d63622fb7b33b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 5 15:07:41.640113 kubelet[3589]: I1105 15:07:41.640025 3589 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ae68c23-7222-4e65-9b08-63622fb7b33b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9ae68c23-7222-4e65-9b08-63622fb7b33b" (UID: "9ae68c23-7222-4e65-9b08-63622fb7b33b"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 15:07:41.642111 systemd[1]: var-lib-kubelet-pods-9ae68c23\x2d7222\x2d4e65\x2d9b08\x2d63622fb7b33b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpwlpq.mount: Deactivated successfully. Nov 5 15:07:41.643151 kubelet[3589]: I1105 15:07:41.642975 3589 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ae68c23-7222-4e65-9b08-63622fb7b33b-kube-api-access-pwlpq" (OuterVolumeSpecName: "kube-api-access-pwlpq") pod "9ae68c23-7222-4e65-9b08-63622fb7b33b" (UID: "9ae68c23-7222-4e65-9b08-63622fb7b33b"). InnerVolumeSpecName "kube-api-access-pwlpq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:07:41.735139 kubelet[3589]: I1105 15:07:41.735107 3589 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9ae68c23-7222-4e65-9b08-63622fb7b33b-whisker-backend-key-pair\") on node \"ci-4487.0.1-a-05c7a88322\" DevicePath \"\"" Nov 5 15:07:41.735294 kubelet[3589]: I1105 15:07:41.735284 3589 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pwlpq\" (UniqueName: \"kubernetes.io/projected/9ae68c23-7222-4e65-9b08-63622fb7b33b-kube-api-access-pwlpq\") on node \"ci-4487.0.1-a-05c7a88322\" DevicePath \"\"" Nov 5 15:07:42.327484 systemd[1]: Removed slice kubepods-besteffort-pod9ae68c23_7222_4e65_9b08_63622fb7b33b.slice - libcontainer container kubepods-besteffort-pod9ae68c23_7222_4e65_9b08_63622fb7b33b.slice. Nov 5 15:07:42.524963 systemd[1]: Created slice kubepods-besteffort-pod7b333bcc_6bcd_4c30_b1c0_19548616455c.slice - libcontainer container kubepods-besteffort-pod7b333bcc_6bcd_4c30_b1c0_19548616455c.slice. Nov 5 15:07:42.539249 kubelet[3589]: I1105 15:07:42.538633 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv6kp\" (UniqueName: \"kubernetes.io/projected/7b333bcc-6bcd-4c30-b1c0-19548616455c-kube-api-access-bv6kp\") pod \"whisker-66c5db86d6-4vr9m\" (UID: \"7b333bcc-6bcd-4c30-b1c0-19548616455c\") " pod="calico-system/whisker-66c5db86d6-4vr9m" Nov 5 15:07:42.539249 kubelet[3589]: I1105 15:07:42.538984 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7b333bcc-6bcd-4c30-b1c0-19548616455c-whisker-backend-key-pair\") pod \"whisker-66c5db86d6-4vr9m\" (UID: \"7b333bcc-6bcd-4c30-b1c0-19548616455c\") " pod="calico-system/whisker-66c5db86d6-4vr9m" Nov 5 15:07:42.539249 kubelet[3589]: I1105 15:07:42.539009 3589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b333bcc-6bcd-4c30-b1c0-19548616455c-whisker-ca-bundle\") pod \"whisker-66c5db86d6-4vr9m\" (UID: \"7b333bcc-6bcd-4c30-b1c0-19548616455c\") " pod="calico-system/whisker-66c5db86d6-4vr9m" Nov 5 15:07:42.547182 containerd[2056]: time="2025-11-05T15:07:42.547054466Z" level=info msg="TaskExit event in podsandbox handler container_id:\"20607295be6c34d0e78530a06ddff2fadef7a4d6779a246f9622052b0d6bb9b9\" id:\"91fff41a51a51d98304bd98f2cf72ae868d403ec6696742caf056f559d27e8a0\" pid:4642 exit_status:1 exited_at:{seconds:1762355262 nanos:546745483}" Nov 5 15:07:42.953623 containerd[2056]: time="2025-11-05T15:07:42.953570538Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-66c5db86d6-4vr9m,Uid:7b333bcc-6bcd-4c30-b1c0-19548616455c,Namespace:calico-system,Attempt:0,}" Nov 5 15:07:43.494050 systemd-networkd[1661]: caliccb7eda21df: Link UP Nov 5 15:07:43.494596 systemd-networkd[1661]: caliccb7eda21df: Gained carrier Nov 5 15:07:43.518100 containerd[2056]: 2025-11-05 15:07:43.301 [INFO][4797] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--a--05c7a88322-k8s-whisker--66c5db86d6--4vr9m-eth0 whisker-66c5db86d6- calico-system 7b333bcc-6bcd-4c30-b1c0-19548616455c 934 0 2025-11-05 15:07:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66c5db86d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4487.0.1-a-05c7a88322 whisker-66c5db86d6-4vr9m eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliccb7eda21df [] [] }} ContainerID="8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136" Namespace="calico-system" Pod="whisker-66c5db86d6-4vr9m" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-whisker--66c5db86d6--4vr9m-" Nov 5 15:07:43.518100 containerd[2056]: 2025-11-05 15:07:43.302 [INFO][4797] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136" Namespace="calico-system" Pod="whisker-66c5db86d6-4vr9m" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-whisker--66c5db86d6--4vr9m-eth0" Nov 5 15:07:43.518100 containerd[2056]: 2025-11-05 15:07:43.323 [INFO][4810] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136" HandleID="k8s-pod-network.8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136" Workload="ci--4487.0.1--a--05c7a88322-k8s-whisker--66c5db86d6--4vr9m-eth0" Nov 5 15:07:43.518525 containerd[2056]: 2025-11-05 15:07:43.323 [INFO][4810] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136" HandleID="k8s-pod-network.8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136" Workload="ci--4487.0.1--a--05c7a88322-k8s-whisker--66c5db86d6--4vr9m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.1-a-05c7a88322", "pod":"whisker-66c5db86d6-4vr9m", "timestamp":"2025-11-05 15:07:43.323703178 +0000 UTC"}, Hostname:"ci-4487.0.1-a-05c7a88322", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:07:43.518525 containerd[2056]: 2025-11-05 15:07:43.323 [INFO][4810] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:07:43.518525 containerd[2056]: 2025-11-05 15:07:43.323 [INFO][4810] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:07:43.518525 containerd[2056]: 2025-11-05 15:07:43.323 [INFO][4810] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-a-05c7a88322' Nov 5 15:07:43.518525 containerd[2056]: 2025-11-05 15:07:43.329 [INFO][4810] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:43.518525 containerd[2056]: 2025-11-05 15:07:43.332 [INFO][4810] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:43.518525 containerd[2056]: 2025-11-05 15:07:43.335 [INFO][4810] ipam/ipam.go 511: Trying affinity for 192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:43.518525 containerd[2056]: 2025-11-05 15:07:43.336 [INFO][4810] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:43.518525 containerd[2056]: 2025-11-05 15:07:43.337 [INFO][4810] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:43.518915 containerd[2056]: 2025-11-05 15:07:43.337 [INFO][4810] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.128/26 handle="k8s-pod-network.8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:43.518915 containerd[2056]: 2025-11-05 15:07:43.338 [INFO][4810] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136 Nov 5 15:07:43.518915 containerd[2056]: 2025-11-05 15:07:43.343 [INFO][4810] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.128/26 handle="k8s-pod-network.8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:43.518915 containerd[2056]: 2025-11-05 15:07:43.364 [INFO][4810] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.129/26] block=192.168.50.128/26 handle="k8s-pod-network.8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:43.518915 containerd[2056]: 2025-11-05 15:07:43.364 [INFO][4810] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.129/26] handle="k8s-pod-network.8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:43.518915 containerd[2056]: 2025-11-05 15:07:43.365 [INFO][4810] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:07:43.518915 containerd[2056]: 2025-11-05 15:07:43.365 [INFO][4810] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.129/26] IPv6=[] ContainerID="8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136" HandleID="k8s-pod-network.8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136" Workload="ci--4487.0.1--a--05c7a88322-k8s-whisker--66c5db86d6--4vr9m-eth0" Nov 5 15:07:43.519023 containerd[2056]: 2025-11-05 15:07:43.367 [INFO][4797] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136" Namespace="calico-system" Pod="whisker-66c5db86d6-4vr9m" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-whisker--66c5db86d6--4vr9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--05c7a88322-k8s-whisker--66c5db86d6--4vr9m-eth0", GenerateName:"whisker-66c5db86d6-", Namespace:"calico-system", SelfLink:"", UID:"7b333bcc-6bcd-4c30-b1c0-19548616455c", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 7, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66c5db86d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-05c7a88322", ContainerID:"", Pod:"whisker-66c5db86d6-4vr9m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliccb7eda21df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:07:43.519023 containerd[2056]: 2025-11-05 15:07:43.367 [INFO][4797] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.129/32] ContainerID="8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136" Namespace="calico-system" Pod="whisker-66c5db86d6-4vr9m" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-whisker--66c5db86d6--4vr9m-eth0" Nov 5 15:07:43.519076 containerd[2056]: 2025-11-05 15:07:43.367 [INFO][4797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccb7eda21df ContainerID="8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136" Namespace="calico-system" Pod="whisker-66c5db86d6-4vr9m" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-whisker--66c5db86d6--4vr9m-eth0" Nov 5 15:07:43.519076 containerd[2056]: 2025-11-05 15:07:43.496 [INFO][4797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136" Namespace="calico-system" Pod="whisker-66c5db86d6-4vr9m" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-whisker--66c5db86d6--4vr9m-eth0" Nov 5 15:07:43.519107 containerd[2056]: 2025-11-05 15:07:43.496 [INFO][4797] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136" Namespace="calico-system" 
Pod="whisker-66c5db86d6-4vr9m" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-whisker--66c5db86d6--4vr9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--05c7a88322-k8s-whisker--66c5db86d6--4vr9m-eth0", GenerateName:"whisker-66c5db86d6-", Namespace:"calico-system", SelfLink:"", UID:"7b333bcc-6bcd-4c30-b1c0-19548616455c", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 7, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66c5db86d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-05c7a88322", ContainerID:"8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136", Pod:"whisker-66c5db86d6-4vr9m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliccb7eda21df", MAC:"3e:40:44:da:f7:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:07:43.519140 containerd[2056]: 2025-11-05 15:07:43.514 [INFO][4797] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136" Namespace="calico-system" Pod="whisker-66c5db86d6-4vr9m" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-whisker--66c5db86d6--4vr9m-eth0" Nov 5 15:07:43.563498 systemd-networkd[1661]: vxlan.calico: Link UP Nov 5 15:07:43.563504 systemd-networkd[1661]: vxlan.calico: Gained carrier Nov 5 15:07:43.569689 containerd[2056]: time="2025-11-05T15:07:43.569304739Z" level=info msg="connecting to shim 8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136" address="unix:///run/containerd/s/240732831071d285ad46329e606df83a875a345c414130d471098587239e065b" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:07:43.603568 systemd[1]: Started cri-containerd-8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136.scope - libcontainer container 8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136. 
Nov 5 15:07:43.655176 containerd[2056]: time="2025-11-05T15:07:43.655138880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66c5db86d6-4vr9m,Uid:7b333bcc-6bcd-4c30-b1c0-19548616455c,Namespace:calico-system,Attempt:0,} returns sandbox id \"8fa470d23cd4d2aa588afbc7c3c2ae75caf1337bad790011e1728d3628fb8136\"" Nov 5 15:07:43.657685 containerd[2056]: time="2025-11-05T15:07:43.656970516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:07:43.911365 containerd[2056]: time="2025-11-05T15:07:43.911192620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:07:43.914441 containerd[2056]: time="2025-11-05T15:07:43.914405024Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:07:43.914653 containerd[2056]: time="2025-11-05T15:07:43.914488570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:07:43.915036 kubelet[3589]: E1105 15:07:43.914774 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:07:43.915036 kubelet[3589]: E1105 15:07:43.914836 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:07:43.916751 kubelet[3589]: E1105 15:07:43.916714 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-66c5db86d6-4vr9m_calico-system(7b333bcc-6bcd-4c30-b1c0-19548616455c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:07:43.917721 containerd[2056]: time="2025-11-05T15:07:43.917675638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:07:44.244761 containerd[2056]: time="2025-11-05T15:07:44.244716991Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:07:44.247967 containerd[2056]: time="2025-11-05T15:07:44.247929133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:07:44.248031 containerd[2056]: time="2025-11-05T15:07:44.248017103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:07:44.248270 kubelet[3589]: E1105 15:07:44.248201 3589 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:07:44.248270 kubelet[3589]: E1105 15:07:44.248255 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:07:44.248530 kubelet[3589]: E1105 15:07:44.248511 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66c5db86d6-4vr9m_calico-system(7b333bcc-6bcd-4c30-b1c0-19548616455c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:07:44.248686 kubelet[3589]: E1105 15:07:44.248612 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c5db86d6-4vr9m" podUID="7b333bcc-6bcd-4c30-b1c0-19548616455c" Nov 5 15:07:44.324034 kubelet[3589]: I1105 15:07:44.323996 3589 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ae68c23-7222-4e65-9b08-63622fb7b33b" path="/var/lib/kubelet/pods/9ae68c23-7222-4e65-9b08-63622fb7b33b/volumes" Nov 5 15:07:44.448634 kubelet[3589]: E1105 15:07:44.448590 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c5db86d6-4vr9m" podUID="7b333bcc-6bcd-4c30-b1c0-19548616455c" Nov 5 15:07:44.888559 systemd-networkd[1661]: vxlan.calico: Gained IPv6LL Nov 5 15:07:45.017502 
systemd-networkd[1661]: caliccb7eda21df: Gained IPv6LL Nov 5 15:07:45.450851 kubelet[3589]: E1105 15:07:45.450802 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c5db86d6-4vr9m" podUID="7b333bcc-6bcd-4c30-b1c0-19548616455c" Nov 5 15:07:47.327442 containerd[2056]: time="2025-11-05T15:07:47.327401551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65d654bb8-d6fmx,Uid:c1f198dc-261d-4ac5-8860-91734a6c009d,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:07:47.411367 systemd-networkd[1661]: cali4754b156501: Link UP Nov 5 15:07:47.412411 systemd-networkd[1661]: cali4754b156501: Gained carrier Nov 5 15:07:47.428308 containerd[2056]: 2025-11-05 15:07:47.358 [INFO][4935] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--65d654bb8--d6fmx-eth0 calico-apiserver-65d654bb8- calico-apiserver c1f198dc-261d-4ac5-8860-91734a6c009d 866 0 2025-11-05 15:07:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65d654bb8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.1-a-05c7a88322 calico-apiserver-65d654bb8-d6fmx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4754b156501 [] [] }} ContainerID="efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e" Namespace="calico-apiserver" Pod="calico-apiserver-65d654bb8-d6fmx" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--65d654bb8--d6fmx-" Nov 5 15:07:47.428308 containerd[2056]: 2025-11-05 15:07:47.358 [INFO][4935] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e" Namespace="calico-apiserver" Pod="calico-apiserver-65d654bb8-d6fmx" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--65d654bb8--d6fmx-eth0" Nov 5 15:07:47.428308 containerd[2056]: 2025-11-05 15:07:47.377 [INFO][4946] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e" HandleID="k8s-pod-network.efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e" Workload="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--65d654bb8--d6fmx-eth0" Nov 5 15:07:47.428610 containerd[2056]: 2025-11-05 15:07:47.377 [INFO][4946] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e" 
HandleID="k8s-pod-network.efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e" Workload="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--65d654bb8--d6fmx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b260), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.1-a-05c7a88322", "pod":"calico-apiserver-65d654bb8-d6fmx", "timestamp":"2025-11-05 15:07:47.377454874 +0000 UTC"}, Hostname:"ci-4487.0.1-a-05c7a88322", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:07:47.428610 containerd[2056]: 2025-11-05 15:07:47.377 [INFO][4946] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:07:47.428610 containerd[2056]: 2025-11-05 15:07:47.377 [INFO][4946] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:07:47.428610 containerd[2056]: 2025-11-05 15:07:47.377 [INFO][4946] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-a-05c7a88322' Nov 5 15:07:47.428610 containerd[2056]: 2025-11-05 15:07:47.382 [INFO][4946] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:47.428610 containerd[2056]: 2025-11-05 15:07:47.386 [INFO][4946] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:47.428610 containerd[2056]: 2025-11-05 15:07:47.389 [INFO][4946] ipam/ipam.go 511: Trying affinity for 192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:47.428610 containerd[2056]: 2025-11-05 15:07:47.390 [INFO][4946] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:47.428610 containerd[2056]: 2025-11-05 15:07:47.391 [INFO][4946] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:47.428875 containerd[2056]: 2025-11-05 15:07:47.391 [INFO][4946] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.128/26 handle="k8s-pod-network.efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:47.428875 containerd[2056]: 2025-11-05 15:07:47.393 [INFO][4946] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e Nov 5 15:07:47.428875 containerd[2056]: 2025-11-05 15:07:47.399 [INFO][4946] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.128/26 handle="k8s-pod-network.efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:47.428875 containerd[2056]: 2025-11-05 15:07:47.405 [INFO][4946] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.130/26] block=192.168.50.128/26 handle="k8s-pod-network.efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:47.428875 containerd[2056]: 2025-11-05 15:07:47.405 [INFO][4946] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.130/26] handle="k8s-pod-network.efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:47.428875 containerd[2056]: 2025-11-05 15:07:47.405 [INFO][4946] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:07:47.428875 containerd[2056]: 2025-11-05 15:07:47.405 [INFO][4946] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.130/26] IPv6=[] ContainerID="efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e" HandleID="k8s-pod-network.efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e" Workload="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--65d654bb8--d6fmx-eth0" Nov 5 15:07:47.428989 containerd[2056]: 2025-11-05 15:07:47.407 [INFO][4935] cni-plugin/k8s.go 418: Populated endpoint ContainerID="efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e" Namespace="calico-apiserver" Pod="calico-apiserver-65d654bb8-d6fmx" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--65d654bb8--d6fmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--65d654bb8--d6fmx-eth0", GenerateName:"calico-apiserver-65d654bb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"c1f198dc-261d-4ac5-8860-91734a6c009d", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65d654bb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-05c7a88322", ContainerID:"", Pod:"calico-apiserver-65d654bb8-d6fmx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4754b156501", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:07:47.429046 containerd[2056]: 2025-11-05 15:07:47.407 [INFO][4935] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.130/32] ContainerID="efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e" Namespace="calico-apiserver" Pod="calico-apiserver-65d654bb8-d6fmx" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--65d654bb8--d6fmx-eth0" Nov 5 15:07:47.429046 containerd[2056]: 2025-11-05 15:07:47.407 [INFO][4935] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4754b156501 ContainerID="efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e" Namespace="calico-apiserver" Pod="calico-apiserver-65d654bb8-d6fmx" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--65d654bb8--d6fmx-eth0" Nov 5 15:07:47.429046 containerd[2056]: 2025-11-05 15:07:47.411 [INFO][4935] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e" Namespace="calico-apiserver" Pod="calico-apiserver-65d654bb8-d6fmx" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--65d654bb8--d6fmx-eth0" Nov 5 15:07:47.429111 containerd[2056]: 2025-11-05 15:07:47.412 [INFO][4935] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e" Namespace="calico-apiserver" Pod="calico-apiserver-65d654bb8-d6fmx" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--65d654bb8--d6fmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--65d654bb8--d6fmx-eth0", GenerateName:"calico-apiserver-65d654bb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"c1f198dc-261d-4ac5-8860-91734a6c009d", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 7, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65d654bb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-05c7a88322", ContainerID:"efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e", Pod:"calico-apiserver-65d654bb8-d6fmx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4754b156501", MAC:"82:7f:52:b5:77:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:07:47.429148 containerd[2056]: 2025-11-05 15:07:47.426 [INFO][4935] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e" Namespace="calico-apiserver" Pod="calico-apiserver-65d654bb8-d6fmx" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--65d654bb8--d6fmx-eth0" Nov 5 15:07:47.476989 containerd[2056]: time="2025-11-05T15:07:47.476943026Z" level=info msg="connecting to shim efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e" address="unix:///run/containerd/s/031fc55d926a7e1da66c3e6daf07586e1628841858a789ec60aa2f923d9d6d7a" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:07:47.499510 systemd[1]: Started cri-containerd-efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e.scope - libcontainer container efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e. 
Nov 5 15:07:47.532328 containerd[2056]: time="2025-11-05T15:07:47.532287992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65d654bb8-d6fmx,Uid:c1f198dc-261d-4ac5-8860-91734a6c009d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"efa8e6ec2145fab63df30de388527dadd3a3506ff4506b613c9a13fe1da08c4e\"" Nov 5 15:07:47.535305 containerd[2056]: time="2025-11-05T15:07:47.535276337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:07:47.822103 containerd[2056]: time="2025-11-05T15:07:47.822049507Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:07:47.826154 containerd[2056]: time="2025-11-05T15:07:47.825044836Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:07:47.826154 containerd[2056]: time="2025-11-05T15:07:47.825126214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:07:47.826295 kubelet[3589]: E1105 15:07:47.825297 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:07:47.826295 kubelet[3589]: E1105 15:07:47.825378 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:07:47.826295 kubelet[3589]: E1105 15:07:47.825465 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65d654bb8-d6fmx_calico-apiserver(c1f198dc-261d-4ac5-8860-91734a6c009d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:07:47.826295 kubelet[3589]: E1105 15:07:47.825491 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65d654bb8-d6fmx" podUID="c1f198dc-261d-4ac5-8860-91734a6c009d" Nov 5 15:07:48.328379 containerd[2056]: time="2025-11-05T15:07:48.328272363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-lg4lc,Uid:460d4776-c8b3-4dec-911d-f1ebdf0cfa3b,Namespace:calico-system,Attempt:0,}" Nov 5 15:07:48.419567 systemd-networkd[1661]: calida03abcf4ca: Link UP Nov 5 15:07:48.420338 systemd-networkd[1661]: calida03abcf4ca: Gained carrier Nov 5 15:07:48.437337 containerd[2056]: 
2025-11-05 15:07:48.357 [INFO][5006] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--a--05c7a88322-k8s-goldmane--7c778bb748--lg4lc-eth0 goldmane-7c778bb748- calico-system 460d4776-c8b3-4dec-911d-f1ebdf0cfa3b 862 0 2025-11-05 15:07:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4487.0.1-a-05c7a88322 goldmane-7c778bb748-lg4lc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calida03abcf4ca [] [] }} ContainerID="4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b" Namespace="calico-system" Pod="goldmane-7c778bb748-lg4lc" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-goldmane--7c778bb748--lg4lc-" Nov 5 15:07:48.437337 containerd[2056]: 2025-11-05 15:07:48.357 [INFO][5006] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b" Namespace="calico-system" Pod="goldmane-7c778bb748-lg4lc" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-goldmane--7c778bb748--lg4lc-eth0" Nov 5 15:07:48.437337 containerd[2056]: 2025-11-05 15:07:48.381 [INFO][5021] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b" HandleID="k8s-pod-network.4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b" Workload="ci--4487.0.1--a--05c7a88322-k8s-goldmane--7c778bb748--lg4lc-eth0" Nov 5 15:07:48.437536 containerd[2056]: 2025-11-05 15:07:48.381 [INFO][5021] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b" HandleID="k8s-pod-network.4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b" Workload="ci--4487.0.1--a--05c7a88322-k8s-goldmane--7c778bb748--lg4lc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.1-a-05c7a88322", "pod":"goldmane-7c778bb748-lg4lc", "timestamp":"2025-11-05 15:07:48.381065521 +0000 UTC"}, Hostname:"ci-4487.0.1-a-05c7a88322", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:07:48.437536 containerd[2056]: 2025-11-05 15:07:48.381 [INFO][5021] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:07:48.437536 containerd[2056]: 2025-11-05 15:07:48.381 [INFO][5021] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:07:48.437536 containerd[2056]: 2025-11-05 15:07:48.381 [INFO][5021] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-a-05c7a88322' Nov 5 15:07:48.437536 containerd[2056]: 2025-11-05 15:07:48.386 [INFO][5021] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:48.437536 containerd[2056]: 2025-11-05 15:07:48.389 [INFO][5021] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:48.437536 containerd[2056]: 2025-11-05 15:07:48.392 [INFO][5021] ipam/ipam.go 511: Trying affinity for 192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:48.437536 containerd[2056]: 2025-11-05 15:07:48.394 [INFO][5021] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:48.437536 containerd[2056]: 2025-11-05 15:07:48.395 [INFO][5021] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:48.437688 containerd[2056]: 2025-11-05 15:07:48.395 [INFO][5021] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.128/26 handle="k8s-pod-network.4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:48.437688 containerd[2056]: 2025-11-05 15:07:48.396 [INFO][5021] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b Nov 5 15:07:48.437688 containerd[2056]: 2025-11-05 15:07:48.407 [INFO][5021] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.128/26 handle="k8s-pod-network.4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:48.437688 containerd[2056]: 2025-11-05 15:07:48.412 [INFO][5021] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.131/26] block=192.168.50.128/26 handle="k8s-pod-network.4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:48.437688 containerd[2056]: 2025-11-05 15:07:48.412 [INFO][5021] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.131/26] handle="k8s-pod-network.4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:48.437688 containerd[2056]: 2025-11-05 15:07:48.412 [INFO][5021] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:07:48.437688 containerd[2056]: 2025-11-05 15:07:48.412 [INFO][5021] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.131/26] IPv6=[] ContainerID="4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b" HandleID="k8s-pod-network.4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b" Workload="ci--4487.0.1--a--05c7a88322-k8s-goldmane--7c778bb748--lg4lc-eth0" Nov 5 15:07:48.438211 containerd[2056]: 2025-11-05 15:07:48.415 [INFO][5006] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b" Namespace="calico-system" Pod="goldmane-7c778bb748-lg4lc" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-goldmane--7c778bb748--lg4lc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--05c7a88322-k8s-goldmane--7c778bb748--lg4lc-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"460d4776-c8b3-4dec-911d-f1ebdf0cfa3b", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 7, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-05c7a88322", ContainerID:"", Pod:"goldmane-7c778bb748-lg4lc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calida03abcf4ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:07:48.438389 containerd[2056]: 2025-11-05 15:07:48.415 [INFO][5006] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.131/32] ContainerID="4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b" Namespace="calico-system" Pod="goldmane-7c778bb748-lg4lc" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-goldmane--7c778bb748--lg4lc-eth0" Nov 5 15:07:48.438389 containerd[2056]: 2025-11-05 15:07:48.415 [INFO][5006] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida03abcf4ca ContainerID="4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b" Namespace="calico-system" Pod="goldmane-7c778bb748-lg4lc" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-goldmane--7c778bb748--lg4lc-eth0" Nov 5 15:07:48.438389 containerd[2056]: 2025-11-05 15:07:48.420 [INFO][5006] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b" Namespace="calico-system" Pod="goldmane-7c778bb748-lg4lc" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-goldmane--7c778bb748--lg4lc-eth0" Nov 5 15:07:48.438595 containerd[2056]: 2025-11-05 15:07:48.420 [INFO][5006] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b" 
Namespace="calico-system" Pod="goldmane-7c778bb748-lg4lc" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-goldmane--7c778bb748--lg4lc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--05c7a88322-k8s-goldmane--7c778bb748--lg4lc-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"460d4776-c8b3-4dec-911d-f1ebdf0cfa3b", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 7, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-05c7a88322", ContainerID:"4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b", Pod:"goldmane-7c778bb748-lg4lc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calida03abcf4ca", MAC:"22:77:da:fc:c0:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:07:48.438642 containerd[2056]: 2025-11-05 15:07:48.434 [INFO][5006] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b" Namespace="calico-system" Pod="goldmane-7c778bb748-lg4lc" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-goldmane--7c778bb748--lg4lc-eth0" Nov 5 15:07:48.455554 kubelet[3589]: E1105 15:07:48.455334 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65d654bb8-d6fmx" podUID="c1f198dc-261d-4ac5-8860-91734a6c009d" Nov 5 15:07:48.487573 containerd[2056]: time="2025-11-05T15:07:48.487531113Z" level=info msg="connecting to shim 4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b" address="unix:///run/containerd/s/bbd81189cf938e99ec5cab09fbb26b794a49c19119b7691c1c140aae7065105e" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:07:48.510567 systemd[1]: Started cri-containerd-4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b.scope - libcontainer container 4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b. 
Nov 5 15:07:48.703311 containerd[2056]: time="2025-11-05T15:07:48.703087978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-lg4lc,Uid:460d4776-c8b3-4dec-911d-f1ebdf0cfa3b,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ef8a0bca48be26e37666748ecb6105d3af265711ecb098c5c09e1e3dfd6dd8b\"" Nov 5 15:07:48.705462 containerd[2056]: time="2025-11-05T15:07:48.705334227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:07:49.072818 containerd[2056]: time="2025-11-05T15:07:49.072769098Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:07:49.075868 containerd[2056]: time="2025-11-05T15:07:49.075831189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:07:49.075957 containerd[2056]: time="2025-11-05T15:07:49.075915023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:07:49.076111 kubelet[3589]: E1105 15:07:49.076062 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:07:49.076450 kubelet[3589]: E1105 15:07:49.076120 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:07:49.076450 kubelet[3589]: E1105 15:07:49.076185 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-lg4lc_calico-system(460d4776-c8b3-4dec-911d-f1ebdf0cfa3b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:07:49.076450 kubelet[3589]: E1105 15:07:49.076210 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lg4lc" podUID="460d4776-c8b3-4dec-911d-f1ebdf0cfa3b" Nov 5 15:07:49.240674 systemd-networkd[1661]: cali4754b156501: Gained IPv6LL Nov 5 15:07:49.329056 containerd[2056]: time="2025-11-05T15:07:49.328954073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-72tws,Uid:67043604-b63e-4022-91e3-c77c7a05a34a,Namespace:kube-system,Attempt:0,}" Nov 5 15:07:49.333582 containerd[2056]: time="2025-11-05T15:07:49.333552253Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-nr498,Uid:5175072d-c97d-4e97-bbe9-4eb6c98f1e6a,Namespace:calico-system,Attempt:0,}" Nov 5 15:07:49.448997 systemd-networkd[1661]: calie0a5fdb6d60: Link UP Nov 5 15:07:49.449511 systemd-networkd[1661]: calie0a5fdb6d60: Gained carrier Nov 5 15:07:49.466740 kubelet[3589]: E1105 15:07:49.466705 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lg4lc" podUID="460d4776-c8b3-4dec-911d-f1ebdf0cfa3b" Nov 5 15:07:49.467493 kubelet[3589]: E1105 15:07:49.467466 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65d654bb8-d6fmx" podUID="c1f198dc-261d-4ac5-8860-91734a6c009d" Nov 5 15:07:49.472893 containerd[2056]: 2025-11-05 15:07:49.373 [INFO][5090] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--72tws-eth0 coredns-66bc5c9577- kube-system 67043604-b63e-4022-91e3-c77c7a05a34a 865 0 2025-11-05 15:07:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487.0.1-a-05c7a88322 coredns-66bc5c9577-72tws eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie0a5fdb6d60 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481" Namespace="kube-system" Pod="coredns-66bc5c9577-72tws" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--72tws-" Nov 5 15:07:49.472893 containerd[2056]: 2025-11-05 15:07:49.373 [INFO][5090] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481" Namespace="kube-system" Pod="coredns-66bc5c9577-72tws" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--72tws-eth0" Nov 5 15:07:49.472893 containerd[2056]: 2025-11-05 15:07:49.404 [INFO][5112] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481" HandleID="k8s-pod-network.d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481" Workload="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--72tws-eth0" Nov 5 15:07:49.473043 containerd[2056]: 2025-11-05 15:07:49.404 [INFO][5112] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481" 
HandleID="k8s-pod-network.d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481" Workload="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--72tws-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000330050), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487.0.1-a-05c7a88322", "pod":"coredns-66bc5c9577-72tws", "timestamp":"2025-11-05 15:07:49.40443595 +0000 UTC"}, Hostname:"ci-4487.0.1-a-05c7a88322", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:07:49.473043 containerd[2056]: 2025-11-05 15:07:49.404 [INFO][5112] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:07:49.473043 containerd[2056]: 2025-11-05 15:07:49.404 [INFO][5112] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:07:49.473043 containerd[2056]: 2025-11-05 15:07:49.404 [INFO][5112] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-a-05c7a88322' Nov 5 15:07:49.473043 containerd[2056]: 2025-11-05 15:07:49.412 [INFO][5112] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:49.473043 containerd[2056]: 2025-11-05 15:07:49.417 [INFO][5112] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:49.473043 containerd[2056]: 2025-11-05 15:07:49.421 [INFO][5112] ipam/ipam.go 511: Trying affinity for 192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:49.473043 containerd[2056]: 2025-11-05 15:07:49.422 [INFO][5112] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:49.473043 containerd[2056]: 2025-11-05 15:07:49.424 [INFO][5112] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:49.473301 containerd[2056]: 2025-11-05 15:07:49.424 [INFO][5112] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.128/26 handle="k8s-pod-network.d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:49.473301 containerd[2056]: 2025-11-05 15:07:49.425 [INFO][5112] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481 Nov 5 15:07:49.473301 containerd[2056]: 2025-11-05 15:07:49.429 [INFO][5112] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.128/26 handle="k8s-pod-network.d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:49.473301 containerd[2056]: 2025-11-05 15:07:49.439 [INFO][5112] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.132/26] block=192.168.50.128/26 handle="k8s-pod-network.d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:49.473301 containerd[2056]: 2025-11-05 15:07:49.439 [INFO][5112] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.132/26] handle="k8s-pod-network.d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:49.473301 containerd[2056]: 2025-11-05 15:07:49.439 [INFO][5112] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:07:49.473301 containerd[2056]: 2025-11-05 15:07:49.439 [INFO][5112] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.132/26] IPv6=[] ContainerID="d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481" HandleID="k8s-pod-network.d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481" Workload="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--72tws-eth0" Nov 5 15:07:49.473414 containerd[2056]: 2025-11-05 15:07:49.443 [INFO][5090] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481" Namespace="kube-system" Pod="coredns-66bc5c9577-72tws" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--72tws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--72tws-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"67043604-b63e-4022-91e3-c77c7a05a34a", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-05c7a88322", ContainerID:"", Pod:"coredns-66bc5c9577-72tws", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0a5fdb6d60", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:07:49.473414 containerd[2056]: 2025-11-05 15:07:49.443 [INFO][5090] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.132/32] ContainerID="d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481" Namespace="kube-system" Pod="coredns-66bc5c9577-72tws" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--72tws-eth0" Nov 5 15:07:49.473414 containerd[2056]: 2025-11-05 15:07:49.443 [INFO][5090] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0a5fdb6d60 ContainerID="d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481" Namespace="kube-system" Pod="coredns-66bc5c9577-72tws" 
WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--72tws-eth0" Nov 5 15:07:49.473414 containerd[2056]: 2025-11-05 15:07:49.449 [INFO][5090] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481" Namespace="kube-system" Pod="coredns-66bc5c9577-72tws" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--72tws-eth0" Nov 5 15:07:49.473414 containerd[2056]: 2025-11-05 15:07:49.450 [INFO][5090] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481" Namespace="kube-system" Pod="coredns-66bc5c9577-72tws" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--72tws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--72tws-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"67043604-b63e-4022-91e3-c77c7a05a34a", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-05c7a88322", ContainerID:"d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481", Pod:"coredns-66bc5c9577-72tws", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0a5fdb6d60", MAC:"2e:69:2d:ed:1b:53", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:07:49.473549 containerd[2056]: 2025-11-05 15:07:49.470 [INFO][5090] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481" Namespace="kube-system" Pod="coredns-66bc5c9577-72tws" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--72tws-eth0" Nov 5 15:07:49.523950 containerd[2056]: time="2025-11-05T15:07:49.523904649Z" level=info msg="connecting to shim d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481" 
address="unix:///run/containerd/s/6249ef89167b4b5a2f7112de90564a81ac59cb4fe4c98be7a76c82da2fbfd9a9" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:07:49.558496 systemd[1]: Started cri-containerd-d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481.scope - libcontainer container d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481. Nov 5 15:07:49.588925 systemd-networkd[1661]: cali039104339af: Link UP Nov 5 15:07:49.590178 systemd-networkd[1661]: cali039104339af: Gained carrier Nov 5 15:07:49.609525 containerd[2056]: 2025-11-05 15:07:49.378 [INFO][5098] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--a--05c7a88322-k8s-csi--node--driver--nr498-eth0 csi-node-driver- calico-system 5175072d-c97d-4e97-bbe9-4eb6c98f1e6a 751 0 2025-11-05 15:07:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4487.0.1-a-05c7a88322 csi-node-driver-nr498 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali039104339af [] [] }} ContainerID="d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505" Namespace="calico-system" Pod="csi-node-driver-nr498" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-csi--node--driver--nr498-" Nov 5 15:07:49.609525 containerd[2056]: 2025-11-05 15:07:49.378 [INFO][5098] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505" Namespace="calico-system" Pod="csi-node-driver-nr498" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-csi--node--driver--nr498-eth0" Nov 5 15:07:49.609525 containerd[2056]: 2025-11-05 15:07:49.415 [INFO][5118] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505" HandleID="k8s-pod-network.d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505" Workload="ci--4487.0.1--a--05c7a88322-k8s-csi--node--driver--nr498-eth0" Nov 5 15:07:49.609525 containerd[2056]: 2025-11-05 15:07:49.415 [INFO][5118] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505" HandleID="k8s-pod-network.d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505" Workload="ci--4487.0.1--a--05c7a88322-k8s-csi--node--driver--nr498-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3bb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.1-a-05c7a88322", "pod":"csi-node-driver-nr498", "timestamp":"2025-11-05 15:07:49.415011836 +0000 UTC"}, Hostname:"ci-4487.0.1-a-05c7a88322", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:07:49.609525 containerd[2056]: 2025-11-05 15:07:49.415 [INFO][5118] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:07:49.609525 containerd[2056]: 2025-11-05 15:07:49.439 [INFO][5118] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:07:49.609525 containerd[2056]: 2025-11-05 15:07:49.439 [INFO][5118] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-a-05c7a88322' Nov 5 15:07:49.609525 containerd[2056]: 2025-11-05 15:07:49.515 [INFO][5118] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:49.609525 containerd[2056]: 2025-11-05 15:07:49.526 [INFO][5118] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:49.609525 containerd[2056]: 2025-11-05 15:07:49.539 [INFO][5118] ipam/ipam.go 511: Trying affinity for 192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:49.609525 containerd[2056]: 2025-11-05 15:07:49.545 [INFO][5118] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:49.609525 containerd[2056]: 2025-11-05 15:07:49.549 [INFO][5118] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:49.609525 containerd[2056]: 2025-11-05 15:07:49.549 [INFO][5118] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.128/26 handle="k8s-pod-network.d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:49.609525 containerd[2056]: 2025-11-05 15:07:49.551 [INFO][5118] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505 Nov 5 15:07:49.609525 containerd[2056]: 2025-11-05 15:07:49.570 [INFO][5118] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.128/26 handle="k8s-pod-network.d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:49.609525 containerd[2056]: 2025-11-05 15:07:49.581 [INFO][5118] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.133/26] block=192.168.50.128/26 handle="k8s-pod-network.d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:49.609525 containerd[2056]: 2025-11-05 15:07:49.581 [INFO][5118] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.133/26] handle="k8s-pod-network.d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:49.609525 containerd[2056]: 2025-11-05 15:07:49.581 [INFO][5118] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:07:49.609525 containerd[2056]: 2025-11-05 15:07:49.581 [INFO][5118] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.133/26] IPv6=[] ContainerID="d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505" HandleID="k8s-pod-network.d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505" Workload="ci--4487.0.1--a--05c7a88322-k8s-csi--node--driver--nr498-eth0" Nov 5 15:07:49.611026 containerd[2056]: 2025-11-05 15:07:49.584 [INFO][5098] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505" Namespace="calico-system" Pod="csi-node-driver-nr498" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-csi--node--driver--nr498-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--05c7a88322-k8s-csi--node--driver--nr498-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5175072d-c97d-4e97-bbe9-4eb6c98f1e6a", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 7, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-05c7a88322", ContainerID:"", Pod:"csi-node-driver-nr498", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali039104339af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:07:49.611026 containerd[2056]: 2025-11-05 15:07:49.584 [INFO][5098] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.133/32] ContainerID="d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505" Namespace="calico-system" Pod="csi-node-driver-nr498" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-csi--node--driver--nr498-eth0" Nov 5 15:07:49.611026 containerd[2056]: 2025-11-05 15:07:49.584 [INFO][5098] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali039104339af ContainerID="d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505" Namespace="calico-system" Pod="csi-node-driver-nr498" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-csi--node--driver--nr498-eth0" Nov 5 15:07:49.611026 containerd[2056]: 2025-11-05 15:07:49.589 [INFO][5098] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505" Namespace="calico-system" Pod="csi-node-driver-nr498" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-csi--node--driver--nr498-eth0" Nov 5 15:07:49.611026 containerd[2056]: 2025-11-05 15:07:49.590 [INFO][5098] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505" Namespace="calico-system" Pod="csi-node-driver-nr498" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-csi--node--driver--nr498-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--05c7a88322-k8s-csi--node--driver--nr498-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5175072d-c97d-4e97-bbe9-4eb6c98f1e6a", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 7, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-05c7a88322", ContainerID:"d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505", Pod:"csi-node-driver-nr498", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali039104339af", MAC:"de:45:84:20:66:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:07:49.611026 containerd[2056]: 2025-11-05 15:07:49.606 [INFO][5098] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505" Namespace="calico-system" Pod="csi-node-driver-nr498" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-csi--node--driver--nr498-eth0" Nov 5 15:07:49.639906 containerd[2056]: time="2025-11-05T15:07:49.639865768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-72tws,Uid:67043604-b63e-4022-91e3-c77c7a05a34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481\"" Nov 5 15:07:49.651896 containerd[2056]: time="2025-11-05T15:07:49.651812972Z" level=info msg="CreateContainer within sandbox \"d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:07:49.657700 containerd[2056]: time="2025-11-05T15:07:49.657658508Z" level=info msg="connecting to shim d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505" address="unix:///run/containerd/s/0b4d96b89d053ca9fe0b68f7b996b269ec93f06003c32402c3c3fcb9e932c50a" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:07:49.678045 containerd[2056]: time="2025-11-05T15:07:49.678017135Z" level=info msg="Container 461b61bc895ac6cfd3398081e51d82a7946849ca0ea58dbefbfe926a923dd0b4: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:07:49.681535 systemd[1]: Started cri-containerd-d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505.scope - libcontainer container d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505. 
Nov 5 15:07:49.690445 containerd[2056]: time="2025-11-05T15:07:49.690408413Z" level=info msg="CreateContainer within sandbox \"d2a832112334c0bd2a6932294281686ae5311aa3205fd37e01aa50dc97882481\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"461b61bc895ac6cfd3398081e51d82a7946849ca0ea58dbefbfe926a923dd0b4\"" Nov 5 15:07:49.691646 containerd[2056]: time="2025-11-05T15:07:49.691621464Z" level=info msg="StartContainer for \"461b61bc895ac6cfd3398081e51d82a7946849ca0ea58dbefbfe926a923dd0b4\"" Nov 5 15:07:49.693990 containerd[2056]: time="2025-11-05T15:07:49.693249355Z" level=info msg="connecting to shim 461b61bc895ac6cfd3398081e51d82a7946849ca0ea58dbefbfe926a923dd0b4" address="unix:///run/containerd/s/6249ef89167b4b5a2f7112de90564a81ac59cb4fe4c98be7a76c82da2fbfd9a9" protocol=ttrpc version=3 Nov 5 15:07:49.716690 systemd[1]: Started cri-containerd-461b61bc895ac6cfd3398081e51d82a7946849ca0ea58dbefbfe926a923dd0b4.scope - libcontainer container 461b61bc895ac6cfd3398081e51d82a7946849ca0ea58dbefbfe926a923dd0b4. Nov 5 15:07:49.726515 containerd[2056]: time="2025-11-05T15:07:49.726480303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nr498,Uid:5175072d-c97d-4e97-bbe9-4eb6c98f1e6a,Namespace:calico-system,Attempt:0,} returns sandbox id \"d78a3bb79f48dd2b6edc4b17279b94c36254bdbd86f4930e1b986e24cf949505\"" Nov 5 15:07:49.728492 containerd[2056]: time="2025-11-05T15:07:49.728463859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:07:49.746923 containerd[2056]: time="2025-11-05T15:07:49.746877420Z" level=info msg="StartContainer for \"461b61bc895ac6cfd3398081e51d82a7946849ca0ea58dbefbfe926a923dd0b4\" returns successfully" Nov 5 15:07:49.880478 systemd-networkd[1661]: calida03abcf4ca: Gained IPv6LL Nov 5 15:07:50.045423 containerd[2056]: time="2025-11-05T15:07:50.045343388Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:07:50.048266 containerd[2056]: time="2025-11-05T15:07:50.048226010Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:07:50.048366 containerd[2056]: time="2025-11-05T15:07:50.048308572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:07:50.048972 kubelet[3589]: E1105 15:07:50.048484 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:07:50.048972 kubelet[3589]: E1105 15:07:50.048539 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:07:50.048972 kubelet[3589]: E1105 15:07:50.048611 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-nr498_calico-system(5175072d-c97d-4e97-bbe9-4eb6c98f1e6a): ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:07:50.050250 containerd[2056]: time="2025-11-05T15:07:50.049689426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:07:50.320028 containerd[2056]: time="2025-11-05T15:07:50.319976476Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:07:50.323224 containerd[2056]: time="2025-11-05T15:07:50.323186746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:07:50.323631 containerd[2056]: time="2025-11-05T15:07:50.323238379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:07:50.323672 kubelet[3589]: E1105 15:07:50.323371 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:07:50.323672 kubelet[3589]: E1105 15:07:50.323407 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:07:50.323672 kubelet[3589]: E1105 15:07:50.323463 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-nr498_calico-system(5175072d-c97d-4e97-bbe9-4eb6c98f1e6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:07:50.326874 kubelet[3589]: E1105 15:07:50.323518 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nr498" podUID="5175072d-c97d-4e97-bbe9-4eb6c98f1e6a" Nov 5 15:07:50.329298 containerd[2056]: 
time="2025-11-05T15:07:50.329219742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84577cbdbb-g8xvq,Uid:918f7e6e-ae2a-455d-8758-01b9af03afc5,Namespace:calico-system,Attempt:0,}" Nov 5 15:07:50.333894 containerd[2056]: time="2025-11-05T15:07:50.333674015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c94bc65c5-xbp7r,Uid:38f05b09-e539-4b0f-aa00-1af242dcf380,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:07:50.438554 systemd-networkd[1661]: calif58a36fbc45: Link UP Nov 5 15:07:50.439841 systemd-networkd[1661]: calif58a36fbc45: Gained carrier Nov 5 15:07:50.457096 containerd[2056]: 2025-11-05 15:07:50.375 [INFO][5281] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--a--05c7a88322-k8s-calico--kube--controllers--84577cbdbb--g8xvq-eth0 calico-kube-controllers-84577cbdbb- calico-system 918f7e6e-ae2a-455d-8758-01b9af03afc5 867 0 2025-11-05 15:07:26 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84577cbdbb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4487.0.1-a-05c7a88322 calico-kube-controllers-84577cbdbb-g8xvq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif58a36fbc45 [] [] }} ContainerID="416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a" Namespace="calico-system" Pod="calico-kube-controllers-84577cbdbb-g8xvq" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--kube--controllers--84577cbdbb--g8xvq-" Nov 5 15:07:50.457096 containerd[2056]: 2025-11-05 15:07:50.376 [INFO][5281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a" Namespace="calico-system" Pod="calico-kube-controllers-84577cbdbb-g8xvq" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--kube--controllers--84577cbdbb--g8xvq-eth0" Nov 5 15:07:50.457096 containerd[2056]: 2025-11-05 15:07:50.400 [INFO][5309] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a" HandleID="k8s-pod-network.416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a" Workload="ci--4487.0.1--a--05c7a88322-k8s-calico--kube--controllers--84577cbdbb--g8xvq-eth0" Nov 5 15:07:50.457096 containerd[2056]: 2025-11-05 15:07:50.401 [INFO][5309] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a" HandleID="k8s-pod-network.416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a" Workload="ci--4487.0.1--a--05c7a88322-k8s-calico--kube--controllers--84577cbdbb--g8xvq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.1-a-05c7a88322", "pod":"calico-kube-controllers-84577cbdbb-g8xvq", "timestamp":"2025-11-05 15:07:50.400963321 +0000 UTC"}, Hostname:"ci-4487.0.1-a-05c7a88322", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:07:50.457096 containerd[2056]: 2025-11-05 15:07:50.401 [INFO][5309] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 5 15:07:50.457096 containerd[2056]: 2025-11-05 15:07:50.401 [INFO][5309] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:07:50.457096 containerd[2056]: 2025-11-05 15:07:50.401 [INFO][5309] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-a-05c7a88322' Nov 5 15:07:50.457096 containerd[2056]: 2025-11-05 15:07:50.406 [INFO][5309] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:50.457096 containerd[2056]: 2025-11-05 15:07:50.409 [INFO][5309] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:50.457096 containerd[2056]: 2025-11-05 15:07:50.412 [INFO][5309] ipam/ipam.go 511: Trying affinity for 192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:50.457096 containerd[2056]: 2025-11-05 15:07:50.413 [INFO][5309] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:50.457096 containerd[2056]: 2025-11-05 15:07:50.415 [INFO][5309] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:50.457096 containerd[2056]: 2025-11-05 15:07:50.415 [INFO][5309] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.128/26 handle="k8s-pod-network.416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:50.457096 containerd[2056]: 2025-11-05 15:07:50.416 [INFO][5309] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a Nov 5 15:07:50.457096 containerd[2056]: 2025-11-05 15:07:50.423 [INFO][5309] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.128/26 handle="k8s-pod-network.416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:50.457096 containerd[2056]: 2025-11-05 15:07:50.432 [INFO][5309] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.134/26] block=192.168.50.128/26 handle="k8s-pod-network.416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:50.457096 containerd[2056]: 2025-11-05 15:07:50.432 [INFO][5309] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.134/26] handle="k8s-pod-network.416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:50.457096 containerd[2056]: 2025-11-05 15:07:50.432 [INFO][5309] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:07:50.457096 containerd[2056]: 2025-11-05 15:07:50.432 [INFO][5309] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.134/26] IPv6=[] ContainerID="416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a" HandleID="k8s-pod-network.416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a" Workload="ci--4487.0.1--a--05c7a88322-k8s-calico--kube--controllers--84577cbdbb--g8xvq-eth0" Nov 5 15:07:50.458507 containerd[2056]: 2025-11-05 15:07:50.434 [INFO][5281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a" Namespace="calico-system" Pod="calico-kube-controllers-84577cbdbb-g8xvq" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--kube--controllers--84577cbdbb--g8xvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--05c7a88322-k8s-calico--kube--controllers--84577cbdbb--g8xvq-eth0", GenerateName:"calico-kube-controllers-84577cbdbb-", Namespace:"calico-system", SelfLink:"", UID:"918f7e6e-ae2a-455d-8758-01b9af03afc5", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 7, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84577cbdbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-05c7a88322", ContainerID:"", Pod:"calico-kube-controllers-84577cbdbb-g8xvq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif58a36fbc45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:07:50.458507 containerd[2056]: 2025-11-05 15:07:50.435 [INFO][5281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.134/32] ContainerID="416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a" Namespace="calico-system" Pod="calico-kube-controllers-84577cbdbb-g8xvq" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--kube--controllers--84577cbdbb--g8xvq-eth0" Nov 5 15:07:50.458507 containerd[2056]: 2025-11-05 15:07:50.435 [INFO][5281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif58a36fbc45 ContainerID="416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a" Namespace="calico-system" Pod="calico-kube-controllers-84577cbdbb-g8xvq" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--kube--controllers--84577cbdbb--g8xvq-eth0" Nov 5 15:07:50.458507 containerd[2056]: 2025-11-05 15:07:50.440 [INFO][5281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a" Namespace="calico-system" Pod="calico-kube-controllers-84577cbdbb-g8xvq" 
WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--kube--controllers--84577cbdbb--g8xvq-eth0" Nov 5 15:07:50.458507 containerd[2056]: 2025-11-05 15:07:50.440 [INFO][5281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a" Namespace="calico-system" Pod="calico-kube-controllers-84577cbdbb-g8xvq" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--kube--controllers--84577cbdbb--g8xvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--05c7a88322-k8s-calico--kube--controllers--84577cbdbb--g8xvq-eth0", GenerateName:"calico-kube-controllers-84577cbdbb-", Namespace:"calico-system", SelfLink:"", UID:"918f7e6e-ae2a-455d-8758-01b9af03afc5", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 7, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84577cbdbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-05c7a88322", ContainerID:"416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a", Pod:"calico-kube-controllers-84577cbdbb-g8xvq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif58a36fbc45", MAC:"7a:9f:76:af:c8:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:07:50.458507 containerd[2056]: 2025-11-05 15:07:50.453 [INFO][5281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a" Namespace="calico-system" Pod="calico-kube-controllers-84577cbdbb-g8xvq" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--kube--controllers--84577cbdbb--g8xvq-eth0" Nov 5 15:07:50.469900 kubelet[3589]: E1105 15:07:50.469746 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-nr498" podUID="5175072d-c97d-4e97-bbe9-4eb6c98f1e6a" Nov 5 15:07:50.471883 kubelet[3589]: E1105 15:07:50.471830 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lg4lc" podUID="460d4776-c8b3-4dec-911d-f1ebdf0cfa3b" Nov 5 15:07:50.506523 kubelet[3589]: I1105 15:07:50.506340 3589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-72tws" podStartSLOduration=38.506325849 podStartE2EDuration="38.506325849s" podCreationTimestamp="2025-11-05 15:07:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:07:50.505516 +0000 UTC m=+44.275463253" watchObservedRunningTime="2025-11-05 15:07:50.506325849 +0000 UTC m=+44.276273094" Nov 5 15:07:50.507044 containerd[2056]: time="2025-11-05T15:07:50.506958055Z" level=info msg="connecting to shim 416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a" address="unix:///run/containerd/s/bb3b5795dffd52dcc7613c9a84fb20f80cb255f6f369c061bf0cf2f3f6b96821" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:07:50.540501 systemd[1]: Started cri-containerd-416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a.scope - libcontainer container 416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a. 
Nov 5 15:07:50.572179 systemd-networkd[1661]: califb139d0a380: Link UP Nov 5 15:07:50.574040 systemd-networkd[1661]: califb139d0a380: Gained carrier Nov 5 15:07:50.584526 systemd-networkd[1661]: calie0a5fdb6d60: Gained IPv6LL Nov 5 15:07:50.594474 containerd[2056]: 2025-11-05 15:07:50.376 [INFO][5285] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--xbp7r-eth0 calico-apiserver-7c94bc65c5- calico-apiserver 38f05b09-e539-4b0f-aa00-1af242dcf380 869 0 2025-11-05 15:07:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c94bc65c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.1-a-05c7a88322 calico-apiserver-7c94bc65c5-xbp7r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califb139d0a380 [] [] }} ContainerID="6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172" Namespace="calico-apiserver" Pod="calico-apiserver-7c94bc65c5-xbp7r" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--xbp7r-" Nov 5 15:07:50.594474 containerd[2056]: 2025-11-05 15:07:50.376 [INFO][5285] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172" Namespace="calico-apiserver" Pod="calico-apiserver-7c94bc65c5-xbp7r" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--xbp7r-eth0" Nov 5 15:07:50.594474 containerd[2056]: 2025-11-05 15:07:50.401 [INFO][5307] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172" HandleID="k8s-pod-network.6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172" Workload="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--xbp7r-eth0" Nov 5 15:07:50.594474 containerd[2056]: 2025-11-05 15:07:50.401 [INFO][5307] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172" HandleID="k8s-pod-network.6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172" Workload="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--xbp7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3da0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.1-a-05c7a88322", "pod":"calico-apiserver-7c94bc65c5-xbp7r", "timestamp":"2025-11-05 15:07:50.401531654 +0000 UTC"}, Hostname:"ci-4487.0.1-a-05c7a88322", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:07:50.594474 containerd[2056]: 2025-11-05 15:07:50.401 [INFO][5307] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:07:50.594474 containerd[2056]: 2025-11-05 15:07:50.432 [INFO][5307] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:07:50.594474 containerd[2056]: 2025-11-05 15:07:50.432 [INFO][5307] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-a-05c7a88322' Nov 5 15:07:50.594474 containerd[2056]: 2025-11-05 15:07:50.511 [INFO][5307] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:50.594474 containerd[2056]: 2025-11-05 15:07:50.531 [INFO][5307] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:50.594474 containerd[2056]: 2025-11-05 15:07:50.541 [INFO][5307] ipam/ipam.go 511: Trying affinity for 192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:50.594474 containerd[2056]: 2025-11-05 15:07:50.543 [INFO][5307] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:50.594474 containerd[2056]: 2025-11-05 15:07:50.545 [INFO][5307] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:50.594474 containerd[2056]: 2025-11-05 15:07:50.545 [INFO][5307] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.128/26 handle="k8s-pod-network.6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:50.594474 containerd[2056]: 2025-11-05 15:07:50.546 [INFO][5307] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172 Nov 5 15:07:50.594474 containerd[2056]: 2025-11-05 15:07:50.553 [INFO][5307] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.128/26 handle="k8s-pod-network.6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:50.594474 containerd[2056]: 2025-11-05 15:07:50.562 [INFO][5307] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.135/26] block=192.168.50.128/26 handle="k8s-pod-network.6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:50.594474 containerd[2056]: 2025-11-05 15:07:50.562 [INFO][5307] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.135/26] handle="k8s-pod-network.6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:50.594474 containerd[2056]: 2025-11-05 15:07:50.562 [INFO][5307] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:07:50.594474 containerd[2056]: 2025-11-05 15:07:50.562 [INFO][5307] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.135/26] IPv6=[] ContainerID="6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172" HandleID="k8s-pod-network.6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172" Workload="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--xbp7r-eth0" Nov 5 15:07:50.595343 containerd[2056]: 2025-11-05 15:07:50.564 [INFO][5285] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172" Namespace="calico-apiserver" Pod="calico-apiserver-7c94bc65c5-xbp7r" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--xbp7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--xbp7r-eth0", GenerateName:"calico-apiserver-7c94bc65c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"38f05b09-e539-4b0f-aa00-1af242dcf380", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 7, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c94bc65c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-05c7a88322", ContainerID:"", Pod:"calico-apiserver-7c94bc65c5-xbp7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb139d0a380", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:07:50.595343 containerd[2056]: 2025-11-05 15:07:50.565 [INFO][5285] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.135/32] ContainerID="6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172" Namespace="calico-apiserver" Pod="calico-apiserver-7c94bc65c5-xbp7r" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--xbp7r-eth0" Nov 5 15:07:50.595343 containerd[2056]: 2025-11-05 15:07:50.565 [INFO][5285] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb139d0a380 ContainerID="6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172" Namespace="calico-apiserver" Pod="calico-apiserver-7c94bc65c5-xbp7r" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--xbp7r-eth0" Nov 5 15:07:50.595343 containerd[2056]: 2025-11-05 15:07:50.574 [INFO][5285] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172" Namespace="calico-apiserver" Pod="calico-apiserver-7c94bc65c5-xbp7r" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--xbp7r-eth0" Nov 5 15:07:50.595343 containerd[2056]: 2025-11-05 15:07:50.574 [INFO][5285] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172" Namespace="calico-apiserver" Pod="calico-apiserver-7c94bc65c5-xbp7r" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--xbp7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--xbp7r-eth0", GenerateName:"calico-apiserver-7c94bc65c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"38f05b09-e539-4b0f-aa00-1af242dcf380", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 7, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c94bc65c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-05c7a88322", ContainerID:"6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172", Pod:"calico-apiserver-7c94bc65c5-xbp7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califb139d0a380", MAC:"aa:8d:94:c4:c7:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:07:50.595343 containerd[2056]: 2025-11-05 15:07:50.590 [INFO][5285] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172" Namespace="calico-apiserver" Pod="calico-apiserver-7c94bc65c5-xbp7r" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--xbp7r-eth0" Nov 5 15:07:50.598148 containerd[2056]: time="2025-11-05T15:07:50.598110849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84577cbdbb-g8xvq,Uid:918f7e6e-ae2a-455d-8758-01b9af03afc5,Namespace:calico-system,Attempt:0,} returns sandbox id \"416d4411a7b772699b9913a2d128fddce978aad141bc97e6bfc8e7891db9a27a\"" Nov 5 15:07:50.603756 containerd[2056]: time="2025-11-05T15:07:50.603682883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:07:50.644633 containerd[2056]: time="2025-11-05T15:07:50.644508148Z" level=info msg="connecting to shim 6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172" address="unix:///run/containerd/s/4234ea50fb4f2ea3e4f02584a9f59d2b6fe7186e2773b209572e930b30e89d07" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:07:50.663511 systemd[1]: Started cri-containerd-6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172.scope - libcontainer container 6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172. 
Nov 5 15:07:50.696783 containerd[2056]: time="2025-11-05T15:07:50.696740423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c94bc65c5-xbp7r,Uid:38f05b09-e539-4b0f-aa00-1af242dcf380,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6563728dc08f8305efc330ac3505eb5a19980cfbfe0bf77c7e7e1d9cc6784172\"" Nov 5 15:07:50.911205 containerd[2056]: time="2025-11-05T15:07:50.910857232Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:07:50.914041 containerd[2056]: time="2025-11-05T15:07:50.914007061Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:07:50.914130 containerd[2056]: time="2025-11-05T15:07:50.914095127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:07:50.914363 kubelet[3589]: E1105 15:07:50.914315 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:07:50.914624 kubelet[3589]: E1105 15:07:50.914442 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:07:50.914679 kubelet[3589]: E1105 15:07:50.914651 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-84577cbdbb-g8xvq_calico-system(918f7e6e-ae2a-455d-8758-01b9af03afc5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:07:50.914728 kubelet[3589]: E1105 15:07:50.914690 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84577cbdbb-g8xvq" podUID="918f7e6e-ae2a-455d-8758-01b9af03afc5" Nov 5 15:07:50.915006 containerd[2056]: time="2025-11-05T15:07:50.914916049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:07:51.217455 containerd[2056]: time="2025-11-05T15:07:51.217164003Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:07:51.223884 containerd[2056]: time="2025-11-05T15:07:51.223777147Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:07:51.223884 containerd[2056]: time="2025-11-05T15:07:51.223829772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:07:51.224205 kubelet[3589]: E1105 15:07:51.224159 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:07:51.224262 kubelet[3589]: E1105 15:07:51.224211 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:07:51.224306 kubelet[3589]: E1105 15:07:51.224279 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c94bc65c5-xbp7r_calico-apiserver(38f05b09-e539-4b0f-aa00-1af242dcf380): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:07:51.224338 kubelet[3589]: E1105 15:07:51.224313 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-xbp7r" podUID="38f05b09-e539-4b0f-aa00-1af242dcf380" Nov 5 15:07:51.327572 containerd[2056]: time="2025-11-05T15:07:51.327295043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lzrs9,Uid:397fff19-ce83-432c-bb32-ac5bd613b927,Namespace:kube-system,Attempt:0,}" Nov 5 15:07:51.334442 containerd[2056]: time="2025-11-05T15:07:51.334407598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c94bc65c5-b4vch,Uid:5f8444f4-0b9f-4af6-a67a-d71e5d4f1309,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:07:51.416525 systemd-networkd[1661]: cali039104339af: Gained IPv6LL Nov 5 15:07:51.453707 systemd-networkd[1661]: cali3e758949859: Link UP Nov 5 15:07:51.454400 systemd-networkd[1661]: cali3e758949859: Gained carrier Nov 5 15:07:51.471261 containerd[2056]: 2025-11-05 15:07:51.374 [INFO][5433] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--lzrs9-eth0 coredns-66bc5c9577- kube-system 397fff19-ce83-432c-bb32-ac5bd613b927 861 0 2025-11-05 15:07:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487.0.1-a-05c7a88322 coredns-66bc5c9577-lzrs9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3e758949859 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb" Namespace="kube-system" Pod="coredns-66bc5c9577-lzrs9" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--lzrs9-" Nov 5 15:07:51.471261 containerd[2056]: 2025-11-05 15:07:51.375 [INFO][5433] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb" Namespace="kube-system" Pod="coredns-66bc5c9577-lzrs9" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--lzrs9-eth0" Nov 5 15:07:51.471261 containerd[2056]: 2025-11-05 15:07:51.409 [INFO][5457] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb" HandleID="k8s-pod-network.cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb" Workload="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--lzrs9-eth0" Nov 5 15:07:51.471261 containerd[2056]: 2025-11-05 15:07:51.409 [INFO][5457] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb" HandleID="k8s-pod-network.cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb" Workload="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--lzrs9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c0fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487.0.1-a-05c7a88322", "pod":"coredns-66bc5c9577-lzrs9", "timestamp":"2025-11-05 15:07:51.409185244 +0000 UTC"}, Hostname:"ci-4487.0.1-a-05c7a88322", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:07:51.471261 containerd[2056]: 2025-11-05 15:07:51.409 [INFO][5457] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:07:51.471261 containerd[2056]: 2025-11-05 15:07:51.409 [INFO][5457] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:07:51.471261 containerd[2056]: 2025-11-05 15:07:51.409 [INFO][5457] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-a-05c7a88322' Nov 5 15:07:51.471261 containerd[2056]: 2025-11-05 15:07:51.414 [INFO][5457] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:51.471261 containerd[2056]: 2025-11-05 15:07:51.419 [INFO][5457] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:51.471261 containerd[2056]: 2025-11-05 15:07:51.422 [INFO][5457] ipam/ipam.go 511: Trying affinity for 192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:51.471261 containerd[2056]: 2025-11-05 15:07:51.423 [INFO][5457] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:51.471261 containerd[2056]: 2025-11-05 15:07:51.425 [INFO][5457] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:51.471261 containerd[2056]: 2025-11-05 15:07:51.425 [INFO][5457] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.128/26 handle="k8s-pod-network.cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:51.471261 containerd[2056]: 2025-11-05 15:07:51.426 [INFO][5457] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb Nov 5 15:07:51.471261 containerd[2056]: 2025-11-05 15:07:51.431 [INFO][5457] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.128/26 handle="k8s-pod-network.cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:51.471261 containerd[2056]: 2025-11-05 15:07:51.439 [INFO][5457] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.136/26] block=192.168.50.128/26 handle="k8s-pod-network.cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:51.471261 containerd[2056]: 2025-11-05 15:07:51.439 [INFO][5457] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.136/26] handle="k8s-pod-network.cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:51.471261 containerd[2056]: 2025-11-05 15:07:51.439 [INFO][5457] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:07:51.471261 containerd[2056]: 2025-11-05 15:07:51.439 [INFO][5457] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.136/26] IPv6=[] ContainerID="cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb" HandleID="k8s-pod-network.cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb" Workload="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--lzrs9-eth0" Nov 5 15:07:51.472643 containerd[2056]: 2025-11-05 15:07:51.442 [INFO][5433] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb" Namespace="kube-system" Pod="coredns-66bc5c9577-lzrs9" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--lzrs9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--lzrs9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"397fff19-ce83-432c-bb32-ac5bd613b927", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-05c7a88322", ContainerID:"", Pod:"coredns-66bc5c9577-lzrs9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e758949859", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:07:51.472643 containerd[2056]: 2025-11-05 15:07:51.442 [INFO][5433] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.136/32] ContainerID="cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb" Namespace="kube-system" Pod="coredns-66bc5c9577-lzrs9" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--lzrs9-eth0" Nov 5 15:07:51.472643 containerd[2056]: 2025-11-05 15:07:51.442 [INFO][5433] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e758949859 ContainerID="cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb" Namespace="kube-system" Pod="coredns-66bc5c9577-lzrs9" 
WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--lzrs9-eth0" Nov 5 15:07:51.472643 containerd[2056]: 2025-11-05 15:07:51.455 [INFO][5433] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb" Namespace="kube-system" Pod="coredns-66bc5c9577-lzrs9" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--lzrs9-eth0" Nov 5 15:07:51.472643 containerd[2056]: 2025-11-05 15:07:51.455 [INFO][5433] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb" Namespace="kube-system" Pod="coredns-66bc5c9577-lzrs9" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--lzrs9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--lzrs9-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"397fff19-ce83-432c-bb32-ac5bd613b927", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 7, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-05c7a88322", ContainerID:"cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb", Pod:"coredns-66bc5c9577-lzrs9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e758949859", MAC:"12:0e:a9:52:27:91", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:07:51.473228 containerd[2056]: 2025-11-05 15:07:51.468 [INFO][5433] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb" Namespace="kube-system" Pod="coredns-66bc5c9577-lzrs9" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-coredns--66bc5c9577--lzrs9-eth0" Nov 5 15:07:51.477290 kubelet[3589]: E1105 15:07:51.477257 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-xbp7r" podUID="38f05b09-e539-4b0f-aa00-1af242dcf380" Nov 5 15:07:51.482764 kubelet[3589]: E1105 15:07:51.482284 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84577cbdbb-g8xvq" podUID="918f7e6e-ae2a-455d-8758-01b9af03afc5" Nov 5 15:07:51.484665 kubelet[3589]: E1105 15:07:51.482549 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nr498" podUID="5175072d-c97d-4e97-bbe9-4eb6c98f1e6a" Nov 5 15:07:51.523385 containerd[2056]: time="2025-11-05T15:07:51.523326179Z" level=info msg="connecting to shim cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb" address="unix:///run/containerd/s/495f66be324d514cb1a0e2d61f0fb75588d91199aa85178041c0b67716140a27" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:07:51.545208 systemd-networkd[1661]: calif58a36fbc45: Gained IPv6LL Nov 5 15:07:51.565500 systemd[1]: Started cri-containerd-cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb.scope - libcontainer container cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb. 
Nov 5 15:07:51.578873 systemd-networkd[1661]: cali00fb489b9a0: Link UP Nov 5 15:07:51.580794 systemd-networkd[1661]: cali00fb489b9a0: Gained carrier Nov 5 15:07:51.609642 containerd[2056]: 2025-11-05 15:07:51.373 [INFO][5443] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--b4vch-eth0 calico-apiserver-7c94bc65c5- calico-apiserver 5f8444f4-0b9f-4af6-a67a-d71e5d4f1309 864 0 2025-11-05 15:07:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c94bc65c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.1-a-05c7a88322 calico-apiserver-7c94bc65c5-b4vch eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali00fb489b9a0 [] [] }} ContainerID="9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c" Namespace="calico-apiserver" Pod="calico-apiserver-7c94bc65c5-b4vch" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--b4vch-" Nov 5 15:07:51.609642 containerd[2056]: 2025-11-05 15:07:51.373 [INFO][5443] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c" Namespace="calico-apiserver" Pod="calico-apiserver-7c94bc65c5-b4vch" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--b4vch-eth0" Nov 5 15:07:51.609642 containerd[2056]: 2025-11-05 15:07:51.409 [INFO][5458] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c" HandleID="k8s-pod-network.9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c" Workload="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--b4vch-eth0" Nov 5 15:07:51.609642 containerd[2056]: 2025-11-05 15:07:51.409 [INFO][5458] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c" HandleID="k8s-pod-network.9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c" Workload="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--b4vch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb0f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.1-a-05c7a88322", "pod":"calico-apiserver-7c94bc65c5-b4vch", "timestamp":"2025-11-05 15:07:51.409035688 +0000 UTC"}, Hostname:"ci-4487.0.1-a-05c7a88322", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:07:51.609642 containerd[2056]: 2025-11-05 15:07:51.409 [INFO][5458] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:07:51.609642 containerd[2056]: 2025-11-05 15:07:51.439 [INFO][5458] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:07:51.609642 containerd[2056]: 2025-11-05 15:07:51.440 [INFO][5458] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-a-05c7a88322' Nov 5 15:07:51.609642 containerd[2056]: 2025-11-05 15:07:51.520 [INFO][5458] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:51.609642 containerd[2056]: 2025-11-05 15:07:51.528 [INFO][5458] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:51.609642 containerd[2056]: 2025-11-05 15:07:51.548 [INFO][5458] ipam/ipam.go 511: Trying affinity for 192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:51.609642 containerd[2056]: 2025-11-05 15:07:51.551 [INFO][5458] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:51.609642 containerd[2056]: 2025-11-05 15:07:51.554 [INFO][5458] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.128/26 host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:51.609642 containerd[2056]: 2025-11-05 15:07:51.554 [INFO][5458] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.128/26 handle="k8s-pod-network.9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:51.609642 containerd[2056]: 2025-11-05 15:07:51.557 [INFO][5458] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c Nov 5 15:07:51.609642 containerd[2056]: 2025-11-05 15:07:51.564 [INFO][5458] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.128/26 handle="k8s-pod-network.9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:51.609642 containerd[2056]: 2025-11-05 15:07:51.574 [INFO][5458] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.137/26] block=192.168.50.128/26 handle="k8s-pod-network.9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:51.609642 containerd[2056]: 2025-11-05 15:07:51.574 [INFO][5458] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.137/26] handle="k8s-pod-network.9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c" host="ci-4487.0.1-a-05c7a88322" Nov 5 15:07:51.609642 containerd[2056]: 2025-11-05 15:07:51.574 [INFO][5458] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:07:51.609642 containerd[2056]: 2025-11-05 15:07:51.574 [INFO][5458] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.137/26] IPv6=[] ContainerID="9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c" HandleID="k8s-pod-network.9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c" Workload="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--b4vch-eth0" Nov 5 15:07:51.610112 containerd[2056]: 2025-11-05 15:07:51.577 [INFO][5443] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c" Namespace="calico-apiserver" Pod="calico-apiserver-7c94bc65c5-b4vch" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--b4vch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--b4vch-eth0", GenerateName:"calico-apiserver-7c94bc65c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"5f8444f4-0b9f-4af6-a67a-d71e5d4f1309", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 7, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c94bc65c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-05c7a88322", ContainerID:"", Pod:"calico-apiserver-7c94bc65c5-b4vch", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali00fb489b9a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:07:51.610112 containerd[2056]: 2025-11-05 15:07:51.577 [INFO][5443] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.137/32] ContainerID="9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c" Namespace="calico-apiserver" Pod="calico-apiserver-7c94bc65c5-b4vch" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--b4vch-eth0" Nov 5 15:07:51.610112 containerd[2056]: 2025-11-05 15:07:51.577 [INFO][5443] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali00fb489b9a0 ContainerID="9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c" Namespace="calico-apiserver" Pod="calico-apiserver-7c94bc65c5-b4vch" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--b4vch-eth0" Nov 5 15:07:51.610112 containerd[2056]: 2025-11-05 15:07:51.581 [INFO][5443] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c" Namespace="calico-apiserver" Pod="calico-apiserver-7c94bc65c5-b4vch" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--b4vch-eth0" Nov 5 15:07:51.610112 containerd[2056]: 2025-11-05 15:07:51.583 [INFO][5443] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c" Namespace="calico-apiserver" Pod="calico-apiserver-7c94bc65c5-b4vch" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--b4vch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--b4vch-eth0", GenerateName:"calico-apiserver-7c94bc65c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"5f8444f4-0b9f-4af6-a67a-d71e5d4f1309", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 7, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c94bc65c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-a-05c7a88322", ContainerID:"9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c", Pod:"calico-apiserver-7c94bc65c5-b4vch", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali00fb489b9a0", MAC:"32:6a:6d:e7:5a:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:07:51.610112 containerd[2056]: 2025-11-05 15:07:51.603 [INFO][5443] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c" Namespace="calico-apiserver" Pod="calico-apiserver-7c94bc65c5-b4vch" WorkloadEndpoint="ci--4487.0.1--a--05c7a88322-k8s-calico--apiserver--7c94bc65c5--b4vch-eth0" Nov 5 15:07:51.621045 containerd[2056]: time="2025-11-05T15:07:51.621010147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lzrs9,Uid:397fff19-ce83-432c-bb32-ac5bd613b927,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb\"" Nov 5 15:07:51.629072 containerd[2056]: time="2025-11-05T15:07:51.628976177Z" level=info msg="CreateContainer within sandbox \"cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:07:51.657862 containerd[2056]: time="2025-11-05T15:07:51.657818998Z" level=info msg="connecting to shim 9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c" address="unix:///run/containerd/s/2812f4e89e6f7dcc8c8488283597ccad4965563ae0bd957b57fab57c48193ef2" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:07:51.658089 containerd[2056]: time="2025-11-05T15:07:51.658067971Z" level=info msg="Container 3633c9649704cb1d45e0c621b341315c0105de9c5f765ead337fb208cb58d51d: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:07:51.675123 containerd[2056]: time="2025-11-05T15:07:51.675080670Z" level=info msg="CreateContainer within sandbox 
\"cf6573b7cdd5fa7523addc5ff5db91c92150c1ac5deb8d5a3fa223232aa9cbdb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3633c9649704cb1d45e0c621b341315c0105de9c5f765ead337fb208cb58d51d\"" Nov 5 15:07:51.677868 containerd[2056]: time="2025-11-05T15:07:51.677807929Z" level=info msg="StartContainer for \"3633c9649704cb1d45e0c621b341315c0105de9c5f765ead337fb208cb58d51d\"" Nov 5 15:07:51.681266 containerd[2056]: time="2025-11-05T15:07:51.681221276Z" level=info msg="connecting to shim 3633c9649704cb1d45e0c621b341315c0105de9c5f765ead337fb208cb58d51d" address="unix:///run/containerd/s/495f66be324d514cb1a0e2d61f0fb75588d91199aa85178041c0b67716140a27" protocol=ttrpc version=3 Nov 5 15:07:51.684575 systemd[1]: Started cri-containerd-9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c.scope - libcontainer container 9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c. Nov 5 15:07:51.704464 systemd[1]: Started cri-containerd-3633c9649704cb1d45e0c621b341315c0105de9c5f765ead337fb208cb58d51d.scope - libcontainer container 3633c9649704cb1d45e0c621b341315c0105de9c5f765ead337fb208cb58d51d. Nov 5 15:07:51.728994 containerd[2056]: time="2025-11-05T15:07:51.728826369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c94bc65c5-b4vch,Uid:5f8444f4-0b9f-4af6-a67a-d71e5d4f1309,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9a6355df604c710ecddd3904d8a1abebb1f84566f5884eb54b6094f27cc0755c\"" Nov 5 15:07:51.731786 containerd[2056]: time="2025-11-05T15:07:51.731733032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:07:51.743712 containerd[2056]: time="2025-11-05T15:07:51.743636316Z" level=info msg="StartContainer for \"3633c9649704cb1d45e0c621b341315c0105de9c5f765ead337fb208cb58d51d\" returns successfully" Nov 5 15:07:51.928552 systemd-networkd[1661]: califb139d0a380: Gained IPv6LL Nov 5 15:07:52.012818 containerd[2056]: time="2025-11-05T15:07:52.012738900Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:07:52.015763 containerd[2056]: time="2025-11-05T15:07:52.015720293Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:07:52.015828 containerd[2056]: time="2025-11-05T15:07:52.015809487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:07:52.016047 kubelet[3589]: E1105 15:07:52.016000 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:07:52.016100 kubelet[3589]: E1105 15:07:52.016055 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:07:52.016148 kubelet[3589]: E1105 15:07:52.016126 3589 kuberuntime_manager.go:1449] 
"Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c94bc65c5-b4vch_calico-apiserver(5f8444f4-0b9f-4af6-a67a-d71e5d4f1309): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:07:52.016180 kubelet[3589]: E1105 15:07:52.016160 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-b4vch" podUID="5f8444f4-0b9f-4af6-a67a-d71e5d4f1309" Nov 5 15:07:52.484672 kubelet[3589]: E1105 15:07:52.484441 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-b4vch" podUID="5f8444f4-0b9f-4af6-a67a-d71e5d4f1309" Nov 5 15:07:52.489252 kubelet[3589]: E1105 15:07:52.489202 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-xbp7r" podUID="38f05b09-e539-4b0f-aa00-1af242dcf380" Nov 5 15:07:52.490591 kubelet[3589]: E1105 15:07:52.490562 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84577cbdbb-g8xvq" podUID="918f7e6e-ae2a-455d-8758-01b9af03afc5" Nov 5 15:07:52.824700 systemd-networkd[1661]: cali3e758949859: Gained IPv6LL Nov 5 15:07:53.080543 systemd-networkd[1661]: cali00fb489b9a0: Gained IPv6LL Nov 5 15:07:53.490156 kubelet[3589]: E1105 15:07:53.489748 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-b4vch" podUID="5f8444f4-0b9f-4af6-a67a-d71e5d4f1309" Nov 5 15:07:53.505450 kubelet[3589]: I1105 15:07:53.505348 3589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lzrs9" podStartSLOduration=41.505332444 podStartE2EDuration="41.505332444s" podCreationTimestamp="2025-11-05 15:07:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:07:52.541059079 +0000 UTC m=+46.311006340" watchObservedRunningTime="2025-11-05 15:07:53.505332444 +0000 UTC m=+47.275279689" Nov 5 15:07:58.324553 containerd[2056]: time="2025-11-05T15:07:58.324202998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:07:58.577110 containerd[2056]: time="2025-11-05T15:07:58.576967816Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:07:58.580786 containerd[2056]: time="2025-11-05T15:07:58.580737624Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:07:58.580888 containerd[2056]: time="2025-11-05T15:07:58.580831242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:07:58.581016 kubelet[3589]: E1105 15:07:58.580974 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:07:58.581278 kubelet[3589]: E1105 15:07:58.581023 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:07:58.581278 kubelet[3589]: E1105 15:07:58.581133 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-66c5db86d6-4vr9m_calico-system(7b333bcc-6bcd-4c30-b1c0-19548616455c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:07:58.582715 containerd[2056]: time="2025-11-05T15:07:58.582692629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:07:58.877400 containerd[2056]: time="2025-11-05T15:07:58.876961049Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:07:58.880317 containerd[2056]: time="2025-11-05T15:07:58.880211541Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:07:58.880317 containerd[2056]: time="2025-11-05T15:07:58.880233501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:07:58.880746 kubelet[3589]: E1105 15:07:58.880648 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:07:58.880810 kubelet[3589]: E1105 15:07:58.880773 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:07:58.880878 kubelet[3589]: E1105 15:07:58.880855 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66c5db86d6-4vr9m_calico-system(7b333bcc-6bcd-4c30-b1c0-19548616455c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:07:58.881012 kubelet[3589]: E1105 15:07:58.880893 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c5db86d6-4vr9m" podUID="7b333bcc-6bcd-4c30-b1c0-19548616455c" Nov 5 15:08:02.323056 containerd[2056]: time="2025-11-05T15:08:02.323012649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:08:02.662752 containerd[2056]: time="2025-11-05T15:08:02.662383058Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:08:02.665819 containerd[2056]: time="2025-11-05T15:08:02.665647317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:08:02.665819 containerd[2056]: time="2025-11-05T15:08:02.665680869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:08:02.666183 kubelet[3589]: E1105 15:08:02.665844 3589 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:08:02.666183 kubelet[3589]: E1105 15:08:02.665881 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:08:02.666183 kubelet[3589]: E1105 15:08:02.665948 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65d654bb8-d6fmx_calico-apiserver(c1f198dc-261d-4ac5-8860-91734a6c009d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:08:02.666183 kubelet[3589]: E1105 15:08:02.665973 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65d654bb8-d6fmx" podUID="c1f198dc-261d-4ac5-8860-91734a6c009d" Nov 5 15:08:03.322650 containerd[2056]: time="2025-11-05T15:08:03.322538547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:08:03.630055 containerd[2056]: time="2025-11-05T15:08:03.629916258Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:08:03.633008 containerd[2056]: time="2025-11-05T15:08:03.632965889Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:08:03.633420 containerd[2056]: time="2025-11-05T15:08:03.633046971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:08:03.633468 kubelet[3589]: E1105 15:08:03.633193 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:08:03.633468 kubelet[3589]: E1105 15:08:03.633241 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:08:03.633468 kubelet[3589]: E1105 15:08:03.633310 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-84577cbdbb-g8xvq_calico-system(918f7e6e-ae2a-455d-8758-01b9af03afc5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:08:03.633468 kubelet[3589]: E1105 15:08:03.633338 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84577cbdbb-g8xvq" podUID="918f7e6e-ae2a-455d-8758-01b9af03afc5" Nov 5 15:08:04.324342 containerd[2056]: time="2025-11-05T15:08:04.323909286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:08:04.717865 containerd[2056]: time="2025-11-05T15:08:04.717742725Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:08:04.721210 containerd[2056]: time="2025-11-05T15:08:04.721101658Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:08:04.721210 containerd[2056]: time="2025-11-05T15:08:04.721189451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:08:04.721481 kubelet[3589]: E1105 15:08:04.721451 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:08:04.721847 kubelet[3589]: E1105 15:08:04.721709 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:08:04.721847 kubelet[3589]: E1105 15:08:04.721794 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-lg4lc_calico-system(460d4776-c8b3-4dec-911d-f1ebdf0cfa3b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:08:04.721847 kubelet[3589]: E1105 15:08:04.721820 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lg4lc" podUID="460d4776-c8b3-4dec-911d-f1ebdf0cfa3b" Nov 5 15:08:05.322649 containerd[2056]: time="2025-11-05T15:08:05.322542233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:08:05.709383 containerd[2056]: time="2025-11-05T15:08:05.709243757Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:08:05.712146 containerd[2056]: time="2025-11-05T15:08:05.712110239Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:08:05.712278 containerd[2056]: time="2025-11-05T15:08:05.712135320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:08:05.712376 kubelet[3589]: E1105 15:08:05.712318 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:08:05.712424 kubelet[3589]: E1105 15:08:05.712387 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:08:05.712470 kubelet[3589]: E1105 15:08:05.712452 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c94bc65c5-xbp7r_calico-apiserver(38f05b09-e539-4b0f-aa00-1af242dcf380): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:08:05.712500 kubelet[3589]: E1105 15:08:05.712482 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-xbp7r" podUID="38f05b09-e539-4b0f-aa00-1af242dcf380" Nov 5 15:08:06.323497 containerd[2056]: time="2025-11-05T15:08:06.323463365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:08:06.630972 containerd[2056]: time="2025-11-05T15:08:06.630841739Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:08:06.634438 containerd[2056]: time="2025-11-05T15:08:06.634396861Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:08:06.634482 containerd[2056]: time="2025-11-05T15:08:06.634475863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:08:06.634703 kubelet[3589]: E1105 15:08:06.634633 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:08:06.634703 kubelet[3589]: E1105 15:08:06.634687 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:08:06.635273 kubelet[3589]: E1105 15:08:06.635082 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-nr498_calico-system(5175072d-c97d-4e97-bbe9-4eb6c98f1e6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:08:06.637878 containerd[2056]: time="2025-11-05T15:08:06.637650480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:08:06.940300 containerd[2056]: time="2025-11-05T15:08:06.940171758Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:08:06.943813 containerd[2056]: time="2025-11-05T15:08:06.943755929Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:08:06.943813 containerd[2056]: time="2025-11-05T15:08:06.943781841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:08:06.943999 kubelet[3589]: E1105 15:08:06.943954 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:08:06.944038 kubelet[3589]: E1105 15:08:06.943999 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:08:06.944096 kubelet[3589]: E1105 15:08:06.944074 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-nr498_calico-system(5175072d-c97d-4e97-bbe9-4eb6c98f1e6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:08:06.944155 kubelet[3589]: E1105 15:08:06.944111 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nr498" podUID="5175072d-c97d-4e97-bbe9-4eb6c98f1e6a" Nov 5 15:08:07.323181 containerd[2056]: time="2025-11-05T15:08:07.322541248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:08:07.578730 containerd[2056]: time="2025-11-05T15:08:07.578609189Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:08:07.581589 containerd[2056]: time="2025-11-05T15:08:07.581535393Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:08:07.581664 containerd[2056]: time="2025-11-05T15:08:07.581632939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:08:07.581804 kubelet[3589]: E1105 15:08:07.581766 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:08:07.581849 kubelet[3589]: E1105 15:08:07.581808 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:08:07.581896 kubelet[3589]: E1105 15:08:07.581877 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c94bc65c5-b4vch_calico-apiserver(5f8444f4-0b9f-4af6-a67a-d71e5d4f1309): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:08:07.581934 kubelet[3589]: E1105 15:08:07.581907 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-b4vch" podUID="5f8444f4-0b9f-4af6-a67a-d71e5d4f1309" Nov 5 15:08:10.322989 kubelet[3589]: E1105 15:08:10.322911 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c5db86d6-4vr9m" podUID="7b333bcc-6bcd-4c30-b1c0-19548616455c" Nov 5 15:08:12.502901 containerd[2056]: time="2025-11-05T15:08:12.502857702Z" level=info msg="TaskExit event in podsandbox handler container_id:\"20607295be6c34d0e78530a06ddff2fadef7a4d6779a246f9622052b0d6bb9b9\" id:\"3a94d23773cba71311cb17581396dbf4e69b86882cdedca2bd1d2c73d24d66f5\" pid:5653 exited_at:{seconds:1762355292 nanos:502576336}" Nov 5 15:08:15.322206 kubelet[3589]: E1105 15:08:15.322120 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65d654bb8-d6fmx" podUID="c1f198dc-261d-4ac5-8860-91734a6c009d" Nov 5 15:08:17.323545 kubelet[3589]: E1105 15:08:17.323493 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84577cbdbb-g8xvq" podUID="918f7e6e-ae2a-455d-8758-01b9af03afc5" Nov 5 15:08:18.323656 kubelet[3589]: E1105 15:08:18.323542 3589 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-xbp7r" podUID="38f05b09-e539-4b0f-aa00-1af242dcf380" Nov 5 15:08:18.325752 kubelet[3589]: E1105 15:08:18.325247 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nr498" podUID="5175072d-c97d-4e97-bbe9-4eb6c98f1e6a" Nov 5 15:08:20.325386 kubelet[3589]: E1105 15:08:20.324561 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lg4lc" podUID="460d4776-c8b3-4dec-911d-f1ebdf0cfa3b" Nov 5 15:08:20.327834 kubelet[3589]: E1105 15:08:20.327601 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-b4vch" podUID="5f8444f4-0b9f-4af6-a67a-d71e5d4f1309" Nov 5 15:08:20.413700 systemd[1]: Started sshd@7-10.200.20.11:22-10.200.16.10:42728.service - OpenSSH per-connection server daemon (10.200.16.10:42728). Nov 5 15:08:20.876380 sshd[5673]: Accepted publickey for core from 10.200.16.10 port 42728 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:08:20.879183 sshd-session[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:08:20.887417 systemd-logind[2029]: New session 10 of user core. Nov 5 15:08:20.891477 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 5 15:08:21.254391 sshd[5676]: Connection closed by 10.200.16.10 port 42728 Nov 5 15:08:21.254571 sshd-session[5673]: pam_unix(sshd:session): session closed for user core Nov 5 15:08:21.262751 systemd[1]: sshd@7-10.200.20.11:22-10.200.16.10:42728.service: Deactivated successfully. Nov 5 15:08:21.264106 systemd-logind[2029]: Session 10 logged out. Waiting for processes to exit. Nov 5 15:08:21.264716 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 15:08:21.265956 systemd-logind[2029]: Removed session 10. Nov 5 15:08:24.323656 containerd[2056]: time="2025-11-05T15:08:24.323209051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:08:24.620753 containerd[2056]: time="2025-11-05T15:08:24.620496481Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:08:24.623479 containerd[2056]: time="2025-11-05T15:08:24.623419169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:08:24.623479 containerd[2056]: time="2025-11-05T15:08:24.623453482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:08:24.623659 kubelet[3589]: E1105 15:08:24.623608 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:08:24.624005 kubelet[3589]: E1105 15:08:24.623664 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:08:24.624005 kubelet[3589]: E1105 15:08:24.623732 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-66c5db86d6-4vr9m_calico-system(7b333bcc-6bcd-4c30-b1c0-19548616455c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:08:24.624757 containerd[2056]: time="2025-11-05T15:08:24.624726910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:08:25.083441 containerd[2056]: time="2025-11-05T15:08:25.083388577Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:08:25.086543 containerd[2056]: time="2025-11-05T15:08:25.086495342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:08:25.087384 containerd[2056]: time="2025-11-05T15:08:25.086582431Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:08:25.087433 kubelet[3589]: E1105 15:08:25.086725 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:08:25.087433 kubelet[3589]: E1105 15:08:25.086773 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:08:25.087607 kubelet[3589]: E1105 15:08:25.087585 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66c5db86d6-4vr9m_calico-system(7b333bcc-6bcd-4c30-b1c0-19548616455c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:08:25.088013 kubelet[3589]: E1105 15:08:25.087989 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c5db86d6-4vr9m" podUID="7b333bcc-6bcd-4c30-b1c0-19548616455c" Nov 5 15:08:26.325900 containerd[2056]: time="2025-11-05T15:08:26.325556988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:08:26.341402 systemd[1]: Started sshd@8-10.200.20.11:22-10.200.16.10:42738.service - OpenSSH per-connection server daemon (10.200.16.10:42738). 
Nov 5 15:08:26.586719 containerd[2056]: time="2025-11-05T15:08:26.586594789Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:08:26.590168 containerd[2056]: time="2025-11-05T15:08:26.590065594Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:08:26.590168 containerd[2056]: time="2025-11-05T15:08:26.590120859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:08:26.591535 kubelet[3589]: E1105 15:08:26.591493 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:08:26.592029 kubelet[3589]: E1105 15:08:26.591856 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:08:26.592029 kubelet[3589]: E1105 15:08:26.591962 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65d654bb8-d6fmx_calico-apiserver(c1f198dc-261d-4ac5-8860-91734a6c009d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:08:26.592029 kubelet[3589]: E1105 15:08:26.591998 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65d654bb8-d6fmx" podUID="c1f198dc-261d-4ac5-8860-91734a6c009d" Nov 5 15:08:26.812554 sshd[5695]: Accepted publickey for core from 10.200.16.10 port 42738 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:08:26.815461 sshd-session[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:08:26.821681 systemd-logind[2029]: New session 11 of user core. Nov 5 15:08:26.826527 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 15:08:27.190894 sshd[5698]: Connection closed by 10.200.16.10 port 42738 Nov 5 15:08:27.191519 sshd-session[5695]: pam_unix(sshd:session): session closed for user core Nov 5 15:08:27.194944 systemd[1]: sshd@8-10.200.20.11:22-10.200.16.10:42738.service: Deactivated successfully. Nov 5 15:08:27.196468 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 15:08:27.198394 systemd-logind[2029]: Session 11 logged out. Waiting for processes to exit. 
Nov 5 15:08:27.199310 systemd-logind[2029]: Removed session 11. Nov 5 15:08:29.324038 containerd[2056]: time="2025-11-05T15:08:29.323803662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:08:29.604789 containerd[2056]: time="2025-11-05T15:08:29.604529296Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:08:29.607605 containerd[2056]: time="2025-11-05T15:08:29.607558043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:08:29.607897 containerd[2056]: time="2025-11-05T15:08:29.607620708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:08:29.608123 kubelet[3589]: E1105 15:08:29.608053 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:08:29.608123 kubelet[3589]: E1105 15:08:29.608100 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:08:29.609304 kubelet[3589]: E1105 15:08:29.608400 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-84577cbdbb-g8xvq_calico-system(918f7e6e-ae2a-455d-8758-01b9af03afc5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:08:29.609304 kubelet[3589]: E1105 15:08:29.608446 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84577cbdbb-g8xvq" podUID="918f7e6e-ae2a-455d-8758-01b9af03afc5" Nov 5 15:08:31.324500 containerd[2056]: time="2025-11-05T15:08:31.324086929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:08:31.651929 containerd[2056]: time="2025-11-05T15:08:31.651805338Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:08:31.655782 containerd[2056]: time="2025-11-05T15:08:31.655675407Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:08:31.656112 containerd[2056]: time="2025-11-05T15:08:31.655678071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:08:31.656298 kubelet[3589]: E1105 15:08:31.656260 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:08:31.657380 kubelet[3589]: E1105 15:08:31.656343 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:08:31.657380 kubelet[3589]: E1105 15:08:31.656642 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c94bc65c5-xbp7r_calico-apiserver(38f05b09-e539-4b0f-aa00-1af242dcf380): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:08:31.657465 kubelet[3589]: E1105 15:08:31.657436 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-xbp7r" podUID="38f05b09-e539-4b0f-aa00-1af242dcf380" Nov 5 15:08:32.274197 systemd[1]: Started sshd@9-10.200.20.11:22-10.200.16.10:38658.service - OpenSSH per-connection server daemon (10.200.16.10:38658). Nov 5 15:08:32.324529 containerd[2056]: time="2025-11-05T15:08:32.324367497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:08:32.732952 sshd[5712]: Accepted publickey for core from 10.200.16.10 port 38658 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:08:32.734241 sshd-session[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:08:32.739847 systemd-logind[2029]: New session 12 of user core. Nov 5 15:08:32.745571 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 5 15:08:32.867189 containerd[2056]: time="2025-11-05T15:08:32.866994212Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:08:32.870042 containerd[2056]: time="2025-11-05T15:08:32.869932709Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:08:32.870042 containerd[2056]: time="2025-11-05T15:08:32.870021959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:08:32.870739 kubelet[3589]: E1105 15:08:32.870547 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:08:32.870739 kubelet[3589]: E1105 15:08:32.870603 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:08:32.870739 kubelet[3589]: E1105 15:08:32.870675 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c94bc65c5-b4vch_calico-apiserver(5f8444f4-0b9f-4af6-a67a-d71e5d4f1309): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:08:32.870739 kubelet[3589]: E1105 15:08:32.870702 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-b4vch" podUID="5f8444f4-0b9f-4af6-a67a-d71e5d4f1309" Nov 5 15:08:33.118098 sshd[5715]: Connection closed by 10.200.16.10 port 38658 Nov 5 15:08:33.120559 sshd-session[5712]: pam_unix(sshd:session): session closed for user core Nov 5 15:08:33.124411 systemd-logind[2029]: Session 12 logged out. Waiting for processes to exit. Nov 5 15:08:33.124680 systemd[1]: sshd@9-10.200.20.11:22-10.200.16.10:38658.service: Deactivated successfully. Nov 5 15:08:33.128136 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 15:08:33.132094 systemd-logind[2029]: Removed session 12. Nov 5 15:08:33.201993 systemd[1]: Started sshd@10-10.200.20.11:22-10.200.16.10:38660.service - OpenSSH per-connection server daemon (10.200.16.10:38660). 
Nov 5 15:08:33.323548 containerd[2056]: time="2025-11-05T15:08:33.323418170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:08:33.604382 containerd[2056]: time="2025-11-05T15:08:33.604125076Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:08:33.607922 containerd[2056]: time="2025-11-05T15:08:33.607806141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:08:33.607922 containerd[2056]: time="2025-11-05T15:08:33.607810197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:08:33.608144 kubelet[3589]: E1105 15:08:33.608087 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:08:33.608196 kubelet[3589]: E1105 15:08:33.608152 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:08:33.608532 kubelet[3589]: E1105 15:08:33.608502 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-nr498_calico-system(5175072d-c97d-4e97-bbe9-4eb6c98f1e6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:08:33.611105 containerd[2056]: time="2025-11-05T15:08:33.610743294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:08:33.662792 sshd[5727]: Accepted publickey for core from 10.200.16.10 port 38660 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:08:33.664892 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:08:33.670476 systemd-logind[2029]: New session 13 of user core. Nov 5 15:08:33.679523 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 5 15:08:33.923433 containerd[2056]: time="2025-11-05T15:08:33.922869051Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:08:33.926015 containerd[2056]: time="2025-11-05T15:08:33.925968872Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:08:33.926136 containerd[2056]: time="2025-11-05T15:08:33.926105939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:08:33.926660 kubelet[3589]: E1105 15:08:33.926594 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:08:33.926660 kubelet[3589]: E1105 15:08:33.926644 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:08:33.928237 kubelet[3589]: E1105 15:08:33.928150 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-nr498_calico-system(5175072d-c97d-4e97-bbe9-4eb6c98f1e6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:08:33.928237 kubelet[3589]: E1105 15:08:33.928201 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nr498" podUID="5175072d-c97d-4e97-bbe9-4eb6c98f1e6a" Nov 5 15:08:34.074249 sshd[5730]: Connection closed by 10.200.16.10 port 38660 Nov 5 15:08:34.074880 sshd-session[5727]: pam_unix(sshd:session): session closed for user core Nov 5 15:08:34.080329 systemd[1]: sshd@10-10.200.20.11:22-10.200.16.10:38660.service: Deactivated successfully. Nov 5 15:08:34.083294 systemd[1]: session-13.scope: Deactivated successfully. 
Nov 5 15:08:34.085542 systemd-logind[2029]: Session 13 logged out. Waiting for processes to exit. Nov 5 15:08:34.087798 systemd-logind[2029]: Removed session 13. Nov 5 15:08:34.158803 systemd[1]: Started sshd@11-10.200.20.11:22-10.200.16.10:38674.service - OpenSSH per-connection server daemon (10.200.16.10:38674). Nov 5 15:08:34.324439 containerd[2056]: time="2025-11-05T15:08:34.323957100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:08:34.623646 sshd[5740]: Accepted publickey for core from 10.200.16.10 port 38674 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:08:34.624177 sshd-session[5740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:08:34.631970 systemd-logind[2029]: New session 14 of user core. Nov 5 15:08:34.634694 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 15:08:34.642161 containerd[2056]: time="2025-11-05T15:08:34.642012573Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:08:34.644995 containerd[2056]: time="2025-11-05T15:08:34.644886452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:08:34.644995 containerd[2056]: time="2025-11-05T15:08:34.644970222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:08:34.645178 kubelet[3589]: E1105 15:08:34.645118 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:08:34.645260 kubelet[3589]: E1105 15:08:34.645184 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:08:34.645279 kubelet[3589]: E1105 15:08:34.645268 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-lg4lc_calico-system(460d4776-c8b3-4dec-911d-f1ebdf0cfa3b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:08:34.645613 kubelet[3589]: E1105 15:08:34.645585 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lg4lc" podUID="460d4776-c8b3-4dec-911d-f1ebdf0cfa3b" Nov 5 15:08:35.013637 sshd[5743]: Connection closed by 10.200.16.10 port 38674 Nov 5 
15:08:35.016121 sshd-session[5740]: pam_unix(sshd:session): session closed for user core Nov 5 15:08:35.019475 systemd[1]: sshd@11-10.200.20.11:22-10.200.16.10:38674.service: Deactivated successfully. Nov 5 15:08:35.021287 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 15:08:35.022181 systemd-logind[2029]: Session 14 logged out. Waiting for processes to exit. Nov 5 15:08:35.024110 systemd-logind[2029]: Removed session 14. Nov 5 15:08:38.326868 kubelet[3589]: E1105 15:08:38.326806 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c5db86d6-4vr9m" podUID="7b333bcc-6bcd-4c30-b1c0-19548616455c" Nov 5 15:08:39.322395 kubelet[3589]: E1105 15:08:39.322191 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65d654bb8-d6fmx" podUID="c1f198dc-261d-4ac5-8860-91734a6c009d" Nov 5 15:08:40.096531 systemd[1]: Started sshd@12-10.200.20.11:22-10.200.16.10:59246.service - OpenSSH per-connection server daemon (10.200.16.10:59246). Nov 5 15:08:40.556152 sshd[5759]: Accepted publickey for core from 10.200.16.10 port 59246 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:08:40.558312 sshd-session[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:08:40.564413 systemd-logind[2029]: New session 15 of user core. Nov 5 15:08:40.569517 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 15:08:40.947165 sshd[5762]: Connection closed by 10.200.16.10 port 59246 Nov 5 15:08:40.946894 sshd-session[5759]: pam_unix(sshd:session): session closed for user core Nov 5 15:08:40.951267 systemd[1]: sshd@12-10.200.20.11:22-10.200.16.10:59246.service: Deactivated successfully. Nov 5 15:08:40.954987 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 15:08:40.956148 systemd-logind[2029]: Session 15 logged out. Waiting for processes to exit. Nov 5 15:08:40.957671 systemd-logind[2029]: Removed session 15. 
Nov 5 15:08:42.325838 kubelet[3589]: E1105 15:08:42.325788 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84577cbdbb-g8xvq" podUID="918f7e6e-ae2a-455d-8758-01b9af03afc5" Nov 5 15:08:42.520849 containerd[2056]: time="2025-11-05T15:08:42.520808836Z" level=info msg="TaskExit event in podsandbox handler container_id:\"20607295be6c34d0e78530a06ddff2fadef7a4d6779a246f9622052b0d6bb9b9\" id:\"09e858e2b18be069a332e95fa0309a7b76ab2ee1da13a42001866acac8b9050e\" pid:5785 exited_at:{seconds:1762355322 nanos:520549462}" Nov 5 15:08:43.322558 kubelet[3589]: E1105 15:08:43.322457 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-xbp7r" podUID="38f05b09-e539-4b0f-aa00-1af242dcf380" Nov 5 15:08:46.034602 systemd[1]: Started sshd@13-10.200.20.11:22-10.200.16.10:59258.service - OpenSSH per-connection server daemon (10.200.16.10:59258). Nov 5 15:08:46.328838 kubelet[3589]: E1105 15:08:46.328288 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nr498" podUID="5175072d-c97d-4e97-bbe9-4eb6c98f1e6a" Nov 5 15:08:46.500935 sshd[5800]: Accepted publickey for core from 10.200.16.10 port 59258 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:08:46.502339 sshd-session[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:08:46.508023 systemd-logind[2029]: New session 16 of user core. Nov 5 15:08:46.512500 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 15:08:46.869699 sshd[5803]: Connection closed by 10.200.16.10 port 59258 Nov 5 15:08:46.870306 sshd-session[5800]: pam_unix(sshd:session): session closed for user core Nov 5 15:08:46.873798 systemd-logind[2029]: Session 16 logged out. 
Waiting for processes to exit. Nov 5 15:08:46.874530 systemd[1]: sshd@13-10.200.20.11:22-10.200.16.10:59258.service: Deactivated successfully. Nov 5 15:08:46.876224 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 15:08:46.878520 systemd-logind[2029]: Removed session 16. Nov 5 15:08:48.324108 kubelet[3589]: E1105 15:08:48.324015 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-b4vch" podUID="5f8444f4-0b9f-4af6-a67a-d71e5d4f1309" Nov 5 15:08:49.324511 kubelet[3589]: E1105 15:08:49.324203 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lg4lc" podUID="460d4776-c8b3-4dec-911d-f1ebdf0cfa3b" Nov 5 15:08:51.956706 systemd[1]: Started sshd@14-10.200.20.11:22-10.200.16.10:41880.service - OpenSSH per-connection server daemon (10.200.16.10:41880). Nov 5 15:08:52.324189 kubelet[3589]: E1105 15:08:52.323897 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65d654bb8-d6fmx" podUID="c1f198dc-261d-4ac5-8860-91734a6c009d" Nov 5 15:08:52.422159 sshd[5815]: Accepted publickey for core from 10.200.16.10 port 41880 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:08:52.423836 sshd-session[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:08:52.429882 systemd-logind[2029]: New session 17 of user core. Nov 5 15:08:52.436038 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 15:08:52.793003 sshd[5818]: Connection closed by 10.200.16.10 port 41880 Nov 5 15:08:52.792825 sshd-session[5815]: pam_unix(sshd:session): session closed for user core Nov 5 15:08:52.797167 systemd[1]: sshd@14-10.200.20.11:22-10.200.16.10:41880.service: Deactivated successfully. Nov 5 15:08:52.800012 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 15:08:52.801752 systemd-logind[2029]: Session 17 logged out. Waiting for processes to exit. Nov 5 15:08:52.803257 systemd-logind[2029]: Removed session 17. 
Nov 5 15:08:53.324151 kubelet[3589]: E1105 15:08:53.324068 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c5db86d6-4vr9m" podUID="7b333bcc-6bcd-4c30-b1c0-19548616455c" Nov 5 15:08:57.323130 kubelet[3589]: E1105 15:08:57.323089 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-xbp7r" podUID="38f05b09-e539-4b0f-aa00-1af242dcf380" Nov 5 15:08:57.324818 kubelet[3589]: E1105 15:08:57.324786 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84577cbdbb-g8xvq" podUID="918f7e6e-ae2a-455d-8758-01b9af03afc5" Nov 5 15:08:57.877354 systemd[1]: Started sshd@15-10.200.20.11:22-10.200.16.10:41896.service - OpenSSH per-connection server daemon (10.200.16.10:41896). Nov 5 15:08:58.342817 sshd[5830]: Accepted publickey for core from 10.200.16.10 port 41896 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:08:58.343794 sshd-session[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:08:58.347794 systemd-logind[2029]: New session 18 of user core. Nov 5 15:08:58.353482 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 15:08:58.758389 sshd[5833]: Connection closed by 10.200.16.10 port 41896 Nov 5 15:08:58.759577 sshd-session[5830]: pam_unix(sshd:session): session closed for user core Nov 5 15:08:58.762689 systemd-logind[2029]: Session 18 logged out. Waiting for processes to exit. Nov 5 15:08:58.765032 systemd[1]: sshd@15-10.200.20.11:22-10.200.16.10:41896.service: Deactivated successfully. Nov 5 15:08:58.768638 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 15:08:58.770964 systemd-logind[2029]: Removed session 18. 
Nov 5 15:08:58.834607 systemd[1]: Started sshd@16-10.200.20.11:22-10.200.16.10:41912.service - OpenSSH per-connection server daemon (10.200.16.10:41912). Nov 5 15:08:59.251268 sshd[5845]: Accepted publickey for core from 10.200.16.10 port 41912 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:08:59.252885 sshd-session[5845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:08:59.256973 systemd-logind[2029]: New session 19 of user core. Nov 5 15:08:59.261493 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 15:08:59.325171 kubelet[3589]: E1105 15:08:59.325120 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nr498" podUID="5175072d-c97d-4e97-bbe9-4eb6c98f1e6a" Nov 5 15:08:59.732462 sshd[5848]: Connection closed by 10.200.16.10 port 41912 Nov 5 15:08:59.732677 sshd-session[5845]: pam_unix(sshd:session): session closed for user core Nov 5 15:08:59.736947 systemd-logind[2029]: Session 19 logged out. Waiting for processes to exit. Nov 5 15:08:59.737034 systemd[1]: sshd@16-10.200.20.11:22-10.200.16.10:41912.service: Deactivated successfully. Nov 5 15:08:59.738559 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 15:08:59.740179 systemd-logind[2029]: Removed session 19. Nov 5 15:08:59.816579 systemd[1]: Started sshd@17-10.200.20.11:22-10.200.16.10:40048.service - OpenSSH per-connection server daemon (10.200.16.10:40048). Nov 5 15:09:00.276482 sshd[5859]: Accepted publickey for core from 10.200.16.10 port 40048 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:09:00.277696 sshd-session[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:09:00.284352 systemd-logind[2029]: New session 20 of user core. Nov 5 15:09:00.290248 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 5 15:09:00.324713 kubelet[3589]: E1105 15:09:00.324660 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-b4vch" podUID="5f8444f4-0b9f-4af6-a67a-d71e5d4f1309" Nov 5 15:09:01.408732 sshd[5862]: Connection closed by 10.200.16.10 port 40048 Nov 5 15:09:01.411209 sshd-session[5859]: pam_unix(sshd:session): session closed for user core Nov 5 15:09:01.415564 systemd[1]: sshd@17-10.200.20.11:22-10.200.16.10:40048.service: Deactivated successfully. Nov 5 15:09:01.417657 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 15:09:01.419147 systemd-logind[2029]: Session 20 logged out. Waiting for processes to exit. Nov 5 15:09:01.421114 systemd-logind[2029]: Removed session 20. Nov 5 15:09:01.493339 systemd[1]: Started sshd@18-10.200.20.11:22-10.200.16.10:40052.service - OpenSSH per-connection server daemon (10.200.16.10:40052). Nov 5 15:09:01.954599 sshd[5878]: Accepted publickey for core from 10.200.16.10 port 40052 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:09:01.955838 sshd-session[5878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:09:01.960052 systemd-logind[2029]: New session 21 of user core. Nov 5 15:09:01.966494 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 5 15:09:02.327192 kubelet[3589]: E1105 15:09:02.326928 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lg4lc" podUID="460d4776-c8b3-4dec-911d-f1ebdf0cfa3b" Nov 5 15:09:02.469548 sshd[5881]: Connection closed by 10.200.16.10 port 40052 Nov 5 15:09:02.471555 sshd-session[5878]: pam_unix(sshd:session): session closed for user core Nov 5 15:09:02.474848 systemd[1]: sshd@18-10.200.20.11:22-10.200.16.10:40052.service: Deactivated successfully. Nov 5 15:09:02.476704 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 15:09:02.479623 systemd-logind[2029]: Session 21 logged out. Waiting for processes to exit. Nov 5 15:09:02.481616 systemd-logind[2029]: Removed session 21. Nov 5 15:09:02.565610 systemd[1]: Started sshd@19-10.200.20.11:22-10.200.16.10:40060.service - OpenSSH per-connection server daemon (10.200.16.10:40060). Nov 5 15:09:03.021049 sshd[5893]: Accepted publickey for core from 10.200.16.10 port 40060 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:09:03.022143 sshd-session[5893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:09:03.025865 systemd-logind[2029]: New session 22 of user core. Nov 5 15:09:03.034483 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 5 15:09:03.390186 sshd[5896]: Connection closed by 10.200.16.10 port 40060 Nov 5 15:09:03.392799 sshd-session[5893]: pam_unix(sshd:session): session closed for user core Nov 5 15:09:03.397191 systemd-logind[2029]: Session 22 logged out. Waiting for processes to exit. Nov 5 15:09:03.398566 systemd[1]: sshd@19-10.200.20.11:22-10.200.16.10:40060.service: Deactivated successfully. Nov 5 15:09:03.400045 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 15:09:03.403938 systemd-logind[2029]: Removed session 22. Nov 5 15:09:05.323656 containerd[2056]: time="2025-11-05T15:09:05.322826690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:09:05.637049 containerd[2056]: time="2025-11-05T15:09:05.636906099Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:09:05.640565 containerd[2056]: time="2025-11-05T15:09:05.640518117Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:09:05.640699 containerd[2056]: time="2025-11-05T15:09:05.640531581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:09:05.640794 kubelet[3589]: E1105 15:09:05.640746 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:09:05.641042 kubelet[3589]: E1105 15:09:05.640799 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:09:05.641042 kubelet[3589]: E1105 15:09:05.640903 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-66c5db86d6-4vr9m_calico-system(7b333bcc-6bcd-4c30-b1c0-19548616455c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:09:05.642958 containerd[2056]: time="2025-11-05T15:09:05.642931838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:09:06.283678 containerd[2056]: time="2025-11-05T15:09:06.283484413Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:09:06.286675 containerd[2056]: time="2025-11-05T15:09:06.286588068Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:09:06.286675 containerd[2056]: time="2025-11-05T15:09:06.286635541Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:09:06.286926 kubelet[3589]: E1105 15:09:06.286887 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:09:06.287048 kubelet[3589]: E1105 15:09:06.287033 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:09:06.287197 kubelet[3589]: E1105 15:09:06.287150 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66c5db86d6-4vr9m_calico-system(7b333bcc-6bcd-4c30-b1c0-19548616455c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:09:06.287319 kubelet[3589]: E1105 15:09:06.287280 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c5db86d6-4vr9m" podUID="7b333bcc-6bcd-4c30-b1c0-19548616455c" Nov 5 15:09:06.325539 kubelet[3589]: E1105 15:09:06.325477 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65d654bb8-d6fmx" podUID="c1f198dc-261d-4ac5-8860-91734a6c009d" Nov 5 15:09:08.325383 kubelet[3589]: E1105 15:09:08.323544 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-84577cbdbb-g8xvq" podUID="918f7e6e-ae2a-455d-8758-01b9af03afc5" Nov 5 15:09:08.326109 kubelet[3589]: E1105 15:09:08.326078 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-xbp7r" podUID="38f05b09-e539-4b0f-aa00-1af242dcf380" Nov 5 15:09:08.478689 systemd[1]: Started sshd@20-10.200.20.11:22-10.200.16.10:40076.service - OpenSSH per-connection server daemon (10.200.16.10:40076). Nov 5 15:09:08.940548 sshd[5918]: Accepted publickey for core from 10.200.16.10 port 40076 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:09:08.941664 sshd-session[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:09:08.945962 systemd-logind[2029]: New session 23 of user core. Nov 5 15:09:08.955540 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 5 15:09:09.331225 sshd[5921]: Connection closed by 10.200.16.10 port 40076 Nov 5 15:09:09.331851 sshd-session[5918]: pam_unix(sshd:session): session closed for user core Nov 5 15:09:09.336812 systemd[1]: sshd@20-10.200.20.11:22-10.200.16.10:40076.service: Deactivated successfully. Nov 5 15:09:09.336872 systemd-logind[2029]: Session 23 logged out. Waiting for processes to exit. Nov 5 15:09:09.338513 systemd[1]: session-23.scope: Deactivated successfully. Nov 5 15:09:09.341749 systemd-logind[2029]: Removed session 23. 
Nov 5 15:09:11.325195 kubelet[3589]: E1105 15:09:11.324845 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-b4vch" podUID="5f8444f4-0b9f-4af6-a67a-d71e5d4f1309" Nov 5 15:09:12.500306 containerd[2056]: time="2025-11-05T15:09:12.500259597Z" level=info msg="TaskExit event in podsandbox handler container_id:\"20607295be6c34d0e78530a06ddff2fadef7a4d6779a246f9622052b0d6bb9b9\" id:\"ced3fc431edc40b6acdc8bd1ea7bc8c4edcfeed5e82b30d27254cb001ccb27ec\" pid:5946 exited_at:{seconds:1762355352 nanos:499883012}" Nov 5 15:09:14.325189 kubelet[3589]: E1105 15:09:14.325063 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lg4lc" podUID="460d4776-c8b3-4dec-911d-f1ebdf0cfa3b" Nov 5 15:09:14.326704 containerd[2056]: time="2025-11-05T15:09:14.325108667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:09:14.414664 systemd[1]: Started sshd@21-10.200.20.11:22-10.200.16.10:60348.service - OpenSSH per-connection server daemon (10.200.16.10:60348). 
Nov 5 15:09:14.600504 containerd[2056]: time="2025-11-05T15:09:14.600378888Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:09:14.604046 containerd[2056]: time="2025-11-05T15:09:14.603929405Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:09:14.604046 containerd[2056]: time="2025-11-05T15:09:14.604015687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:09:14.604470 kubelet[3589]: E1105 15:09:14.604337 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:09:14.604470 kubelet[3589]: E1105 15:09:14.604423 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:09:14.604637 kubelet[3589]: E1105 15:09:14.604619 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-nr498_calico-system(5175072d-c97d-4e97-bbe9-4eb6c98f1e6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:09:14.605401 containerd[2056]: time="2025-11-05T15:09:14.605383645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:09:14.876991 sshd[5975]: Accepted publickey for core from 10.200.16.10 port 60348 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:09:14.878881 sshd-session[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:09:14.884261 systemd-logind[2029]: New session 24 of user core. Nov 5 15:09:14.885223 containerd[2056]: time="2025-11-05T15:09:14.885188637Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:09:14.888268 containerd[2056]: time="2025-11-05T15:09:14.888229791Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:09:14.888524 containerd[2056]: time="2025-11-05T15:09:14.888306793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:09:14.888517 systemd[1]: Started session-24.scope - Session 24 of User core. 
Nov 5 15:09:14.890415 kubelet[3589]: E1105 15:09:14.888869 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:09:14.890415 kubelet[3589]: E1105 15:09:14.889870 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:09:14.890415 kubelet[3589]: E1105 15:09:14.889958 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-nr498_calico-system(5175072d-c97d-4e97-bbe9-4eb6c98f1e6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:09:14.890551 kubelet[3589]: E1105 15:09:14.889989 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nr498" podUID="5175072d-c97d-4e97-bbe9-4eb6c98f1e6a" Nov 5 15:09:15.249130 sshd[5978]: Connection closed by 10.200.16.10 port 60348 Nov 5 15:09:15.249715 sshd-session[5975]: pam_unix(sshd:session): session closed for user core Nov 5 15:09:15.253216 systemd[1]: sshd@21-10.200.20.11:22-10.200.16.10:60348.service: Deactivated successfully. Nov 5 15:09:15.256679 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 15:09:15.259972 systemd-logind[2029]: Session 24 logged out. Waiting for processes to exit. Nov 5 15:09:15.261114 systemd-logind[2029]: Removed session 24. 
Nov 5 15:09:19.324859 containerd[2056]: time="2025-11-05T15:09:19.324609959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:09:19.326895 kubelet[3589]: E1105 15:09:19.325562 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c5db86d6-4vr9m" podUID="7b333bcc-6bcd-4c30-b1c0-19548616455c" Nov 5 15:09:19.674819 containerd[2056]: time="2025-11-05T15:09:19.673543738Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:09:19.677059 containerd[2056]: time="2025-11-05T15:09:19.677019245Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:09:19.677059 containerd[2056]: time="2025-11-05T15:09:19.677082919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:09:19.680301 kubelet[3589]: E1105 15:09:19.680262 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:09:19.681968 kubelet[3589]: E1105 15:09:19.680473 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:09:19.681968 kubelet[3589]: E1105 15:09:19.680646 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-65d654bb8-d6fmx_calico-apiserver(c1f198dc-261d-4ac5-8860-91734a6c009d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:09:19.681968 kubelet[3589]: E1105 15:09:19.680673 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65d654bb8-d6fmx" podUID="c1f198dc-261d-4ac5-8860-91734a6c009d" Nov 5 15:09:19.682081 containerd[2056]: time="2025-11-05T15:09:19.681627922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:09:20.091851 containerd[2056]: time="2025-11-05T15:09:20.091659543Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:09:20.097015 containerd[2056]: time="2025-11-05T15:09:20.096916834Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:09:20.097015 containerd[2056]: time="2025-11-05T15:09:20.096970867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:09:20.097208 kubelet[3589]: E1105 15:09:20.097163 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:09:20.097248 kubelet[3589]: E1105 15:09:20.097219 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:09:20.097329 kubelet[3589]: E1105 15:09:20.097307 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c94bc65c5-xbp7r_calico-apiserver(38f05b09-e539-4b0f-aa00-1af242dcf380): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:09:20.097370 kubelet[3589]: E1105 15:09:20.097339 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-xbp7r" podUID="38f05b09-e539-4b0f-aa00-1af242dcf380" Nov 5 15:09:20.329804 systemd[1]: Started sshd@22-10.200.20.11:22-10.200.16.10:49238.service - OpenSSH per-connection server daemon (10.200.16.10:49238). 
Nov 5 15:09:20.747097 sshd[5990]: Accepted publickey for core from 10.200.16.10 port 49238 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:09:20.748216 sshd-session[5990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:09:20.752375 systemd-logind[2029]: New session 25 of user core. Nov 5 15:09:20.758615 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 5 15:09:21.127870 sshd[5993]: Connection closed by 10.200.16.10 port 49238 Nov 5 15:09:21.128587 sshd-session[5990]: pam_unix(sshd:session): session closed for user core Nov 5 15:09:21.134052 systemd[1]: sshd@22-10.200.20.11:22-10.200.16.10:49238.service: Deactivated successfully. Nov 5 15:09:21.136457 systemd[1]: session-25.scope: Deactivated successfully. Nov 5 15:09:21.139994 systemd-logind[2029]: Session 25 logged out. Waiting for processes to exit. Nov 5 15:09:21.141046 systemd-logind[2029]: Removed session 25. Nov 5 15:09:21.325484 containerd[2056]: time="2025-11-05T15:09:21.325423374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:09:21.589131 containerd[2056]: time="2025-11-05T15:09:21.589080235Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:09:21.592439 containerd[2056]: time="2025-11-05T15:09:21.592273038Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:09:21.592562 containerd[2056]: time="2025-11-05T15:09:21.592321575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:09:21.592762 kubelet[3589]: E1105 15:09:21.592725 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:09:21.593376 kubelet[3589]: E1105 15:09:21.593108 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:09:21.593376 kubelet[3589]: E1105 15:09:21.593208 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-84577cbdbb-g8xvq_calico-system(918f7e6e-ae2a-455d-8758-01b9af03afc5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:09:21.593500 kubelet[3589]: E1105 15:09:21.593366 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84577cbdbb-g8xvq" podUID="918f7e6e-ae2a-455d-8758-01b9af03afc5" Nov 5 15:09:23.324230 containerd[2056]: time="2025-11-05T15:09:23.323338294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:09:23.603959 containerd[2056]: time="2025-11-05T15:09:23.603798155Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:09:23.607856 containerd[2056]: time="2025-11-05T15:09:23.607804438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:09:23.607986 containerd[2056]: time="2025-11-05T15:09:23.607904320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:09:23.608193 kubelet[3589]: E1105 15:09:23.608148 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:09:23.608542 kubelet[3589]: E1105 15:09:23.608197 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:09:23.608542 kubelet[3589]: E1105 15:09:23.608456 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7c94bc65c5-b4vch_calico-apiserver(5f8444f4-0b9f-4af6-a67a-d71e5d4f1309): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:09:23.608680 kubelet[3589]: E1105 15:09:23.608485 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-b4vch" podUID="5f8444f4-0b9f-4af6-a67a-d71e5d4f1309" Nov 5 15:09:25.324098 containerd[2056]: time="2025-11-05T15:09:25.324038845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:09:25.671614 containerd[2056]: time="2025-11-05T15:09:25.671349153Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:09:25.674498 containerd[2056]: time="2025-11-05T15:09:25.674373922Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:09:25.674498 containerd[2056]: time="2025-11-05T15:09:25.674467940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:09:25.674619 kubelet[3589]: E1105 15:09:25.674587 3589 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:09:25.674874 kubelet[3589]: E1105 15:09:25.674630 3589 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:09:25.674874 kubelet[3589]: E1105 15:09:25.674692 3589 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-lg4lc_calico-system(460d4776-c8b3-4dec-911d-f1ebdf0cfa3b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:09:25.674874 kubelet[3589]: E1105 15:09:25.674715 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-lg4lc" podUID="460d4776-c8b3-4dec-911d-f1ebdf0cfa3b" Nov 5 15:09:26.214204 systemd[1]: Started sshd@23-10.200.20.11:22-10.200.16.10:49242.service - OpenSSH per-connection server daemon (10.200.16.10:49242). Nov 5 15:09:26.683112 sshd[6013]: Accepted publickey for core from 10.200.16.10 port 49242 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:09:26.684252 sshd-session[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:09:26.688525 systemd-logind[2029]: New session 26 of user core. Nov 5 15:09:26.695694 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 5 15:09:27.074391 sshd[6016]: Connection closed by 10.200.16.10 port 49242 Nov 5 15:09:27.075918 sshd-session[6013]: pam_unix(sshd:session): session closed for user core Nov 5 15:09:27.080125 systemd[1]: sshd@23-10.200.20.11:22-10.200.16.10:49242.service: Deactivated successfully. Nov 5 15:09:27.083979 systemd[1]: session-26.scope: Deactivated successfully. Nov 5 15:09:27.085456 systemd-logind[2029]: Session 26 logged out. Waiting for processes to exit. Nov 5 15:09:27.086804 systemd-logind[2029]: Removed session 26. 
Nov 5 15:09:28.330659 kubelet[3589]: E1105 15:09:28.330059 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-nr498" podUID="5175072d-c97d-4e97-bbe9-4eb6c98f1e6a" Nov 5 15:09:31.324643 kubelet[3589]: E1105 15:09:31.324330 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65d654bb8-d6fmx" podUID="c1f198dc-261d-4ac5-8860-91734a6c009d" Nov 5 15:09:32.155557 systemd[1]: Started sshd@24-10.200.20.11:22-10.200.16.10:55526.service - OpenSSH per-connection server daemon (10.200.16.10:55526). Nov 5 15:09:32.611516 sshd[6029]: Accepted publickey for core from 10.200.16.10 port 55526 ssh2: RSA SHA256:mGUAnMJC54q9ii6P+9FPV0TJpSBkn3Z8kncSeRZ8Yxo Nov 5 15:09:32.612644 sshd-session[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:09:32.616984 systemd-logind[2029]: New session 27 of user core. Nov 5 15:09:32.621492 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 5 15:09:32.980172 sshd[6032]: Connection closed by 10.200.16.10 port 55526 Nov 5 15:09:32.980002 sshd-session[6029]: pam_unix(sshd:session): session closed for user core Nov 5 15:09:32.983774 systemd-logind[2029]: Session 27 logged out. Waiting for processes to exit. Nov 5 15:09:32.984697 systemd[1]: sshd@24-10.200.20.11:22-10.200.16.10:55526.service: Deactivated successfully. Nov 5 15:09:32.986801 systemd[1]: session-27.scope: Deactivated successfully. Nov 5 15:09:32.988862 systemd-logind[2029]: Removed session 27. 
Nov 5 15:09:33.323014 kubelet[3589]: E1105 15:09:33.322940 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c94bc65c5-xbp7r" podUID="38f05b09-e539-4b0f-aa00-1af242dcf380" Nov 5 15:09:34.327428 kubelet[3589]: E1105 15:09:34.327381 3589 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66c5db86d6-4vr9m" podUID="7b333bcc-6bcd-4c30-b1c0-19548616455c"