Nov 23 23:05:03.779618 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 23 23:05:03.779641 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Nov 23 20:53:53 -00 2025
Nov 23 23:05:03.779650 kernel: KASLR enabled
Nov 23 23:05:03.779656 kernel: efi: EFI v2.7 by EDK II
Nov 23 23:05:03.779661 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Nov 23 23:05:03.779667 kernel: random: crng init done
Nov 23 23:05:03.779674 kernel: secureboot: Secure boot disabled
Nov 23 23:05:03.779680 kernel: ACPI: Early table checksum verification disabled
Nov 23 23:05:03.779685 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Nov 23 23:05:03.779692 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Nov 23 23:05:03.779699 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:05:03.779704 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:05:03.779710 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:05:03.779716 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:05:03.779723 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:05:03.779730 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:05:03.779736 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:05:03.779742 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:05:03.779777 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 23 23:05:03.779786 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Nov 23 23:05:03.779792 kernel: ACPI: Use ACPI SPCR as default console: No
Nov 23 23:05:03.779798 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Nov 23 23:05:03.779804 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Nov 23 23:05:03.779810 kernel: Zone ranges:
Nov 23 23:05:03.779816 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Nov 23 23:05:03.779825 kernel: DMA32 empty
Nov 23 23:05:03.779844 kernel: Normal empty
Nov 23 23:05:03.779850 kernel: Device empty
Nov 23 23:05:03.779856 kernel: Movable zone start for each node
Nov 23 23:05:03.779862 kernel: Early memory node ranges
Nov 23 23:05:03.779868 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Nov 23 23:05:03.779874 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Nov 23 23:05:03.779880 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Nov 23 23:05:03.779886 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Nov 23 23:05:03.779892 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Nov 23 23:05:03.779898 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Nov 23 23:05:03.779904 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Nov 23 23:05:03.779911 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Nov 23 23:05:03.779917 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Nov 23 23:05:03.779924 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Nov 23 23:05:03.779932 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Nov 23 23:05:03.779939 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Nov 23 23:05:03.779945 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Nov 23 23:05:03.779953 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Nov 23 23:05:03.779959 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Nov 23 23:05:03.779965 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Nov 23 23:05:03.779972 kernel: psci: probing for conduit method from ACPI.
Nov 23 23:05:03.779978 kernel: psci: PSCIv1.1 detected in firmware.
Nov 23 23:05:03.779984 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 23 23:05:03.779990 kernel: psci: Trusted OS migration not required
Nov 23 23:05:03.779997 kernel: psci: SMC Calling Convention v1.1
Nov 23 23:05:03.780003 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Nov 23 23:05:03.780010 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Nov 23 23:05:03.780017 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Nov 23 23:05:03.780024 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Nov 23 23:05:03.780030 kernel: Detected PIPT I-cache on CPU0
Nov 23 23:05:03.780036 kernel: CPU features: detected: GIC system register CPU interface
Nov 23 23:05:03.780043 kernel: CPU features: detected: Spectre-v4
Nov 23 23:05:03.780049 kernel: CPU features: detected: Spectre-BHB
Nov 23 23:05:03.780055 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 23 23:05:03.780062 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 23 23:05:03.780068 kernel: CPU features: detected: ARM erratum 1418040
Nov 23 23:05:03.780074 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 23 23:05:03.780081 kernel: alternatives: applying boot alternatives
Nov 23 23:05:03.780088 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4db094b704dd398addf25219e01d6d8f197b31dbf6377199102cc61dad0e4bb2
Nov 23 23:05:03.780096 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 23 23:05:03.780103 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 23 23:05:03.780109 kernel: Fallback order for Node 0: 0
Nov 23 23:05:03.780115 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Nov 23 23:05:03.780122 kernel: Policy zone: DMA
Nov 23 23:05:03.780128 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 23 23:05:03.780134 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Nov 23 23:05:03.780141 kernel: software IO TLB: area num 4.
Nov 23 23:05:03.780147 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Nov 23 23:05:03.780154 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Nov 23 23:05:03.780160 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 23 23:05:03.780168 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 23 23:05:03.780175 kernel: rcu: RCU event tracing is enabled.
Nov 23 23:05:03.780181 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 23 23:05:03.780188 kernel: Trampoline variant of Tasks RCU enabled.
Nov 23 23:05:03.780194 kernel: Tracing variant of Tasks RCU enabled.
Nov 23 23:05:03.780201 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 23 23:05:03.780213 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 23 23:05:03.780219 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 23 23:05:03.780226 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 23 23:05:03.780232 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 23 23:05:03.780239 kernel: GICv3: 256 SPIs implemented
Nov 23 23:05:03.780246 kernel: GICv3: 0 Extended SPIs implemented
Nov 23 23:05:03.780253 kernel: Root IRQ handler: gic_handle_irq
Nov 23 23:05:03.780259 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Nov 23 23:05:03.780265 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Nov 23 23:05:03.780272 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Nov 23 23:05:03.780278 kernel: ITS [mem 0x08080000-0x0809ffff]
Nov 23 23:05:03.780284 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Nov 23 23:05:03.780291 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Nov 23 23:05:03.780297 kernel: GICv3: using LPI property table @0x0000000040130000
Nov 23 23:05:03.780304 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Nov 23 23:05:03.780310 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 23 23:05:03.780317 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 23 23:05:03.780324 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 23 23:05:03.780331 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 23 23:05:03.780338 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 23 23:05:03.780344 kernel: arm-pv: using stolen time PV
Nov 23 23:05:03.780351 kernel: Console: colour dummy device 80x25
Nov 23 23:05:03.780357 kernel: ACPI: Core revision 20240827
Nov 23 23:05:03.780364 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 23 23:05:03.780371 kernel: pid_max: default: 32768 minimum: 301
Nov 23 23:05:03.780377 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 23 23:05:03.780384 kernel: landlock: Up and running.
Nov 23 23:05:03.780391 kernel: SELinux: Initializing.
Nov 23 23:05:03.780398 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 23 23:05:03.780404 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 23 23:05:03.780411 kernel: rcu: Hierarchical SRCU implementation.
Nov 23 23:05:03.780418 kernel: rcu: Max phase no-delay instances is 400.
Nov 23 23:05:03.780425 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 23 23:05:03.780432 kernel: Remapping and enabling EFI services.
Nov 23 23:05:03.780439 kernel: smp: Bringing up secondary CPUs ...
Nov 23 23:05:03.780446 kernel: Detected PIPT I-cache on CPU1
Nov 23 23:05:03.780458 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Nov 23 23:05:03.780465 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Nov 23 23:05:03.780472 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 23 23:05:03.780480 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 23 23:05:03.780548 kernel: Detected PIPT I-cache on CPU2
Nov 23 23:05:03.780558 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Nov 23 23:05:03.780565 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Nov 23 23:05:03.780572 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 23 23:05:03.780582 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Nov 23 23:05:03.780589 kernel: Detected PIPT I-cache on CPU3
Nov 23 23:05:03.780595 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Nov 23 23:05:03.780603 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Nov 23 23:05:03.780610 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 23 23:05:03.780616 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Nov 23 23:05:03.780623 kernel: smp: Brought up 1 node, 4 CPUs
Nov 23 23:05:03.780630 kernel: SMP: Total of 4 processors activated.
Nov 23 23:05:03.780637 kernel: CPU: All CPU(s) started at EL1
Nov 23 23:05:03.780645 kernel: CPU features: detected: 32-bit EL0 Support
Nov 23 23:05:03.780652 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 23 23:05:03.780659 kernel: CPU features: detected: Common not Private translations
Nov 23 23:05:03.780666 kernel: CPU features: detected: CRC32 instructions
Nov 23 23:05:03.780673 kernel: CPU features: detected: Enhanced Virtualization Traps
Nov 23 23:05:03.780680 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 23 23:05:03.780686 kernel: CPU features: detected: LSE atomic instructions
Nov 23 23:05:03.780693 kernel: CPU features: detected: Privileged Access Never
Nov 23 23:05:03.780700 kernel: CPU features: detected: RAS Extension Support
Nov 23 23:05:03.780708 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 23 23:05:03.780715 kernel: alternatives: applying system-wide alternatives
Nov 23 23:05:03.780722 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Nov 23 23:05:03.780730 kernel: Memory: 2423776K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 126176K reserved, 16384K cma-reserved)
Nov 23 23:05:03.780737 kernel: devtmpfs: initialized
Nov 23 23:05:03.780744 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 23 23:05:03.780769 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 23 23:05:03.780776 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 23 23:05:03.780783 kernel: 0 pages in range for non-PLT usage
Nov 23 23:05:03.780794 kernel: 508400 pages in range for PLT usage
Nov 23 23:05:03.780801 kernel: pinctrl core: initialized pinctrl subsystem
Nov 23 23:05:03.780808 kernel: SMBIOS 3.0.0 present.
Nov 23 23:05:03.780815 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Nov 23 23:05:03.780822 kernel: DMI: Memory slots populated: 1/1
Nov 23 23:05:03.780829 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 23 23:05:03.780836 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 23 23:05:03.780843 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 23 23:05:03.780850 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 23 23:05:03.780859 kernel: audit: initializing netlink subsys (disabled)
Nov 23 23:05:03.780866 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Nov 23 23:05:03.780873 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 23 23:05:03.780880 kernel: cpuidle: using governor menu
Nov 23 23:05:03.780887 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 23 23:05:03.780894 kernel: ASID allocator initialised with 32768 entries
Nov 23 23:05:03.780901 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 23 23:05:03.780908 kernel: Serial: AMBA PL011 UART driver
Nov 23 23:05:03.780915 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 23 23:05:03.780923 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 23 23:05:03.780931 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 23 23:05:03.780938 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 23 23:05:03.780945 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 23 23:05:03.780952 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 23 23:05:03.780959 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 23 23:05:03.780970 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 23 23:05:03.780978 kernel: ACPI: Added _OSI(Module Device)
Nov 23 23:05:03.780989 kernel: ACPI: Added _OSI(Processor Device)
Nov 23 23:05:03.781001 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 23 23:05:03.781009 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 23 23:05:03.781018 kernel: ACPI: Interpreter enabled
Nov 23 23:05:03.781025 kernel: ACPI: Using GIC for interrupt routing
Nov 23 23:05:03.781032 kernel: ACPI: MCFG table detected, 1 entries
Nov 23 23:05:03.781039 kernel: ACPI: CPU0 has been hot-added
Nov 23 23:05:03.781046 kernel: ACPI: CPU1 has been hot-added
Nov 23 23:05:03.781053 kernel: ACPI: CPU2 has been hot-added
Nov 23 23:05:03.781060 kernel: ACPI: CPU3 has been hot-added
Nov 23 23:05:03.781067 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Nov 23 23:05:03.781075 kernel: printk: legacy console [ttyAMA0] enabled
Nov 23 23:05:03.781082 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 23 23:05:03.781236 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 23 23:05:03.781308 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 23 23:05:03.781374 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 23 23:05:03.781433 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Nov 23 23:05:03.781491 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Nov 23 23:05:03.781502 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Nov 23 23:05:03.781509 kernel: PCI host bridge to bus 0000:00
Nov 23 23:05:03.781579 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Nov 23 23:05:03.781634 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 23 23:05:03.781696 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Nov 23 23:05:03.781763 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 23 23:05:03.781845 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Nov 23 23:05:03.781921 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 23 23:05:03.781983 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Nov 23 23:05:03.782044 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Nov 23 23:05:03.782104 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 23 23:05:03.782165 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Nov 23 23:05:03.782238 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Nov 23 23:05:03.782302 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Nov 23 23:05:03.782358 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Nov 23 23:05:03.782412 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 23 23:05:03.782466 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Nov 23 23:05:03.782475 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 23 23:05:03.782482 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 23 23:05:03.782489 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 23 23:05:03.782497 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 23 23:05:03.782506 kernel: iommu: Default domain type: Translated
Nov 23 23:05:03.782514 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 23 23:05:03.782521 kernel: efivars: Registered efivars operations
Nov 23 23:05:03.782528 kernel: vgaarb: loaded
Nov 23 23:05:03.782535 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 23 23:05:03.782542 kernel: VFS: Disk quotas dquot_6.6.0
Nov 23 23:05:03.782549 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 23 23:05:03.782556 kernel: pnp: PnP ACPI init
Nov 23 23:05:03.782631 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Nov 23 23:05:03.782643 kernel: pnp: PnP ACPI: found 1 devices
Nov 23 23:05:03.782650 kernel: NET: Registered PF_INET protocol family
Nov 23 23:05:03.782658 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 23 23:05:03.782665 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 23 23:05:03.782672 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 23 23:05:03.782680 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 23 23:05:03.782687 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 23 23:05:03.782695 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 23 23:05:03.782703 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 23 23:05:03.782711 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 23 23:05:03.782718 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 23 23:05:03.782725 kernel: PCI: CLS 0 bytes, default 64
Nov 23 23:05:03.782732 kernel: kvm [1]: HYP mode not available
Nov 23 23:05:03.782739 kernel: Initialise system trusted keyrings
Nov 23 23:05:03.782746 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 23 23:05:03.782763 kernel: Key type asymmetric registered
Nov 23 23:05:03.782770 kernel: Asymmetric key parser 'x509' registered
Nov 23 23:05:03.782779 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 23 23:05:03.782798 kernel: io scheduler mq-deadline registered
Nov 23 23:05:03.782805 kernel: io scheduler kyber registered
Nov 23 23:05:03.782812 kernel: io scheduler bfq registered
Nov 23 23:05:03.782820 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 23 23:05:03.782827 kernel: ACPI: button: Power Button [PWRB]
Nov 23 23:05:03.782834 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 23 23:05:03.782901 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Nov 23 23:05:03.782911 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 23 23:05:03.782919 kernel: thunder_xcv, ver 1.0
Nov 23 23:05:03.782927 kernel: thunder_bgx, ver 1.0
Nov 23 23:05:03.782950 kernel: nicpf, ver 1.0
Nov 23 23:05:03.782959 kernel: nicvf, ver 1.0
Nov 23 23:05:03.783039 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 23 23:05:03.783098 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-23T23:05:03 UTC (1763939103)
Nov 23 23:05:03.783108 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 23 23:05:03.783115 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Nov 23 23:05:03.783124 kernel: watchdog: NMI not fully supported
Nov 23 23:05:03.783132 kernel: watchdog: Hard watchdog permanently disabled
Nov 23 23:05:03.783139 kernel: NET: Registered PF_INET6 protocol family
Nov 23 23:05:03.783146 kernel: Segment Routing with IPv6
Nov 23 23:05:03.783153 kernel: In-situ OAM (IOAM) with IPv6
Nov 23 23:05:03.783160 kernel: NET: Registered PF_PACKET protocol family
Nov 23 23:05:03.783167 kernel: Key type dns_resolver registered
Nov 23 23:05:03.783174 kernel: registered taskstats version 1
Nov 23 23:05:03.783181 kernel: Loading compiled-in X.509 certificates
Nov 23 23:05:03.783188 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 00c36da29593053a7da9cd3c5945ae69451ce339'
Nov 23 23:05:03.783197 kernel: Demotion targets for Node 0: null
Nov 23 23:05:03.783211 kernel: Key type .fscrypt registered
Nov 23 23:05:03.783220 kernel: Key type fscrypt-provisioning registered
Nov 23 23:05:03.783227 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 23 23:05:03.783234 kernel: ima: Allocated hash algorithm: sha1
Nov 23 23:05:03.783242 kernel: ima: No architecture policies found
Nov 23 23:05:03.783249 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 23 23:05:03.783256 kernel: clk: Disabling unused clocks
Nov 23 23:05:03.783278 kernel: PM: genpd: Disabling unused power domains
Nov 23 23:05:03.783287 kernel: Warning: unable to open an initial console.
Nov 23 23:05:03.783295 kernel: Freeing unused kernel memory: 39552K
Nov 23 23:05:03.783303 kernel: Run /init as init process
Nov 23 23:05:03.783310 kernel: with arguments:
Nov 23 23:05:03.783317 kernel: /init
Nov 23 23:05:03.783324 kernel: with environment:
Nov 23 23:05:03.783330 kernel: HOME=/
Nov 23 23:05:03.783337 kernel: TERM=linux
Nov 23 23:05:03.783345 systemd[1]: Successfully made /usr/ read-only.
Nov 23 23:05:03.783357 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 23 23:05:03.783365 systemd[1]: Detected virtualization kvm.
Nov 23 23:05:03.783373 systemd[1]: Detected architecture arm64.
Nov 23 23:05:03.783380 systemd[1]: Running in initrd.
Nov 23 23:05:03.783388 systemd[1]: No hostname configured, using default hostname.
Nov 23 23:05:03.783395 systemd[1]: Hostname set to .
Nov 23 23:05:03.783403 systemd[1]: Initializing machine ID from VM UUID.
Nov 23 23:05:03.783412 systemd[1]: Queued start job for default target initrd.target.
Nov 23 23:05:03.783420 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 23 23:05:03.783428 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 23 23:05:03.783436 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 23 23:05:03.783444 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 23 23:05:03.783452 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 23 23:05:03.783460 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 23 23:05:03.783470 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 23 23:05:03.783477 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 23 23:05:03.783485 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 23 23:05:03.783493 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 23 23:05:03.783500 systemd[1]: Reached target paths.target - Path Units.
Nov 23 23:05:03.783507 systemd[1]: Reached target slices.target - Slice Units.
Nov 23 23:05:03.783515 systemd[1]: Reached target swap.target - Swaps.
Nov 23 23:05:03.783523 systemd[1]: Reached target timers.target - Timer Units.
Nov 23 23:05:03.783531 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 23 23:05:03.783539 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 23 23:05:03.783546 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 23 23:05:03.783554 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 23 23:05:03.783561 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 23 23:05:03.783569 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 23 23:05:03.783577 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 23 23:05:03.783584 systemd[1]: Reached target sockets.target - Socket Units.
Nov 23 23:05:03.783593 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 23 23:05:03.783601 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 23 23:05:03.783608 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 23 23:05:03.783616 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 23 23:05:03.783624 systemd[1]: Starting systemd-fsck-usr.service...
Nov 23 23:05:03.783632 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 23 23:05:03.783639 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 23 23:05:03.783647 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:05:03.783654 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 23 23:05:03.783663 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 23 23:05:03.783671 systemd[1]: Finished systemd-fsck-usr.service.
Nov 23 23:05:03.783679 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 23 23:05:03.783686 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:05:03.783714 systemd-journald[245]: Collecting audit messages is disabled.
Nov 23 23:05:03.783733 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 23 23:05:03.783741 kernel: Bridge firewalling registered
Nov 23 23:05:03.783764 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 23 23:05:03.783777 systemd-journald[245]: Journal started
Nov 23 23:05:03.783797 systemd-journald[245]: Runtime Journal (/run/log/journal/dae6ca0627e74d30bc03e122a9868373) is 6M, max 48.5M, 42.4M free.
Nov 23 23:05:03.766998 systemd-modules-load[248]: Inserted module 'overlay'
Nov 23 23:05:03.785856 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 23 23:05:03.781624 systemd-modules-load[248]: Inserted module 'br_netfilter'
Nov 23 23:05:03.795927 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 23 23:05:03.797373 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 23 23:05:03.801656 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 23 23:05:03.805304 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 23 23:05:03.811395 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 23 23:05:03.813267 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 23 23:05:03.816187 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 23 23:05:03.820462 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 23 23:05:03.822498 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 23 23:05:03.823747 systemd-tmpfiles[284]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 23 23:05:03.826652 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 23 23:05:03.829528 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 23 23:05:03.838584 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4db094b704dd398addf25219e01d6d8f197b31dbf6377199102cc61dad0e4bb2
Nov 23 23:05:03.869741 systemd-resolved[295]: Positive Trust Anchors:
Nov 23 23:05:03.869823 systemd-resolved[295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 23 23:05:03.869855 systemd-resolved[295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 23 23:05:03.874635 systemd-resolved[295]: Defaulting to hostname 'linux'.
Nov 23 23:05:03.875702 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 23 23:05:03.878806 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 23 23:05:03.920781 kernel: SCSI subsystem initialized
Nov 23 23:05:03.925788 kernel: Loading iSCSI transport class v2.0-870.
Nov 23 23:05:03.933774 kernel: iscsi: registered transport (tcp)
Nov 23 23:05:03.946788 kernel: iscsi: registered transport (qla4xxx)
Nov 23 23:05:03.946809 kernel: QLogic iSCSI HBA Driver
Nov 23 23:05:03.963820 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 23 23:05:03.983857 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 23 23:05:03.986499 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 23 23:05:04.034821 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 23 23:05:04.036827 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 23 23:05:04.100785 kernel: raid6: neonx8 gen() 15682 MB/s
Nov 23 23:05:04.117775 kernel: raid6: neonx4 gen() 15785 MB/s
Nov 23 23:05:04.134774 kernel: raid6: neonx2 gen() 13207 MB/s
Nov 23 23:05:04.151774 kernel: raid6: neonx1 gen() 10204 MB/s
Nov 23 23:05:04.168779 kernel: raid6: int64x8 gen() 6750 MB/s
Nov 23 23:05:04.185773 kernel: raid6: int64x4 gen() 7189 MB/s
Nov 23 23:05:04.202771 kernel: raid6: int64x2 gen() 6092 MB/s
Nov 23 23:05:04.219798 kernel: raid6: int64x1 gen() 5046 MB/s
Nov 23 23:05:04.219828 kernel: raid6: using algorithm neonx4 gen() 15785 MB/s
Nov 23 23:05:04.237898 kernel: raid6: .... xor() 12348 MB/s, rmw enabled
Nov 23 23:05:04.237923 kernel: raid6: using neon recovery algorithm
Nov 23 23:05:04.243961 kernel: xor: measuring software checksum speed
Nov 23 23:05:04.243989 kernel: 8regs : 21636 MB/sec
Nov 23 23:05:04.243999 kernel: 32regs : 21687 MB/sec
Nov 23 23:05:04.245111 kernel: arm64_neon : 28032 MB/sec
Nov 23 23:05:04.245129 kernel: xor: using function: arm64_neon (28032 MB/sec)
Nov 23 23:05:04.297795 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 23 23:05:04.304519 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 23 23:05:04.307023 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 23 23:05:04.339946 systemd-udevd[501]: Using default interface naming scheme 'v255'.
Nov 23 23:05:04.345669 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 23 23:05:04.347645 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 23 23:05:04.372969 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation
Nov 23 23:05:04.396633 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 23 23:05:04.399105 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 23 23:05:04.458401 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 23 23:05:04.460467 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 23 23:05:04.525480 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Nov 23 23:05:04.525680 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 23 23:05:04.529237 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 23 23:05:04.532755 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 23 23:05:04.532781 kernel: GPT:9289727 != 19775487
Nov 23 23:05:04.532791 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 23 23:05:04.532800 kernel: GPT:9289727 != 19775487
Nov 23 23:05:04.532811 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 23 23:05:04.532820 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 23 23:05:04.529364 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:05:04.534977 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:05:04.537168 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 23 23:05:04.560382 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 23 23:05:04.569643 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 23 23:05:04.572284 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 23 23:05:04.573675 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 23 23:05:04.590660 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 23 23:05:04.592001 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 23 23:05:04.600802 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 23 23:05:04.602012 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 23 23:05:04.604149 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 23 23:05:04.606410 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 23 23:05:04.609373 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 23 23:05:04.611283 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 23 23:05:04.634791 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 23 23:05:04.634857 disk-uuid[591]: Primary Header is updated.
Nov 23 23:05:04.634857 disk-uuid[591]: Secondary Entries is updated.
Nov 23 23:05:04.634857 disk-uuid[591]: Secondary Header is updated.
Nov 23 23:05:04.634810 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 23 23:05:05.650700 disk-uuid[598]: The operation has completed successfully.
Nov 23 23:05:05.651939 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 23 23:05:05.681308 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 23 23:05:05.681430 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 23 23:05:05.703989 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 23 23:05:05.722849 sh[611]: Success
Nov 23 23:05:05.736395 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 23 23:05:05.736448 kernel: device-mapper: uevent: version 1.0.3
Nov 23 23:05:05.736460 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 23 23:05:05.744769 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Nov 23 23:05:05.773531 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 23 23:05:05.775405 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 23 23:05:05.791030 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 23 23:05:05.798790 kernel: BTRFS: device fsid 5fd06d80-8dd4-4ca0-aa0c-93ddab5f4498 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (623)
Nov 23 23:05:05.801783 kernel: BTRFS info (device dm-0): first mount of filesystem 5fd06d80-8dd4-4ca0-aa0c-93ddab5f4498
Nov 23 23:05:05.801835 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 23 23:05:05.806237 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 23 23:05:05.806296 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 23 23:05:05.807430 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 23 23:05:05.808719 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 23 23:05:05.809971 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 23 23:05:05.810737 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 23 23:05:05.812502 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 23 23:05:05.834789 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (654)
Nov 23 23:05:05.837540 kernel: BTRFS info (device vda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3
Nov 23 23:05:05.837596 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 23 23:05:05.841047 kernel: BTRFS info (device vda6): turning on async discard
Nov 23 23:05:05.841124 kernel: BTRFS info (device vda6): enabling free space tree
Nov 23 23:05:05.845786 kernel: BTRFS info (device vda6): last unmount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3
Nov 23 23:05:05.847870 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 23 23:05:05.851923 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 23 23:05:05.923392 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 23 23:05:05.928501 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 23 23:05:05.946695 ignition[704]: Ignition 2.22.0
Nov 23 23:05:05.946713 ignition[704]: Stage: fetch-offline
Nov 23 23:05:05.946757 ignition[704]: no configs at "/usr/lib/ignition/base.d"
Nov 23 23:05:05.946766 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 23 23:05:05.946858 ignition[704]: parsed url from cmdline: ""
Nov 23 23:05:05.946862 ignition[704]: no config URL provided
Nov 23 23:05:05.946866 ignition[704]: reading system config file "/usr/lib/ignition/user.ign"
Nov 23 23:05:05.946872 ignition[704]: no config at "/usr/lib/ignition/user.ign"
Nov 23 23:05:05.946893 ignition[704]: op(1): [started] loading QEMU firmware config module
Nov 23 23:05:05.946898 ignition[704]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 23 23:05:05.952142 ignition[704]: op(1): [finished] loading QEMU firmware config module
Nov 23 23:05:05.976463 systemd-networkd[805]: lo: Link UP
Nov 23 23:05:05.977429 systemd-networkd[805]: lo: Gained carrier
Nov 23 23:05:05.978958 systemd-networkd[805]: Enumeration completed
Nov 23 23:05:05.979084 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 23 23:05:05.981662 systemd-networkd[805]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:05:05.981666 systemd-networkd[805]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 23 23:05:05.982015 systemd[1]: Reached target network.target - Network.
Nov 23 23:05:05.982664 systemd-networkd[805]: eth0: Link UP
Nov 23 23:05:05.982758 systemd-networkd[805]: eth0: Gained carrier
Nov 23 23:05:05.982767 systemd-networkd[805]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 23 23:05:05.998803 systemd-networkd[805]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 23 23:05:06.007249 ignition[704]: parsing config with SHA512: 20e95573b9ba37ecb88814d69562d4c3b8bda4a458f5923fd6ae558e7335bae3718013b3fc75671a17f1228526c3fe04cf4ff0d4794e6995603d8ffe906f59ef
Nov 23 23:05:06.012994 unknown[704]: fetched base config from "system"
Nov 23 23:05:06.013771 unknown[704]: fetched user config from "qemu"
Nov 23 23:05:06.014136 ignition[704]: fetch-offline: fetch-offline passed
Nov 23 23:05:06.014236 ignition[704]: Ignition finished successfully
Nov 23 23:05:06.016489 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 23 23:05:06.017798 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 23 23:05:06.018629 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 23 23:05:06.055885 ignition[812]: Ignition 2.22.0
Nov 23 23:05:06.055900 ignition[812]: Stage: kargs
Nov 23 23:05:06.056042 ignition[812]: no configs at "/usr/lib/ignition/base.d"
Nov 23 23:05:06.056052 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 23 23:05:06.056809 ignition[812]: kargs: kargs passed
Nov 23 23:05:06.056856 ignition[812]: Ignition finished successfully
Nov 23 23:05:06.059628 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 23 23:05:06.062882 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 23 23:05:06.104227 ignition[820]: Ignition 2.22.0
Nov 23 23:05:06.104239 ignition[820]: Stage: disks
Nov 23 23:05:06.104369 ignition[820]: no configs at "/usr/lib/ignition/base.d"
Nov 23 23:05:06.107337 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 23 23:05:06.104377 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 23 23:05:06.108468 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 23 23:05:06.105108 ignition[820]: disks: disks passed
Nov 23 23:05:06.109978 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 23 23:05:06.105151 ignition[820]: Ignition finished successfully
Nov 23 23:05:06.111885 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 23 23:05:06.113706 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 23 23:05:06.115058 systemd[1]: Reached target basic.target - Basic System.
Nov 23 23:05:06.117476 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 23 23:05:06.149773 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Nov 23 23:05:06.206493 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 23 23:05:06.208592 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 23 23:05:06.269763 kernel: EXT4-fs (vda9): mounted filesystem fa3f8731-d4e3-4e51-b6db-fa404206cf07 r/w with ordered data mode. Quota mode: none.
Nov 23 23:05:06.270294 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 23 23:05:06.271419 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 23 23:05:06.273594 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 23 23:05:06.275200 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 23 23:05:06.276110 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 23 23:05:06.276148 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 23 23:05:06.276169 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 23 23:05:06.289609 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 23 23:05:06.293882 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (838)
Nov 23 23:05:06.291908 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 23 23:05:06.297099 kernel: BTRFS info (device vda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3
Nov 23 23:05:06.297118 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 23 23:05:06.297127 kernel: BTRFS info (device vda6): turning on async discard
Nov 23 23:05:06.298273 kernel: BTRFS info (device vda6): enabling free space tree
Nov 23 23:05:06.299655 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 23 23:05:06.331488 initrd-setup-root[862]: cut: /sysroot/etc/passwd: No such file or directory
Nov 23 23:05:06.335867 initrd-setup-root[869]: cut: /sysroot/etc/group: No such file or directory
Nov 23 23:05:06.339652 initrd-setup-root[876]: cut: /sysroot/etc/shadow: No such file or directory
Nov 23 23:05:06.342529 initrd-setup-root[883]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 23 23:05:06.411455 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 23 23:05:06.413472 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 23 23:05:06.416613 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 23 23:05:06.434787 kernel: BTRFS info (device vda6): last unmount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3
Nov 23 23:05:06.449891 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 23 23:05:06.466033 ignition[952]: INFO : Ignition 2.22.0
Nov 23 23:05:06.466033 ignition[952]: INFO : Stage: mount
Nov 23 23:05:06.467547 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 23 23:05:06.467547 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 23 23:05:06.467547 ignition[952]: INFO : mount: mount passed
Nov 23 23:05:06.467547 ignition[952]: INFO : Ignition finished successfully
Nov 23 23:05:06.469264 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 23 23:05:06.471296 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 23 23:05:06.797962 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 23 23:05:06.799538 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 23 23:05:06.816778 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (965)
Nov 23 23:05:06.818771 kernel: BTRFS info (device vda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3
Nov 23 23:05:06.818791 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 23 23:05:06.821244 kernel: BTRFS info (device vda6): turning on async discard
Nov 23 23:05:06.821265 kernel: BTRFS info (device vda6): enabling free space tree
Nov 23 23:05:06.822666 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 23 23:05:06.852839 ignition[982]: INFO : Ignition 2.22.0
Nov 23 23:05:06.852839 ignition[982]: INFO : Stage: files
Nov 23 23:05:06.854438 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 23 23:05:06.854438 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 23 23:05:06.854438 ignition[982]: DEBUG : files: compiled without relabeling support, skipping
Nov 23 23:05:06.857694 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 23 23:05:06.857694 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 23 23:05:06.860633 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 23 23:05:06.862451 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 23 23:05:06.862451 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 23 23:05:06.861480 unknown[982]: wrote ssh authorized keys file for user: core
Nov 23 23:05:06.866371 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 23 23:05:06.866371 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Nov 23 23:05:06.939071 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 23 23:05:07.052985 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 23 23:05:07.054951 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 23 23:05:07.054951 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 23 23:05:07.054951 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 23 23:05:07.054951 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 23 23:05:07.054951 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 23 23:05:07.054951 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 23 23:05:07.054951 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 23 23:05:07.054951 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 23 23:05:07.122697 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 23 23:05:07.124707 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 23 23:05:07.124707 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 23 23:05:07.132889 systemd-networkd[805]: eth0: Gained IPv6LL
Nov 23 23:05:07.137464 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 23 23:05:07.137464 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 23 23:05:07.141991 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Nov 23 23:05:07.496421 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 23 23:05:07.789625 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 23 23:05:07.789625 ignition[982]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 23 23:05:07.793607 ignition[982]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 23 23:05:07.793607 ignition[982]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 23 23:05:07.793607 ignition[982]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 23 23:05:07.793607 ignition[982]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 23 23:05:07.793607 ignition[982]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 23 23:05:07.793607 ignition[982]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 23 23:05:07.793607 ignition[982]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 23 23:05:07.793607 ignition[982]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 23 23:05:07.816095 ignition[982]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 23 23:05:07.820374 ignition[982]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 23 23:05:07.823276 ignition[982]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 23 23:05:07.823276 ignition[982]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 23 23:05:07.823276 ignition[982]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 23 23:05:07.823276 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 23 23:05:07.823276 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 23 23:05:07.823276 ignition[982]: INFO : files: files passed
Nov 23 23:05:07.823276 ignition[982]: INFO : Ignition finished successfully
Nov 23 23:05:07.823560 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 23 23:05:07.826823 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 23 23:05:07.828510 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 23 23:05:07.845281 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 23 23:05:07.845394 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 23 23:05:07.847709 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 23 23:05:07.849902 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 23 23:05:07.849902 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 23 23:05:07.852588 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 23 23:05:07.852817 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 23 23:05:07.855222 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 23 23:05:07.857639 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 23 23:05:07.895262 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 23 23:05:07.895369 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 23 23:05:07.897407 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 23 23:05:07.899028 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 23 23:05:07.900619 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 23 23:05:07.901419 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 23 23:05:07.939632 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 23 23:05:07.942128 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 23 23:05:07.967039 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 23 23:05:07.968270 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 23:05:07.970178 systemd[1]: Stopped target timers.target - Timer Units. Nov 23 23:05:07.971794 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 23 23:05:07.971925 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 23 23:05:07.974315 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 23 23:05:07.976261 systemd[1]: Stopped target basic.target - Basic System. Nov 23 23:05:07.977957 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 23 23:05:07.979672 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 23:05:07.981825 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 23 23:05:07.983702 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 23 23:05:07.985545 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 23 23:05:07.987221 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 23:05:07.988881 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 23 23:05:07.990770 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 23 23:05:07.992362 systemd[1]: Stopped target swap.target - Swaps. Nov 23 23:05:07.993682 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 23 23:05:07.993832 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 23 23:05:07.995954 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 23 23:05:07.997694 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 23:05:07.999447 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 23 23:05:07.999558 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 23:05:08.001376 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 23 23:05:08.001497 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 23 23:05:08.003913 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 23 23:05:08.004030 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 23 23:05:08.005768 systemd[1]: Stopped target paths.target - Path Units. Nov 23 23:05:08.007191 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 23 23:05:08.007825 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 23:05:08.008917 systemd[1]: Stopped target slices.target - Slice Units. Nov 23 23:05:08.010469 systemd[1]: Stopped target sockets.target - Socket Units. Nov 23 23:05:08.011831 systemd[1]: iscsid.socket: Deactivated successfully. Nov 23 23:05:08.011912 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 23 23:05:08.013582 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 23 23:05:08.013660 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 23 23:05:08.015778 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Nov 23 23:05:08.015897 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 23 23:05:08.017601 systemd[1]: ignition-files.service: Deactivated successfully. Nov 23 23:05:08.017699 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 23 23:05:08.020016 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 23 23:05:08.021968 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 23 23:05:08.023534 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 23 23:05:08.023646 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 23:05:08.025490 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 23 23:05:08.025588 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 23:05:08.031675 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 23 23:05:08.031880 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 23 23:05:08.040321 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 23 23:05:08.051089 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 23 23:05:08.051237 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 23 23:05:08.054443 ignition[1037]: INFO : Ignition 2.22.0 Nov 23 23:05:08.054443 ignition[1037]: INFO : Stage: umount Nov 23 23:05:08.054443 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 23:05:08.054443 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 23 23:05:08.054443 ignition[1037]: INFO : umount: umount passed Nov 23 23:05:08.054443 ignition[1037]: INFO : Ignition finished successfully Nov 23 23:05:08.057047 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 23 23:05:08.057189 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 23 23:05:08.058941 systemd[1]: Stopped target network.target - Network. Nov 23 23:05:08.060252 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 23 23:05:08.060316 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 23 23:05:08.061834 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 23 23:05:08.061879 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 23 23:05:08.063474 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 23 23:05:08.063524 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 23 23:05:08.065068 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 23 23:05:08.065110 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 23 23:05:08.066626 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 23 23:05:08.066676 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 23 23:05:08.068428 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 23 23:05:08.069958 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 23 23:05:08.079173 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 23 23:05:08.079299 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 23 23:05:08.082368 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 23 23:05:08.082611 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 23 23:05:08.082646 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 23 23:05:08.085864 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 23 23:05:08.092781 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 23 23:05:08.092895 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 23 23:05:08.098825 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 23 23:05:08.098974 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 23 23:05:08.100780 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 23 23:05:08.100811 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 23 23:05:08.103308 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 23 23:05:08.104118 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 23 23:05:08.104171 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 23 23:05:08.106202 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 23 23:05:08.106247 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 23 23:05:08.108701 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 23 23:05:08.108742 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 23 23:05:08.110732 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 23:05:08.114775 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 23 23:05:08.124438 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 23 23:05:08.124601 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 23:05:08.126827 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 23 23:05:08.126863 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 23 23:05:08.127842 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 23 23:05:08.127871 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 23:05:08.129668 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 23 23:05:08.129718 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 23 23:05:08.132575 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 23 23:05:08.132623 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 23 23:05:08.135259 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 23 23:05:08.135312 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 23 23:05:08.138075 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 23 23:05:08.139179 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 23 23:05:08.139250 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 23:05:08.141721 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 23 23:05:08.141788 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 23:05:08.144871 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 23 23:05:08.144983 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Nov 23 23:05:08.147717 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 23 23:05:08.147772 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 23:05:08.149996 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 23:05:08.150045 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:05:08.154715 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 23 23:05:08.154820 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 23 23:05:08.156588 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 23 23:05:08.156679 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 23 23:05:08.158847 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 23 23:05:08.160824 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 23 23:05:08.190927 systemd[1]: Switching root. Nov 23 23:05:08.221436 systemd-journald[245]: Journal stopped Nov 23 23:05:09.145613 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). Nov 23 23:05:09.145687 kernel: SELinux: policy capability network_peer_controls=1 Nov 23 23:05:09.145700 kernel: SELinux: policy capability open_perms=1 Nov 23 23:05:09.145710 kernel: SELinux: policy capability extended_socket_class=1 Nov 23 23:05:09.145724 kernel: SELinux: policy capability always_check_network=0 Nov 23 23:05:09.145735 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 23:05:09.145745 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 23:05:09.145771 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 23 23:05:09.145782 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 23 23:05:09.145794 kernel: SELinux: policy capability userspace_initial_context=0 Nov 23 23:05:09.145811 kernel: audit: type=1403 audit(1763939108.461:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 23 23:05:09.145822 systemd[1]: Successfully loaded SELinux policy in 59.979ms. Nov 23 23:05:09.145840 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.731ms. Nov 23 23:05:09.145851 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 23 23:05:09.145877 systemd[1]: Detected virtualization kvm. Nov 23 23:05:09.145890 systemd[1]: Detected architecture arm64. Nov 23 23:05:09.145901 systemd[1]: Detected first boot. Nov 23 23:05:09.145914 systemd[1]: Initializing machine ID from VM UUID. Nov 23 23:05:09.145992 zram_generator::config[1086]: No configuration found. Nov 23 23:05:09.146017 kernel: NET: Registered PF_VSOCK protocol family Nov 23 23:05:09.146028 systemd[1]: Populated /etc with preset unit settings. Nov 23 23:05:09.146044 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 23 23:05:09.146055 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 23 23:05:09.146066 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 23 23:05:09.146076 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 23 23:05:09.146087 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
Nov 23 23:05:09.146100 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 23 23:05:09.146111 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 23 23:05:09.146121 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 23 23:05:09.146133 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 23 23:05:09.146144 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 23 23:05:09.146154 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 23 23:05:09.146164 systemd[1]: Created slice user.slice - User and Session Slice. Nov 23 23:05:09.146176 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 23:05:09.146201 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 23:05:09.146212 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 23 23:05:09.146227 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 23 23:05:09.146237 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 23 23:05:09.146248 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 23 23:05:09.146262 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 23 23:05:09.146273 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 23:05:09.146284 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 23 23:05:09.146295 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 23 23:05:09.146306 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 23 23:05:09.146317 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 23 23:05:09.146328 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 23 23:05:09.146339 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 23:05:09.146350 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 23:05:09.146361 systemd[1]: Reached target slices.target - Slice Units. Nov 23 23:05:09.146372 systemd[1]: Reached target swap.target - Swaps. Nov 23 23:05:09.146383 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 23 23:05:09.146396 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 23 23:05:09.146407 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 23 23:05:09.146463 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 23 23:05:09.146500 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 23 23:05:09.146511 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 23:05:09.146523 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 23 23:05:09.146534 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 23 23:05:09.146547 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 23 23:05:09.146558 systemd[1]: Mounting media.mount - External Media Directory... 
Nov 23 23:05:09.146572 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 23 23:05:09.146583 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 23 23:05:09.146594 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 23 23:05:09.146605 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 23 23:05:09.146616 systemd[1]: Reached target machines.target - Containers. Nov 23 23:05:09.146627 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 23 23:05:09.146639 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:05:09.146650 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 23 23:05:09.146661 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 23 23:05:09.146673 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 23:05:09.146684 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 23:05:09.146695 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 23:05:09.146705 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 23 23:05:09.146716 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 23:05:09.146728 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 23 23:05:09.146738 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 23 23:05:09.146773 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 23 23:05:09.146787 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 23 23:05:09.146798 systemd[1]: Stopped systemd-fsck-usr.service. Nov 23 23:05:09.146809 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:05:09.146820 kernel: fuse: init (API version 7.41) Nov 23 23:05:09.146830 kernel: loop: module loaded Nov 23 23:05:09.146904 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 23 23:05:09.146919 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 23 23:05:09.146930 kernel: ACPI: bus type drm_connector registered Nov 23 23:05:09.146940 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 23 23:05:09.146954 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 23 23:05:09.146965 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 23 23:05:09.146976 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 23 23:05:09.146987 systemd[1]: verity-setup.service: Deactivated successfully. Nov 23 23:05:09.146997 systemd[1]: Stopped verity-setup.service. Nov 23 23:05:09.147010 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 23 23:05:09.147021 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Nov 23 23:05:09.147032 systemd[1]: Mounted media.mount - External Media Directory. Nov 23 23:05:09.147043 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 23 23:05:09.147054 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 23 23:05:09.147065 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 23 23:05:09.147075 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 23:05:09.147088 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 23 23:05:09.147099 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 23 23:05:09.147110 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 23:05:09.147121 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 23:05:09.147132 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 23:05:09.147143 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 23:05:09.147153 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 23:05:09.147165 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 23:05:09.147182 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 23 23:05:09.147199 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 23 23:05:09.147212 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 23:05:09.147223 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 23:05:09.147234 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 23 23:05:09.147308 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 23:05:09.147325 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 23 23:05:09.147336 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 23 23:05:09.147350 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 23 23:05:09.147395 systemd-journald[1152]: Collecting audit messages is disabled. Nov 23 23:05:09.147420 systemd-journald[1152]: Journal started Nov 23 23:05:09.147443 systemd-journald[1152]: Runtime Journal (/run/log/journal/dae6ca0627e74d30bc03e122a9868373) is 6M, max 48.5M, 42.4M free. Nov 23 23:05:08.857353 systemd[1]: Queued start job for default target multi-user.target. Nov 23 23:05:08.880297 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 23 23:05:08.880689 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 23 23:05:09.151013 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 23 23:05:09.152785 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 23 23:05:09.152824 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 23 23:05:09.156792 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 23 23:05:09.161863 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 23 23:05:09.163812 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:05:09.170976 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Nov 23 23:05:09.173794 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 23:05:09.176808 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 23 23:05:09.181824 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 23:05:09.183791 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 23 23:05:09.189492 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 23 23:05:09.205133 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 23 23:05:09.205214 systemd[1]: Started systemd-journald.service - Journal Service. Nov 23 23:05:09.209156 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 23 23:05:09.211745 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 23 23:05:09.214709 kernel: loop0: detected capacity change from 0 to 207008 Nov 23 23:05:09.213454 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 23:05:09.215256 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 23 23:05:09.217007 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 23 23:05:09.222710 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 23 23:05:09.230303 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 23 23:05:09.232784 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 23 23:05:09.244040 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Nov 23 23:05:09.244058 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Nov 23 23:05:09.245070 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 23 23:05:09.248075 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 23 23:05:09.254786 kernel: loop1: detected capacity change from 0 to 100632 Nov 23 23:05:09.252541 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 23 23:05:09.265543 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 23 23:05:09.270146 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 23 23:05:09.274356 systemd-journald[1152]: Time spent on flushing to /var/log/journal/dae6ca0627e74d30bc03e122a9868373 is 12.131ms for 899 entries. Nov 23 23:05:09.274356 systemd-journald[1152]: System Journal (/var/log/journal/dae6ca0627e74d30bc03e122a9868373) is 8M, max 195.6M, 187.6M free. Nov 23 23:05:09.293558 systemd-journald[1152]: Received client request to flush runtime journal. Nov 23 23:05:09.293598 kernel: loop2: detected capacity change from 0 to 119840 Nov 23 23:05:09.279738 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 23 23:05:09.295280 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 23 23:05:09.317846 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 23 23:05:09.320857 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Nov 23 23:05:09.326896 kernel: loop3: detected capacity change from 0 to 207008 Nov 23 23:05:09.336776 kernel: loop4: detected capacity change from 0 to 100632 Nov 23 23:05:09.347783 kernel: loop5: detected capacity change from 0 to 119840 Nov 23 23:05:09.358382 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 23 23:05:09.358462 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Nov 23 23:05:09.358473 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Nov 23 23:05:09.358810 (sd-merge)[1226]: Merged extensions into '/usr'. Nov 23 23:05:09.362823 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 23:05:09.367429 systemd[1]: Reload requested from client PID 1181 ('systemd-sysext') (unit systemd-sysext.service)... Nov 23 23:05:09.367454 systemd[1]: Reloading... Nov 23 23:05:09.431845 zram_generator::config[1253]: No configuration found. Nov 23 23:05:09.540576 ldconfig[1177]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 23 23:05:09.598471 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 23 23:05:09.598892 systemd[1]: Reloading finished in 231 ms. Nov 23 23:05:09.628861 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 23 23:05:09.630469 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 23 23:05:09.648073 systemd[1]: Starting ensure-sysext.service... Nov 23 23:05:09.650144 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 23 23:05:09.655910 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 23 23:05:09.659448 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 23:05:09.662932 systemd[1]: Reload requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)... Nov 23 23:05:09.662950 systemd[1]: Reloading... Nov 23 23:05:09.666974 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 23 23:05:09.667481 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 23 23:05:09.667861 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 23 23:05:09.668156 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 23 23:05:09.668867 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 23 23:05:09.669156 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Nov 23 23:05:09.669289 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Nov 23 23:05:09.672580 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. Nov 23 23:05:09.672690 systemd-tmpfiles[1289]: Skipping /boot Nov 23 23:05:09.679076 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. Nov 23 23:05:09.679236 systemd-tmpfiles[1289]: Skipping /boot Nov 23 23:05:09.706674 systemd-udevd[1292]: Using default interface naming scheme 'v255'. Nov 23 23:05:09.714775 zram_generator::config[1320]: No configuration found. Nov 23 23:05:09.900403 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
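
The (sd-merge) lines above record systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr, which is what triggers the subsequent daemon reload and ldconfig run. An image is only merged if it ships an extension-release file whose fields match the host OS; for the kubernetes image downloaded earlier, that file would look roughly like the sketch below (illustrative only; the real file inside kubernetes-v1.32.4-arm64.raw is not shown in the log):

    # usr/lib/extension-release.d/extension-release.kubernetes (inside the .raw image)
    ID=flatcar
    SYSEXT_LEVEL=1.0
    ARCHITECTURE=arm64
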
Nov 23 23:05:09.901856 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 23 23:05:09.902307 systemd[1]: Reloading finished in 239 ms. Nov 23 23:05:09.918266 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 23:05:09.924618 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 23:05:09.957537 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 23:05:09.960282 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 23 23:05:09.961676 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:05:09.973731 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 23:05:09.976250 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 23:05:09.980380 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 23:05:09.982894 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 23:05:09.984157 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:05:09.985400 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 23 23:05:09.986559 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:05:09.989426 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 23 23:05:09.992396 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 23 23:05:09.997746 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 23 23:05:10.002667 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 23 23:05:10.006107 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:05:10.009258 systemd[1]: Finished ensure-sysext.service. Nov 23 23:05:10.010523 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 23:05:10.010740 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 23:05:10.012603 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 23:05:10.012780 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 23:05:10.014644 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 23:05:10.014844 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 23:05:10.018623 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 23:05:10.018986 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 23:05:10.020539 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 23 23:05:10.028552 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 23 23:05:10.031944 augenrules[1440]: No rules Nov 23 23:05:10.032893 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 23:05:10.033107 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Nov 23 23:05:10.036389 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 23:05:10.036554 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 23:05:10.038737 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 23 23:05:10.041070 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 23 23:05:10.044934 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 23 23:05:10.051990 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 23 23:05:10.053706 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 23 23:05:10.057591 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 23 23:05:10.061903 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 23 23:05:10.063333 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:05:10.085780 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 23 23:05:10.139614 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 23 23:05:10.139640 systemd-networkd[1417]: lo: Link UP Nov 23 23:05:10.139644 systemd-networkd[1417]: lo: Gained carrier Nov 23 23:05:10.140958 systemd[1]: Reached target time-set.target - System Time Set. Nov 23 23:05:10.141079 systemd-networkd[1417]: Enumeration completed Nov 23 23:05:10.141511 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:05:10.141515 systemd-networkd[1417]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 23:05:10.142027 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 23 23:05:10.142131 systemd-resolved[1418]: Positive Trust Anchors: Nov 23 23:05:10.142147 systemd-resolved[1418]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 23:05:10.142187 systemd-resolved[1418]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 23:05:10.142668 systemd-networkd[1417]: eth0: Link UP Nov 23 23:05:10.142907 systemd-networkd[1417]: eth0: Gained carrier Nov 23 23:05:10.142981 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:05:10.144626 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 23 23:05:10.146936 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
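
eth0 is matched by /usr/lib/systemd/network/zz-default.network, the catch-all unit shipped in /usr that runs DHCP on any interface not claimed by a more specific .network file (the DHCPv4 lease for 10.0.0.48/16 shows up a few lines further on). The unit itself is not reproduced in the log; in essence it amounts to the sketch below (the shipped file also sets DHCP and link options not shown here):

    [Match]
    Name=*

    [Network]
    DHCP=yes
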
Nov 23 23:05:10.148804 systemd-resolved[1418]: Defaulting to hostname 'linux'. Nov 23 23:05:10.150104 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 23 23:05:10.151165 systemd[1]: Reached target network.target - Network. Nov 23 23:05:10.152243 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 23 23:05:10.153347 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 23:05:10.154368 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 23 23:05:10.155608 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 23 23:05:10.157976 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 23 23:05:10.159109 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 23 23:05:10.160624 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 23 23:05:10.161930 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 23 23:05:10.161966 systemd[1]: Reached target paths.target - Path Units. Nov 23 23:05:10.162779 systemd[1]: Reached target timers.target - Timer Units. Nov 23 23:05:10.162825 systemd-networkd[1417]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 23 23:05:10.163619 systemd-timesyncd[1447]: Network configuration changed, trying to establish connection. Nov 23 23:05:10.164933 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 23 23:05:10.165035 systemd-timesyncd[1447]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 23 23:05:10.165084 systemd-timesyncd[1447]: Initial clock synchronization to Sun 2025-11-23 23:05:10.261572 UTC. Nov 23 23:05:10.167247 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 23 23:05:10.169765 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 23 23:05:10.171090 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 23 23:05:10.172327 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 23 23:05:10.175388 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 23 23:05:10.176949 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 23 23:05:10.179801 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 23 23:05:10.182019 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 23 23:05:10.183628 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 23:05:10.184658 systemd[1]: Reached target basic.target - Basic System. Nov 23 23:05:10.185701 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 23 23:05:10.185737 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 23 23:05:10.186848 systemd[1]: Starting containerd.service - containerd container runtime... Nov 23 23:05:10.188881 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 23 23:05:10.190650 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Nov 23 23:05:10.199581 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 23 23:05:10.201855 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 23 23:05:10.202823 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 23 23:05:10.203892 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 23 23:05:10.207866 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 23 23:05:10.208351 jq[1472]: false Nov 23 23:05:10.211084 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 23 23:05:10.213107 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 23 23:05:10.214691 extend-filesystems[1473]: Found /dev/vda6 Nov 23 23:05:10.216399 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 23 23:05:10.219373 extend-filesystems[1473]: Found /dev/vda9 Nov 23 23:05:10.219714 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 23 23:05:10.220155 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 23 23:05:10.220828 extend-filesystems[1473]: Checking size of /dev/vda9 Nov 23 23:05:10.220857 systemd[1]: Starting update-engine.service - Update Engine... Nov 23 23:05:10.223747 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 23 23:05:10.231305 jq[1491]: true Nov 23 23:05:10.232852 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 23 23:05:10.235156 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 23 23:05:10.235370 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 23 23:05:10.235632 systemd[1]: motdgen.service: Deactivated successfully. Nov 23 23:05:10.235846 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 23 23:05:10.245229 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 23 23:05:10.245437 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 23 23:05:10.247405 update_engine[1489]: I20251123 23:05:10.247093 1489 main.cc:92] Flatcar Update Engine starting Nov 23 23:05:10.248153 extend-filesystems[1473]: Resized partition /dev/vda9 Nov 23 23:05:10.251634 extend-filesystems[1502]: resize2fs 1.47.3 (8-Jul-2025) Nov 23 23:05:10.258792 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 23 23:05:10.265140 (ntainerd)[1501]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 23 23:05:10.276715 jq[1500]: true Nov 23 23:05:10.291405 tar[1499]: linux-arm64/LICENSE Nov 23 23:05:10.291405 tar[1499]: linux-arm64/helm Nov 23 23:05:10.292164 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 23 23:05:10.307887 extend-filesystems[1502]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 23 23:05:10.307887 extend-filesystems[1502]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 23 23:05:10.307887 extend-filesystems[1502]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Nov 23 23:05:10.314019 extend-filesystems[1473]: Resized filesystem in /dev/vda9 Nov 23 23:05:10.312115 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 23 23:05:10.312354 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 23 23:05:10.318190 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (Power Button) Nov 23 23:05:10.319833 systemd-logind[1486]: New seat seat0. Nov 23 23:05:10.321715 systemd[1]: Started systemd-logind.service - User Login Management. Nov 23 23:05:10.324038 dbus-daemon[1470]: [system] SELinux support is enabled Nov 23 23:05:10.324214 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 23 23:05:10.328185 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 23 23:05:10.328216 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 23 23:05:10.329061 dbus-daemon[1470]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 23 23:05:10.329617 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 23 23:05:10.329634 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 23 23:05:10.332353 systemd[1]: Started update-engine.service - Update Engine. Nov 23 23:05:10.335021 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 23 23:05:10.336332 update_engine[1489]: I20251123 23:05:10.336254 1489 update_check_scheduler.cc:74] Next update check in 8m47s Nov 23 23:05:10.345787 bash[1530]: Updated "/home/core/.ssh/authorized_keys" Nov 23 23:05:10.350318 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 23 23:05:10.352656 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
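
For scale, the extend-filesystems/resize2fs output above corresponds to the root filesystem on /dev/vda9 growing online from 553472 to 1864699 blocks of 4 KiB, i.e. 553472 × 4096 ≈ 2.1 GiB up to 1864699 × 4096 ≈ 7.1 GiB. This is the usual Flatcar first-boot step of expanding the small image-default root partition to fill the virtual disk and then resizing the mounted ext4 filesystem in place.
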
Nov 23 23:05:10.386893 locksmithd[1531]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 23 23:05:10.455696 containerd[1501]: time="2025-11-23T23:05:10Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 23 23:05:10.456202 containerd[1501]: time="2025-11-23T23:05:10.456128640Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 23 23:05:10.469198 containerd[1501]: time="2025-11-23T23:05:10.469135280Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.52µs" Nov 23 23:05:10.469198 containerd[1501]: time="2025-11-23T23:05:10.469181600Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 23 23:05:10.469198 containerd[1501]: time="2025-11-23T23:05:10.469201320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 23 23:05:10.469578 containerd[1501]: time="2025-11-23T23:05:10.469363000Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 23 23:05:10.469578 containerd[1501]: time="2025-11-23T23:05:10.469384000Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 23 23:05:10.469578 containerd[1501]: time="2025-11-23T23:05:10.469408400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 23:05:10.469578 containerd[1501]: time="2025-11-23T23:05:10.469461640Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 23:05:10.469578 containerd[1501]: time="2025-11-23T23:05:10.469474000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 23:05:10.469712 containerd[1501]: time="2025-11-23T23:05:10.469687440Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 23:05:10.469712 containerd[1501]: time="2025-11-23T23:05:10.469709840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 23:05:10.469770 containerd[1501]: time="2025-11-23T23:05:10.469721040Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 23:05:10.469770 containerd[1501]: time="2025-11-23T23:05:10.469729360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 23 23:05:10.469847 containerd[1501]: time="2025-11-23T23:05:10.469823680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 23 23:05:10.470057 containerd[1501]: time="2025-11-23T23:05:10.470021680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 23:05:10.470100 containerd[1501]: time="2025-11-23T23:05:10.470056880Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 23:05:10.470100 containerd[1501]: time="2025-11-23T23:05:10.470066560Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 23 23:05:10.470201 containerd[1501]: time="2025-11-23T23:05:10.470116840Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 23 23:05:10.470448 containerd[1501]: time="2025-11-23T23:05:10.470417560Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 23 23:05:10.470795 containerd[1501]: time="2025-11-23T23:05:10.470712880Z" level=info msg="metadata content store policy set" policy=shared Nov 23 23:05:10.564561 containerd[1501]: time="2025-11-23T23:05:10.564513920Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 23 23:05:10.566534 containerd[1501]: time="2025-11-23T23:05:10.564707640Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 23 23:05:10.566534 containerd[1501]: time="2025-11-23T23:05:10.564745560Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 23 23:05:10.566534 containerd[1501]: time="2025-11-23T23:05:10.564788800Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 23 23:05:10.566534 containerd[1501]: time="2025-11-23T23:05:10.564804040Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 23 23:05:10.566534 containerd[1501]: time="2025-11-23T23:05:10.564816960Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 23 23:05:10.566534 containerd[1501]: time="2025-11-23T23:05:10.564830640Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 23 23:05:10.566534 containerd[1501]: time="2025-11-23T23:05:10.564843120Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 23 23:05:10.566534 containerd[1501]: time="2025-11-23T23:05:10.564855600Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 23 23:05:10.566534 containerd[1501]: time="2025-11-23T23:05:10.564865960Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 23 23:05:10.566534 containerd[1501]: time="2025-11-23T23:05:10.564875120Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 23 23:05:10.566534 containerd[1501]: time="2025-11-23T23:05:10.564887520Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 23 23:05:10.566534 containerd[1501]: time="2025-11-23T23:05:10.565040720Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 23 23:05:10.566534 containerd[1501]: time="2025-11-23T23:05:10.565062120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 23 23:05:10.566534 containerd[1501]: time="2025-11-23T23:05:10.565077200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 23 
23:05:10.566867 containerd[1501]: time="2025-11-23T23:05:10.565089280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 23 23:05:10.566867 containerd[1501]: time="2025-11-23T23:05:10.565100960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 23 23:05:10.566867 containerd[1501]: time="2025-11-23T23:05:10.565111480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 23 23:05:10.566867 containerd[1501]: time="2025-11-23T23:05:10.565122960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 23 23:05:10.566867 containerd[1501]: time="2025-11-23T23:05:10.565135360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 23 23:05:10.566867 containerd[1501]: time="2025-11-23T23:05:10.565147240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 23 23:05:10.566867 containerd[1501]: time="2025-11-23T23:05:10.565157760Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 23 23:05:10.566867 containerd[1501]: time="2025-11-23T23:05:10.565168120Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 23 23:05:10.566867 containerd[1501]: time="2025-11-23T23:05:10.565416280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 23 23:05:10.566867 containerd[1501]: time="2025-11-23T23:05:10.565432720Z" level=info msg="Start snapshots syncer" Nov 23 23:05:10.566867 containerd[1501]: time="2025-11-23T23:05:10.565462880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 23 23:05:10.567054 containerd[1501]: time="2025-11-23T23:05:10.565717320Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 23 23:05:10.567054 containerd[1501]: time="2025-11-23T23:05:10.565791160Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 23 23:05:10.567165 containerd[1501]: time="2025-11-23T23:05:10.565950960Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 23 23:05:10.567165 containerd[1501]: time="2025-11-23T23:05:10.566145400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 23 23:05:10.567165 containerd[1501]: time="2025-11-23T23:05:10.566216000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 23 23:05:10.567165 containerd[1501]: time="2025-11-23T23:05:10.566237920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 23 23:05:10.567165 containerd[1501]: time="2025-11-23T23:05:10.566251560Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 23 23:05:10.567165 containerd[1501]: time="2025-11-23T23:05:10.566271680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 23 23:05:10.567165 containerd[1501]: time="2025-11-23T23:05:10.566289120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 23 23:05:10.567165 containerd[1501]: time="2025-11-23T23:05:10.566304360Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 23 23:05:10.567165 containerd[1501]: time="2025-11-23T23:05:10.566351200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 23 23:05:10.567165 containerd[1501]: 
time="2025-11-23T23:05:10.566369720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 23 23:05:10.567165 containerd[1501]: time="2025-11-23T23:05:10.566384880Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 23 23:05:10.567165 containerd[1501]: time="2025-11-23T23:05:10.566434160Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 23:05:10.567165 containerd[1501]: time="2025-11-23T23:05:10.566453840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 23:05:10.567165 containerd[1501]: time="2025-11-23T23:05:10.566467800Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 23:05:10.567436 containerd[1501]: time="2025-11-23T23:05:10.566479920Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 23:05:10.567436 containerd[1501]: time="2025-11-23T23:05:10.566526840Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 23 23:05:10.567436 containerd[1501]: time="2025-11-23T23:05:10.566584360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 23 23:05:10.567797 containerd[1501]: time="2025-11-23T23:05:10.566606520Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 23 23:05:10.567903 containerd[1501]: time="2025-11-23T23:05:10.567882800Z" level=info msg="runtime interface created" Nov 23 23:05:10.567903 containerd[1501]: time="2025-11-23T23:05:10.567895080Z" level=info msg="created NRI interface" Nov 23 23:05:10.567951 containerd[1501]: time="2025-11-23T23:05:10.567910200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 23 23:05:10.567951 containerd[1501]: time="2025-11-23T23:05:10.567929560Z" level=info msg="Connect containerd service" Nov 23 23:05:10.568001 containerd[1501]: time="2025-11-23T23:05:10.567984160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 23 23:05:10.570473 containerd[1501]: time="2025-11-23T23:05:10.570438040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 23:05:10.599790 tar[1499]: linux-arm64/README.md Nov 23 23:05:10.618441 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 23 23:05:10.649598 containerd[1501]: time="2025-11-23T23:05:10.649511360Z" level=info msg="Start subscribing containerd event" Nov 23 23:05:10.649598 containerd[1501]: time="2025-11-23T23:05:10.649590920Z" level=info msg="Start recovering state" Nov 23 23:05:10.649718 containerd[1501]: time="2025-11-23T23:05:10.649680520Z" level=info msg="Start event monitor" Nov 23 23:05:10.649718 containerd[1501]: time="2025-11-23T23:05:10.649692920Z" level=info msg="Start cni network conf syncer for default" Nov 23 23:05:10.649718 containerd[1501]: time="2025-11-23T23:05:10.649700200Z" level=info msg="Start streaming server" Nov 23 23:05:10.649718 containerd[1501]: time="2025-11-23T23:05:10.649708040Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 23 23:05:10.649718 containerd[1501]: time="2025-11-23T23:05:10.649714720Z" level=info msg="runtime interface starting up..." Nov 23 23:05:10.649718 containerd[1501]: time="2025-11-23T23:05:10.649720000Z" level=info msg="starting plugins..." Nov 23 23:05:10.649860 containerd[1501]: time="2025-11-23T23:05:10.649731840Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 23 23:05:10.650223 containerd[1501]: time="2025-11-23T23:05:10.650166720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 23 23:05:10.650252 containerd[1501]: time="2025-11-23T23:05:10.650237920Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 23 23:05:10.650351 containerd[1501]: time="2025-11-23T23:05:10.650338800Z" level=info msg="containerd successfully booted in 0.195044s" Nov 23 23:05:10.652893 systemd[1]: Started containerd.service - containerd container runtime. Nov 23 23:05:11.317696 sshd_keygen[1497]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 23 23:05:11.337142 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 23 23:05:11.339898 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 23 23:05:11.363447 systemd[1]: issuegen.service: Deactivated successfully. Nov 23 23:05:11.364812 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 23 23:05:11.367328 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 23 23:05:11.396169 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 23 23:05:11.398905 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 23 23:05:11.400978 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 23 23:05:11.402376 systemd[1]: Reached target getty.target - Login Prompts. Nov 23 23:05:11.869318 systemd-networkd[1417]: eth0: Gained IPv6LL Nov 23 23:05:11.871654 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 23 23:05:11.873538 systemd[1]: Reached target network-online.target - Network is Online. Nov 23 23:05:11.876215 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 23 23:05:11.880285 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:05:11.898846 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 23 23:05:11.916520 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 23 23:05:11.916751 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 23 23:05:11.918991 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 23 23:05:11.922467 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Nov 23 23:05:12.530016 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:05:12.531654 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 23 23:05:12.534794 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:05:12.536972 systemd[1]: Startup finished in 2.147s (kernel) + 4.846s (initrd) + 4.136s (userspace) = 11.131s. Nov 23 23:05:12.918834 kubelet[1601]: E1123 23:05:12.918669 1601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:05:12.920936 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:05:12.921074 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:05:12.921435 systemd[1]: kubelet.service: Consumed 759ms CPU time, 255.5M memory peak. Nov 23 23:05:16.759278 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 23 23:05:16.760368 systemd[1]: Started sshd@0-10.0.0.48:22-10.0.0.1:52114.service - OpenSSH per-connection server daemon (10.0.0.1:52114). Nov 23 23:05:16.844765 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 52114 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:05:16.847060 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:05:16.853570 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 23 23:05:16.854583 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 23 23:05:16.860891 systemd-logind[1486]: New session 1 of user core. Nov 23 23:05:16.876800 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 23 23:05:16.880127 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 23 23:05:16.899852 (systemd)[1619]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 23 23:05:16.902979 systemd-logind[1486]: New session c1 of user core. Nov 23 23:05:17.018144 systemd[1619]: Queued start job for default target default.target. Nov 23 23:05:17.039870 systemd[1619]: Created slice app.slice - User Application Slice. Nov 23 23:05:17.039914 systemd[1619]: Reached target paths.target - Paths. Nov 23 23:05:17.039957 systemd[1619]: Reached target timers.target - Timers. Nov 23 23:05:17.041212 systemd[1619]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 23 23:05:17.052325 systemd[1619]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 23 23:05:17.052397 systemd[1619]: Reached target sockets.target - Sockets. Nov 23 23:05:17.052442 systemd[1619]: Reached target basic.target - Basic System. Nov 23 23:05:17.052472 systemd[1619]: Reached target default.target - Main User Target. Nov 23 23:05:17.052498 systemd[1619]: Startup finished in 142ms. Nov 23 23:05:17.052536 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 23 23:05:17.054690 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 23 23:05:17.127289 systemd[1]: Started sshd@1-10.0.0.48:22-10.0.0.1:52120.service - OpenSSH per-connection server daemon (10.0.0.1:52120). 
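The kubelet failure above (and its repeats further down) is the expected behaviour on a node that has not yet been joined to a cluster: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join, and until that happens systemd simply keeps restarting the unit. For orientation only, the file the kubelet is looking for is a KubeletConfiguration object roughly like the sketch below; every value shown is an illustrative default rather than anything read from this node:

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd            # consistent with SystemdCgroup=true in the CRI runtime config above
  staticPodPath: /etc/kubernetes/manifests
  clusterDomain: cluster.local
  clusterDNS:
    - 10.96.0.10                   # assumed default cluster DNS address, not taken from this log
  rotateCertificates: true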
Nov 23 23:05:17.186728 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 52120 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:05:17.188108 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:05:17.192242 systemd-logind[1486]: New session 2 of user core. Nov 23 23:05:17.207992 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 23 23:05:17.259724 sshd[1633]: Connection closed by 10.0.0.1 port 52120 Nov 23 23:05:17.260289 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Nov 23 23:05:17.275043 systemd[1]: sshd@1-10.0.0.48:22-10.0.0.1:52120.service: Deactivated successfully. Nov 23 23:05:17.276841 systemd[1]: session-2.scope: Deactivated successfully. Nov 23 23:05:17.277520 systemd-logind[1486]: Session 2 logged out. Waiting for processes to exit. Nov 23 23:05:17.279988 systemd[1]: Started sshd@2-10.0.0.48:22-10.0.0.1:52128.service - OpenSSH per-connection server daemon (10.0.0.1:52128). Nov 23 23:05:17.281400 systemd-logind[1486]: Removed session 2. Nov 23 23:05:17.339864 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 52128 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:05:17.341791 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:05:17.346808 systemd-logind[1486]: New session 3 of user core. Nov 23 23:05:17.369989 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 23 23:05:17.422092 sshd[1642]: Connection closed by 10.0.0.1 port 52128 Nov 23 23:05:17.422466 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Nov 23 23:05:17.438174 systemd[1]: sshd@2-10.0.0.48:22-10.0.0.1:52128.service: Deactivated successfully. Nov 23 23:05:17.440070 systemd[1]: session-3.scope: Deactivated successfully. Nov 23 23:05:17.442061 systemd-logind[1486]: Session 3 logged out. Waiting for processes to exit. Nov 23 23:05:17.445222 systemd-logind[1486]: Removed session 3. Nov 23 23:05:17.446315 systemd[1]: Started sshd@3-10.0.0.48:22-10.0.0.1:52132.service - OpenSSH per-connection server daemon (10.0.0.1:52132). Nov 23 23:05:17.517976 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 52132 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:05:17.519414 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:05:17.526433 systemd-logind[1486]: New session 4 of user core. Nov 23 23:05:17.531977 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 23 23:05:17.585573 sshd[1651]: Connection closed by 10.0.0.1 port 52132 Nov 23 23:05:17.587613 sshd-session[1648]: pam_unix(sshd:session): session closed for user core Nov 23 23:05:17.599776 systemd[1]: sshd@3-10.0.0.48:22-10.0.0.1:52132.service: Deactivated successfully. Nov 23 23:05:17.601550 systemd[1]: session-4.scope: Deactivated successfully. Nov 23 23:05:17.602626 systemd-logind[1486]: Session 4 logged out. Waiting for processes to exit. Nov 23 23:05:17.605152 systemd[1]: Started sshd@4-10.0.0.48:22-10.0.0.1:52140.service - OpenSSH per-connection server daemon (10.0.0.1:52140). Nov 23 23:05:17.605863 systemd-logind[1486]: Removed session 4. 
Nov 23 23:05:17.677454 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 52140 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:05:17.678772 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:05:17.683518 systemd-logind[1486]: New session 5 of user core. Nov 23 23:05:17.695994 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 23 23:05:17.760093 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 23 23:05:17.761169 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:05:17.785835 sudo[1661]: pam_unix(sudo:session): session closed for user root Nov 23 23:05:17.787883 sshd[1660]: Connection closed by 10.0.0.1 port 52140 Nov 23 23:05:17.788197 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Nov 23 23:05:17.805405 systemd[1]: sshd@4-10.0.0.48:22-10.0.0.1:52140.service: Deactivated successfully. Nov 23 23:05:17.808272 systemd[1]: session-5.scope: Deactivated successfully. Nov 23 23:05:17.809082 systemd-logind[1486]: Session 5 logged out. Waiting for processes to exit. Nov 23 23:05:17.812048 systemd[1]: Started sshd@5-10.0.0.48:22-10.0.0.1:52156.service - OpenSSH per-connection server daemon (10.0.0.1:52156). Nov 23 23:05:17.813153 systemd-logind[1486]: Removed session 5. Nov 23 23:05:17.867249 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 52156 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:05:17.868588 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:05:17.873047 systemd-logind[1486]: New session 6 of user core. Nov 23 23:05:17.879942 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 23 23:05:17.931461 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 23 23:05:17.931710 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:05:18.008374 sudo[1673]: pam_unix(sudo:session): session closed for user root Nov 23 23:05:18.014055 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 23 23:05:18.014499 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:05:18.026837 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 23:05:18.070054 augenrules[1695]: No rules Nov 23 23:05:18.071194 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 23:05:18.071418 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 23:05:18.078359 sudo[1672]: pam_unix(sudo:session): session closed for user root Nov 23 23:05:18.081618 sshd[1671]: Connection closed by 10.0.0.1 port 52156 Nov 23 23:05:18.082055 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Nov 23 23:05:18.092442 systemd[1]: sshd@5-10.0.0.48:22-10.0.0.1:52156.service: Deactivated successfully. Nov 23 23:05:18.094178 systemd[1]: session-6.scope: Deactivated successfully. Nov 23 23:05:18.095843 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit. Nov 23 23:05:18.098029 systemd[1]: Started sshd@6-10.0.0.48:22-10.0.0.1:52162.service - OpenSSH per-connection server daemon (10.0.0.1:52162). Nov 23 23:05:18.100330 systemd-logind[1486]: Removed session 6. 
Nov 23 23:05:18.157168 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 52162 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:05:18.160283 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:05:18.167233 systemd-logind[1486]: New session 7 of user core. Nov 23 23:05:18.189006 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 23 23:05:18.243044 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 23 23:05:18.243301 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:05:18.541436 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 23 23:05:18.557192 (dockerd)[1730]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 23 23:05:18.776559 dockerd[1730]: time="2025-11-23T23:05:18.776487005Z" level=info msg="Starting up" Nov 23 23:05:18.777511 dockerd[1730]: time="2025-11-23T23:05:18.777489082Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 23 23:05:18.795078 dockerd[1730]: time="2025-11-23T23:05:18.794822319Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 23 23:05:18.822789 systemd[1]: var-lib-docker-metacopy\x2dcheck2782838746-merged.mount: Deactivated successfully. Nov 23 23:05:18.836129 dockerd[1730]: time="2025-11-23T23:05:18.836078623Z" level=info msg="Loading containers: start." Nov 23 23:05:18.847786 kernel: Initializing XFRM netlink socket Nov 23 23:05:19.095194 systemd-networkd[1417]: docker0: Link UP Nov 23 23:05:19.099492 dockerd[1730]: time="2025-11-23T23:05:19.099441352Z" level=info msg="Loading containers: done." Nov 23 23:05:19.117910 dockerd[1730]: time="2025-11-23T23:05:19.117853410Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 23 23:05:19.118077 dockerd[1730]: time="2025-11-23T23:05:19.117938756Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 23 23:05:19.118077 dockerd[1730]: time="2025-11-23T23:05:19.118023420Z" level=info msg="Initializing buildkit" Nov 23 23:05:19.152595 dockerd[1730]: time="2025-11-23T23:05:19.152536882Z" level=info msg="Completed buildkit initialization" Nov 23 23:05:19.157959 dockerd[1730]: time="2025-11-23T23:05:19.157906880Z" level=info msg="Daemon has completed initialization" Nov 23 23:05:19.158137 dockerd[1730]: time="2025-11-23T23:05:19.158034377Z" level=info msg="API listen on /run/docker.sock" Nov 23 23:05:19.158186 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 23 23:05:19.674930 containerd[1501]: time="2025-11-23T23:05:19.674869063Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Nov 23 23:05:19.808471 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2860948942-merged.mount: Deactivated successfully. Nov 23 23:05:20.203564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount53287823.mount: Deactivated successfully. 
Nov 23 23:05:21.035429 containerd[1501]: time="2025-11-23T23:05:21.035363608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:21.036487 containerd[1501]: time="2025-11-23T23:05:21.036441330Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=26431961" Nov 23 23:05:21.037485 containerd[1501]: time="2025-11-23T23:05:21.037450636Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:21.040676 containerd[1501]: time="2025-11-23T23:05:21.040633804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:21.042183 containerd[1501]: time="2025-11-23T23:05:21.042143690Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 1.367228305s" Nov 23 23:05:21.042231 containerd[1501]: time="2025-11-23T23:05:21.042183536Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Nov 23 23:05:21.042905 containerd[1501]: time="2025-11-23T23:05:21.042847070Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Nov 23 23:05:22.063662 containerd[1501]: time="2025-11-23T23:05:22.063608431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:22.064646 containerd[1501]: time="2025-11-23T23:05:22.064547772Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22618957" Nov 23 23:05:22.065355 containerd[1501]: time="2025-11-23T23:05:22.065308704Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:22.068598 containerd[1501]: time="2025-11-23T23:05:22.068566186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:22.070352 containerd[1501]: time="2025-11-23T23:05:22.070220241Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.027295086s" Nov 23 23:05:22.070352 containerd[1501]: time="2025-11-23T23:05:22.070265776Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Nov 23 
23:05:22.070686 containerd[1501]: time="2025-11-23T23:05:22.070666362Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Nov 23 23:05:22.996632 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 23 23:05:23.000032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:05:23.186594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:05:23.190430 (kubelet)[2020]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:05:23.237236 kubelet[2020]: E1123 23:05:23.237182 2020 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:05:23.240375 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:05:23.240504 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:05:23.240903 systemd[1]: kubelet.service: Consumed 147ms CPU time, 106.9M memory peak. Nov 23 23:05:23.262277 containerd[1501]: time="2025-11-23T23:05:23.262173129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:23.263407 containerd[1501]: time="2025-11-23T23:05:23.263357456Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17618438" Nov 23 23:05:23.264417 containerd[1501]: time="2025-11-23T23:05:23.264364731Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:23.267772 containerd[1501]: time="2025-11-23T23:05:23.267703357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:23.269507 containerd[1501]: time="2025-11-23T23:05:23.269395333Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.198701612s" Nov 23 23:05:23.269507 containerd[1501]: time="2025-11-23T23:05:23.269428064Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Nov 23 23:05:23.269894 containerd[1501]: time="2025-11-23T23:05:23.269868166Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Nov 23 23:05:24.422991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2852434964.mount: Deactivated successfully. 
Nov 23 23:05:24.771287 containerd[1501]: time="2025-11-23T23:05:24.771231847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:24.772674 containerd[1501]: time="2025-11-23T23:05:24.772626717Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27561801" Nov 23 23:05:24.773468 containerd[1501]: time="2025-11-23T23:05:24.773435783Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:24.775421 containerd[1501]: time="2025-11-23T23:05:24.775369816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:24.776099 containerd[1501]: time="2025-11-23T23:05:24.775845654Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.505943998s" Nov 23 23:05:24.776099 containerd[1501]: time="2025-11-23T23:05:24.775878981Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Nov 23 23:05:24.776322 containerd[1501]: time="2025-11-23T23:05:24.776264665Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 23 23:05:25.328744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1267247237.mount: Deactivated successfully. 
Nov 23 23:05:26.039820 containerd[1501]: time="2025-11-23T23:05:26.039775721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:26.040714 containerd[1501]: time="2025-11-23T23:05:26.040671772Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Nov 23 23:05:26.041686 containerd[1501]: time="2025-11-23T23:05:26.041652816Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:26.044722 containerd[1501]: time="2025-11-23T23:05:26.044686076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:26.046390 containerd[1501]: time="2025-11-23T23:05:26.046353972Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.270058792s" Nov 23 23:05:26.046436 containerd[1501]: time="2025-11-23T23:05:26.046399316Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Nov 23 23:05:26.046881 containerd[1501]: time="2025-11-23T23:05:26.046836316Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 23 23:05:26.472896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1929201624.mount: Deactivated successfully. 
Nov 23 23:05:26.480611 containerd[1501]: time="2025-11-23T23:05:26.480549445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:05:26.481704 containerd[1501]: time="2025-11-23T23:05:26.481678588Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Nov 23 23:05:26.484932 containerd[1501]: time="2025-11-23T23:05:26.484859105Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:05:26.487396 containerd[1501]: time="2025-11-23T23:05:26.487342026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:05:26.487983 containerd[1501]: time="2025-11-23T23:05:26.487955911Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 441.067676ms" Nov 23 23:05:26.488048 containerd[1501]: time="2025-11-23T23:05:26.487990029Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 23 23:05:26.488634 containerd[1501]: time="2025-11-23T23:05:26.488603713Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 23 23:05:26.987984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3468512068.mount: Deactivated successfully. 
Nov 23 23:05:28.596962 containerd[1501]: time="2025-11-23T23:05:28.596911358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:28.597970 containerd[1501]: time="2025-11-23T23:05:28.597932653Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Nov 23 23:05:28.599947 containerd[1501]: time="2025-11-23T23:05:28.599892114Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:28.603155 containerd[1501]: time="2025-11-23T23:05:28.603091309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:28.604344 containerd[1501]: time="2025-11-23T23:05:28.604221262Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.115581469s" Nov 23 23:05:28.604344 containerd[1501]: time="2025-11-23T23:05:28.604255010Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Nov 23 23:05:33.246703 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 23 23:05:33.248704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:05:33.416262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:05:33.420664 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:05:33.459095 kubelet[2178]: E1123 23:05:33.459014 2178 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:05:33.461611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:05:33.461781 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:05:33.462425 systemd[1]: kubelet.service: Consumed 145ms CPU time, 106.8M memory peak. Nov 23 23:05:33.829491 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:05:33.829637 systemd[1]: kubelet.service: Consumed 145ms CPU time, 106.8M memory peak. Nov 23 23:05:33.833908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:05:33.853978 systemd[1]: Reload requested from client PID 2192 ('systemctl') (unit session-7.scope)... Nov 23 23:05:33.853992 systemd[1]: Reloading... Nov 23 23:05:33.929976 zram_generator::config[2236]: No configuration found. Nov 23 23:05:34.105491 systemd[1]: Reloading finished in 251 ms. Nov 23 23:05:34.165286 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:05:34.167265 systemd[1]: kubelet.service: Deactivated successfully. 
Nov 23 23:05:34.167668 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:05:34.167730 systemd[1]: kubelet.service: Consumed 101ms CPU time, 95.2M memory peak. Nov 23 23:05:34.169410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:05:34.301945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:05:34.306700 (kubelet)[2283]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 23:05:34.346142 kubelet[2283]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:05:34.346142 kubelet[2283]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 23:05:34.346142 kubelet[2283]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:05:34.346491 kubelet[2283]: I1123 23:05:34.346203 2283 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 23:05:35.409578 kubelet[2283]: I1123 23:05:35.409524 2283 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 23 23:05:35.409578 kubelet[2283]: I1123 23:05:35.409561 2283 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 23:05:35.409993 kubelet[2283]: I1123 23:05:35.409901 2283 server.go:954] "Client rotation is on, will bootstrap in background" Nov 23 23:05:35.438582 kubelet[2283]: E1123 23:05:35.438523 2283 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:05:35.440725 kubelet[2283]: I1123 23:05:35.440683 2283 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 23:05:35.448292 kubelet[2283]: I1123 23:05:35.448224 2283 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 23:05:35.451320 kubelet[2283]: I1123 23:05:35.451283 2283 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 23 23:05:35.451585 kubelet[2283]: I1123 23:05:35.451540 2283 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 23:05:35.451761 kubelet[2283]: I1123 23:05:35.451573 2283 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 23:05:35.451881 kubelet[2283]: I1123 23:05:35.451868 2283 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 23:05:35.451881 kubelet[2283]: I1123 23:05:35.451881 2283 container_manager_linux.go:304] "Creating device plugin manager" Nov 23 23:05:35.452107 kubelet[2283]: I1123 23:05:35.452080 2283 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:05:35.454591 kubelet[2283]: I1123 23:05:35.454556 2283 kubelet.go:446] "Attempting to sync node with API server" Nov 23 23:05:35.454684 kubelet[2283]: I1123 23:05:35.454587 2283 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 23:05:35.454746 kubelet[2283]: I1123 23:05:35.454704 2283 kubelet.go:352] "Adding apiserver pod source" Nov 23 23:05:35.454746 kubelet[2283]: I1123 23:05:35.454716 2283 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 23:05:35.457783 kubelet[2283]: W1123 23:05:35.457634 2283 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Nov 23 23:05:35.457783 kubelet[2283]: E1123 23:05:35.457701 2283 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:05:35.458810 kubelet[2283]: W1123 23:05:35.458766 2283 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Nov 23 23:05:35.458867 kubelet[2283]: E1123 23:05:35.458823 2283 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:05:35.458895 kubelet[2283]: I1123 23:05:35.458867 2283 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 23:05:35.459611 kubelet[2283]: I1123 23:05:35.459582 2283 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 23 23:05:35.460723 kubelet[2283]: W1123 23:05:35.460697 2283 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 23 23:05:35.461813 kubelet[2283]: I1123 23:05:35.461793 2283 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 23 23:05:35.461873 kubelet[2283]: I1123 23:05:35.461838 2283 server.go:1287] "Started kubelet" Nov 23 23:05:35.461968 kubelet[2283]: I1123 23:05:35.461936 2283 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 23:05:35.462979 kubelet[2283]: I1123 23:05:35.462956 2283 server.go:479] "Adding debug handlers to kubelet server" Nov 23 23:05:35.463830 kubelet[2283]: I1123 23:05:35.463730 2283 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 23:05:35.464258 kubelet[2283]: I1123 23:05:35.464232 2283 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 23:05:35.464985 kubelet[2283]: I1123 23:05:35.464956 2283 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 23:05:35.465040 kubelet[2283]: I1123 23:05:35.465014 2283 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 23:05:35.466331 kubelet[2283]: E1123 23:05:35.466286 2283 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 23 23:05:35.466442 kubelet[2283]: E1123 23:05:35.466418 2283 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="200ms" Nov 23 23:05:35.466852 kubelet[2283]: I1123 23:05:35.466835 2283 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 23 23:05:35.467035 kubelet[2283]: I1123 23:05:35.467021 2283 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 23 23:05:35.467204 kubelet[2283]: I1123 23:05:35.467188 2283 reconciler.go:26] "Reconciler: start to sync state" Nov 23 23:05:35.467683 kubelet[2283]: W1123 23:05:35.467639 2283 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Nov 23 23:05:35.467799 
kubelet[2283]: E1123 23:05:35.467781 2283 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:05:35.467996 kubelet[2283]: I1123 23:05:35.467967 2283 factory.go:221] Registration of the containerd container factory successfully Nov 23 23:05:35.468053 kubelet[2283]: I1123 23:05:35.468044 2283 factory.go:221] Registration of the systemd container factory successfully Nov 23 23:05:35.468204 kubelet[2283]: I1123 23:05:35.468184 2283 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 23:05:35.468288 kubelet[2283]: E1123 23:05:35.468252 2283 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 23:05:35.468797 kubelet[2283]: E1123 23:05:35.468441 2283 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187ac54c469c654c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-23 23:05:35.461811532 +0000 UTC m=+1.151795179,LastTimestamp:2025-11-23 23:05:35.461811532 +0000 UTC m=+1.151795179,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 23 23:05:35.481279 kubelet[2283]: I1123 23:05:35.481241 2283 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 23:05:35.481279 kubelet[2283]: I1123 23:05:35.481265 2283 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 23:05:35.481279 kubelet[2283]: I1123 23:05:35.481285 2283 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:05:35.485505 kubelet[2283]: I1123 23:05:35.485442 2283 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 23 23:05:35.486570 kubelet[2283]: I1123 23:05:35.486538 2283 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 23 23:05:35.486570 kubelet[2283]: I1123 23:05:35.486563 2283 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 23 23:05:35.486662 kubelet[2283]: I1123 23:05:35.486586 2283 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 23 23:05:35.486662 kubelet[2283]: I1123 23:05:35.486594 2283 kubelet.go:2382] "Starting kubelet main sync loop" Nov 23 23:05:35.486662 kubelet[2283]: E1123 23:05:35.486649 2283 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 23:05:35.487315 kubelet[2283]: W1123 23:05:35.487286 2283 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Nov 23 23:05:35.487348 kubelet[2283]: E1123 23:05:35.487326 2283 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:05:35.566500 kubelet[2283]: E1123 23:05:35.566415 2283 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 23 23:05:35.586763 kubelet[2283]: E1123 23:05:35.586708 2283 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 23 23:05:35.595567 kubelet[2283]: I1123 23:05:35.595527 2283 policy_none.go:49] "None policy: Start" Nov 23 23:05:35.595567 kubelet[2283]: I1123 23:05:35.595558 2283 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 23 23:05:35.595567 kubelet[2283]: I1123 23:05:35.595572 2283 state_mem.go:35] "Initializing new in-memory state store" Nov 23 23:05:35.603184 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 23 23:05:35.616724 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 23 23:05:35.619719 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 23 23:05:35.630742 kubelet[2283]: I1123 23:05:35.630677 2283 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 23 23:05:35.630976 kubelet[2283]: I1123 23:05:35.630943 2283 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 23:05:35.631011 kubelet[2283]: I1123 23:05:35.630963 2283 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 23:05:35.631342 kubelet[2283]: I1123 23:05:35.631218 2283 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 23:05:35.631987 kubelet[2283]: E1123 23:05:35.631960 2283 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 23 23:05:35.632068 kubelet[2283]: E1123 23:05:35.632021 2283 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 23 23:05:35.667513 kubelet[2283]: E1123 23:05:35.667393 2283 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="400ms" Nov 23 23:05:35.732930 kubelet[2283]: I1123 23:05:35.732900 2283 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:05:35.733331 kubelet[2283]: E1123 23:05:35.733307 2283 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Nov 23 23:05:35.796657 systemd[1]: Created slice kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice - libcontainer container kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice. Nov 23 23:05:35.824131 kubelet[2283]: E1123 23:05:35.824092 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:05:35.827623 systemd[1]: Created slice kubepods-burstable-pod0a68423804124305a9de061f38780871.slice - libcontainer container kubepods-burstable-pod0a68423804124305a9de061f38780871.slice. Nov 23 23:05:35.829182 kubelet[2283]: E1123 23:05:35.829152 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:05:35.832037 systemd[1]: Created slice kubepods-burstable-podd533db7490180107e4294bcae2b6d4db.slice - libcontainer container kubepods-burstable-podd533db7490180107e4294bcae2b6d4db.slice. 
Nov 23 23:05:35.833804 kubelet[2283]: E1123 23:05:35.833597 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:05:35.868968 kubelet[2283]: I1123 23:05:35.868925 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:35.869165 kubelet[2283]: I1123 23:05:35.869146 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:35.869251 kubelet[2283]: I1123 23:05:35.869239 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Nov 23 23:05:35.869377 kubelet[2283]: I1123 23:05:35.869333 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d533db7490180107e4294bcae2b6d4db-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d533db7490180107e4294bcae2b6d4db\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:35.869413 kubelet[2283]: I1123 23:05:35.869379 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:35.869440 kubelet[2283]: I1123 23:05:35.869410 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:35.869440 kubelet[2283]: I1123 23:05:35.869430 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:35.869484 kubelet[2283]: I1123 23:05:35.869449 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d533db7490180107e4294bcae2b6d4db-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d533db7490180107e4294bcae2b6d4db\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:35.869484 kubelet[2283]: I1123 23:05:35.869466 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d533db7490180107e4294bcae2b6d4db-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d533db7490180107e4294bcae2b6d4db\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:35.935303 kubelet[2283]: I1123 23:05:35.935102 2283 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:05:35.935944 kubelet[2283]: E1123 23:05:35.935901 2283 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Nov 23 23:05:36.068417 kubelet[2283]: E1123 23:05:36.068377 2283 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="800ms" Nov 23 23:05:36.126487 containerd[1501]: time="2025-11-23T23:05:36.126448648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,}" Nov 23 23:05:36.130082 containerd[1501]: time="2025-11-23T23:05:36.130049613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,}" Nov 23 23:05:36.135764 containerd[1501]: time="2025-11-23T23:05:36.135594034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d533db7490180107e4294bcae2b6d4db,Namespace:kube-system,Attempt:0,}" Nov 23 23:05:36.148998 containerd[1501]: time="2025-11-23T23:05:36.148946674Z" level=info msg="connecting to shim f3892968b7d506074e3fa750d208253aafce1c9cc866fe5ff7ecef6dcd9fb646" address="unix:///run/containerd/s/917d0428f2fe0b1c67b441a270423fa3ce42998bb256a8541c207697d055b591" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:05:36.155183 containerd[1501]: time="2025-11-23T23:05:36.155141155Z" level=info msg="connecting to shim df83f797473d757ea1c9c1ff8cf3755a6fa95a1d481ee397d6f1afa296bbde69" address="unix:///run/containerd/s/b773b0566ced96ac0218999681b11b52574919d468f1d28d8c6f05a7c1247d5b" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:05:36.178154 containerd[1501]: time="2025-11-23T23:05:36.178102618Z" level=info msg="connecting to shim afa7642a511dc32a75e0288b28d1ca3c861397a6e90a775c625e833526fcde3e" address="unix:///run/containerd/s/12c93c074c6e79e3f9dedd277de262a0f7931866c9a662b875f7477b08ecb729" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:05:36.184914 systemd[1]: Started cri-containerd-df83f797473d757ea1c9c1ff8cf3755a6fa95a1d481ee397d6f1afa296bbde69.scope - libcontainer container df83f797473d757ea1c9c1ff8cf3755a6fa95a1d481ee397d6f1afa296bbde69. Nov 23 23:05:36.190104 systemd[1]: Started cri-containerd-f3892968b7d506074e3fa750d208253aafce1c9cc866fe5ff7ecef6dcd9fb646.scope - libcontainer container f3892968b7d506074e3fa750d208253aafce1c9cc866fe5ff7ecef6dcd9fb646. Nov 23 23:05:36.216985 systemd[1]: Started cri-containerd-afa7642a511dc32a75e0288b28d1ca3c861397a6e90a775c625e833526fcde3e.scope - libcontainer container afa7642a511dc32a75e0288b28d1ca3c861397a6e90a775c625e833526fcde3e. 
Nov 23 23:05:36.246956 containerd[1501]: time="2025-11-23T23:05:36.246912196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,} returns sandbox id \"df83f797473d757ea1c9c1ff8cf3755a6fa95a1d481ee397d6f1afa296bbde69\"" Nov 23 23:05:36.251055 containerd[1501]: time="2025-11-23T23:05:36.251017848Z" level=info msg="CreateContainer within sandbox \"df83f797473d757ea1c9c1ff8cf3755a6fa95a1d481ee397d6f1afa296bbde69\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 23 23:05:36.251248 containerd[1501]: time="2025-11-23T23:05:36.251216487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3892968b7d506074e3fa750d208253aafce1c9cc866fe5ff7ecef6dcd9fb646\"" Nov 23 23:05:36.254880 containerd[1501]: time="2025-11-23T23:05:36.254844205Z" level=info msg="CreateContainer within sandbox \"f3892968b7d506074e3fa750d208253aafce1c9cc866fe5ff7ecef6dcd9fb646\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 23 23:05:36.266101 containerd[1501]: time="2025-11-23T23:05:36.265148423Z" level=info msg="Container e69e3b94db5643c7dd5f6a41aa04e8f4beb0a848226e39e86300d7d66b6377df: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:36.266723 containerd[1501]: time="2025-11-23T23:05:36.266675097Z" level=info msg="Container c0df2f4b79ef9fa429c8811da88aaaf4bd36334176e35ebbf40a6c38cadd3be3: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:36.267423 containerd[1501]: time="2025-11-23T23:05:36.267395322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d533db7490180107e4294bcae2b6d4db,Namespace:kube-system,Attempt:0,} returns sandbox id \"afa7642a511dc32a75e0288b28d1ca3c861397a6e90a775c625e833526fcde3e\"" Nov 23 23:05:36.269804 containerd[1501]: time="2025-11-23T23:05:36.269771857Z" level=info msg="CreateContainer within sandbox \"afa7642a511dc32a75e0288b28d1ca3c861397a6e90a775c625e833526fcde3e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 23 23:05:36.274170 containerd[1501]: time="2025-11-23T23:05:36.274126568Z" level=info msg="CreateContainer within sandbox \"df83f797473d757ea1c9c1ff8cf3755a6fa95a1d481ee397d6f1afa296bbde69\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e69e3b94db5643c7dd5f6a41aa04e8f4beb0a848226e39e86300d7d66b6377df\"" Nov 23 23:05:36.274715 containerd[1501]: time="2025-11-23T23:05:36.274685319Z" level=info msg="StartContainer for \"e69e3b94db5643c7dd5f6a41aa04e8f4beb0a848226e39e86300d7d66b6377df\"" Nov 23 23:05:36.275576 containerd[1501]: time="2025-11-23T23:05:36.275520402Z" level=info msg="CreateContainer within sandbox \"f3892968b7d506074e3fa750d208253aafce1c9cc866fe5ff7ecef6dcd9fb646\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c0df2f4b79ef9fa429c8811da88aaaf4bd36334176e35ebbf40a6c38cadd3be3\"" Nov 23 23:05:36.275825 containerd[1501]: time="2025-11-23T23:05:36.275799137Z" level=info msg="connecting to shim e69e3b94db5643c7dd5f6a41aa04e8f4beb0a848226e39e86300d7d66b6377df" address="unix:///run/containerd/s/b773b0566ced96ac0218999681b11b52574919d468f1d28d8c6f05a7c1247d5b" protocol=ttrpc version=3 Nov 23 23:05:36.275946 containerd[1501]: time="2025-11-23T23:05:36.275919682Z" level=info msg="StartContainer for 
\"c0df2f4b79ef9fa429c8811da88aaaf4bd36334176e35ebbf40a6c38cadd3be3\"" Nov 23 23:05:36.277708 containerd[1501]: time="2025-11-23T23:05:36.277675151Z" level=info msg="connecting to shim c0df2f4b79ef9fa429c8811da88aaaf4bd36334176e35ebbf40a6c38cadd3be3" address="unix:///run/containerd/s/917d0428f2fe0b1c67b441a270423fa3ce42998bb256a8541c207697d055b591" protocol=ttrpc version=3 Nov 23 23:05:36.283784 containerd[1501]: time="2025-11-23T23:05:36.283670993Z" level=info msg="Container b0aadb617af28ad2d9ba2ac2ea8c12ce0d1c2e8481098c966d201e18869540d2: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:36.292010 containerd[1501]: time="2025-11-23T23:05:36.291965397Z" level=info msg="CreateContainer within sandbox \"afa7642a511dc32a75e0288b28d1ca3c861397a6e90a775c625e833526fcde3e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b0aadb617af28ad2d9ba2ac2ea8c12ce0d1c2e8481098c966d201e18869540d2\"" Nov 23 23:05:36.293339 containerd[1501]: time="2025-11-23T23:05:36.293281899Z" level=info msg="StartContainer for \"b0aadb617af28ad2d9ba2ac2ea8c12ce0d1c2e8481098c966d201e18869540d2\"" Nov 23 23:05:36.295691 containerd[1501]: time="2025-11-23T23:05:36.295651545Z" level=info msg="connecting to shim b0aadb617af28ad2d9ba2ac2ea8c12ce0d1c2e8481098c966d201e18869540d2" address="unix:///run/containerd/s/12c93c074c6e79e3f9dedd277de262a0f7931866c9a662b875f7477b08ecb729" protocol=ttrpc version=3 Nov 23 23:05:36.297999 systemd[1]: Started cri-containerd-e69e3b94db5643c7dd5f6a41aa04e8f4beb0a848226e39e86300d7d66b6377df.scope - libcontainer container e69e3b94db5643c7dd5f6a41aa04e8f4beb0a848226e39e86300d7d66b6377df. Nov 23 23:05:36.302468 systemd[1]: Started cri-containerd-c0df2f4b79ef9fa429c8811da88aaaf4bd36334176e35ebbf40a6c38cadd3be3.scope - libcontainer container c0df2f4b79ef9fa429c8811da88aaaf4bd36334176e35ebbf40a6c38cadd3be3. Nov 23 23:05:36.321997 systemd[1]: Started cri-containerd-b0aadb617af28ad2d9ba2ac2ea8c12ce0d1c2e8481098c966d201e18869540d2.scope - libcontainer container b0aadb617af28ad2d9ba2ac2ea8c12ce0d1c2e8481098c966d201e18869540d2. 
Nov 23 23:05:36.338010 kubelet[2283]: I1123 23:05:36.337951 2283 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:05:36.338395 kubelet[2283]: E1123 23:05:36.338364 2283 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Nov 23 23:05:36.365239 containerd[1501]: time="2025-11-23T23:05:36.365023680Z" level=info msg="StartContainer for \"c0df2f4b79ef9fa429c8811da88aaaf4bd36334176e35ebbf40a6c38cadd3be3\" returns successfully" Nov 23 23:05:36.367722 containerd[1501]: time="2025-11-23T23:05:36.367657443Z" level=info msg="StartContainer for \"e69e3b94db5643c7dd5f6a41aa04e8f4beb0a848226e39e86300d7d66b6377df\" returns successfully" Nov 23 23:05:36.381355 containerd[1501]: time="2025-11-23T23:05:36.381309884Z" level=info msg="StartContainer for \"b0aadb617af28ad2d9ba2ac2ea8c12ce0d1c2e8481098c966d201e18869540d2\" returns successfully" Nov 23 23:05:36.442845 kubelet[2283]: W1123 23:05:36.442672 2283 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Nov 23 23:05:36.442845 kubelet[2283]: E1123 23:05:36.442747 2283 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:05:36.498361 kubelet[2283]: E1123 23:05:36.498323 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:05:36.501776 kubelet[2283]: E1123 23:05:36.501664 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:05:36.503028 kubelet[2283]: E1123 23:05:36.503003 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:05:37.140291 kubelet[2283]: I1123 23:05:37.140258 2283 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:05:37.505821 kubelet[2283]: E1123 23:05:37.505121 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:05:37.505821 kubelet[2283]: E1123 23:05:37.505307 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:05:37.505821 kubelet[2283]: E1123 23:05:37.505405 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 23 23:05:37.811845 kubelet[2283]: E1123 23:05:37.810625 2283 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 23 23:05:37.887417 kubelet[2283]: I1123 23:05:37.887362 2283 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 23 23:05:37.887417 kubelet[2283]: E1123 23:05:37.887394 2283 
kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 23 23:05:37.896784 kubelet[2283]: E1123 23:05:37.896724 2283 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 23 23:05:37.997699 kubelet[2283]: E1123 23:05:37.997659 2283 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 23 23:05:38.098649 kubelet[2283]: E1123 23:05:38.098275 2283 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 23 23:05:38.168015 kubelet[2283]: I1123 23:05:38.167969 2283 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:38.175533 kubelet[2283]: E1123 23:05:38.175476 2283 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:38.175533 kubelet[2283]: I1123 23:05:38.175513 2283 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 23 23:05:38.178668 kubelet[2283]: E1123 23:05:38.178616 2283 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 23 23:05:38.178668 kubelet[2283]: I1123 23:05:38.178647 2283 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:38.180874 kubelet[2283]: E1123 23:05:38.180844 2283 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:38.459772 kubelet[2283]: I1123 23:05:38.459625 2283 apiserver.go:52] "Watching apiserver" Nov 23 23:05:38.467762 kubelet[2283]: I1123 23:05:38.467711 2283 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 23 23:05:40.250060 systemd[1]: Reload requested from client PID 2558 ('systemctl') (unit session-7.scope)... Nov 23 23:05:40.250077 systemd[1]: Reloading... Nov 23 23:05:40.311873 zram_generator::config[2601]: No configuration found. Nov 23 23:05:40.586489 systemd[1]: Reloading finished in 336 ms. Nov 23 23:05:40.618295 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:05:40.618860 kubelet[2283]: I1123 23:05:40.618682 2283 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 23:05:40.637166 systemd[1]: kubelet.service: Deactivated successfully. Nov 23 23:05:40.637412 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:05:40.637464 systemd[1]: kubelet.service: Consumed 1.530s CPU time, 127.5M memory peak. Nov 23 23:05:40.640044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:05:40.814852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 23 23:05:40.826202 (kubelet)[2643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 23:05:40.875849 kubelet[2643]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:05:40.875849 kubelet[2643]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 23:05:40.875849 kubelet[2643]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:05:40.875849 kubelet[2643]: I1123 23:05:40.875640 2643 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 23:05:40.882554 kubelet[2643]: I1123 23:05:40.882513 2643 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 23 23:05:40.882765 kubelet[2643]: I1123 23:05:40.882736 2643 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 23:05:40.883148 kubelet[2643]: I1123 23:05:40.883126 2643 server.go:954] "Client rotation is on, will bootstrap in background" Nov 23 23:05:40.884793 kubelet[2643]: I1123 23:05:40.884743 2643 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 23 23:05:40.888405 kubelet[2643]: I1123 23:05:40.888357 2643 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 23:05:40.892422 kubelet[2643]: I1123 23:05:40.892401 2643 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 23:05:40.895017 kubelet[2643]: I1123 23:05:40.894999 2643 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 23 23:05:40.895205 kubelet[2643]: I1123 23:05:40.895179 2643 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 23:05:40.895391 kubelet[2643]: I1123 23:05:40.895206 2643 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 23:05:40.895470 kubelet[2643]: I1123 23:05:40.895402 2643 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 23:05:40.895470 kubelet[2643]: I1123 23:05:40.895412 2643 container_manager_linux.go:304] "Creating device plugin manager" Nov 23 23:05:40.895470 kubelet[2643]: I1123 23:05:40.895464 2643 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:05:40.895606 kubelet[2643]: I1123 23:05:40.895593 2643 kubelet.go:446] "Attempting to sync node with API server" Nov 23 23:05:40.895636 kubelet[2643]: I1123 23:05:40.895618 2643 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 23:05:40.895667 kubelet[2643]: I1123 23:05:40.895639 2643 kubelet.go:352] "Adding apiserver pod source" Nov 23 23:05:40.895667 kubelet[2643]: I1123 23:05:40.895649 2643 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 23:05:40.896300 kubelet[2643]: I1123 23:05:40.896235 2643 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 23:05:40.896736 kubelet[2643]: I1123 23:05:40.896714 2643 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 23 23:05:40.900261 kubelet[2643]: I1123 23:05:40.897203 2643 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 23 23:05:40.900261 kubelet[2643]: I1123 23:05:40.897251 2643 server.go:1287] "Started kubelet" Nov 23 23:05:40.900261 kubelet[2643]: I1123 23:05:40.898377 2643 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 23:05:40.900261 kubelet[2643]: I1123 
23:05:40.898627 2643 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 23:05:40.900261 kubelet[2643]: I1123 23:05:40.898717 2643 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 23:05:40.900261 kubelet[2643]: I1123 23:05:40.899906 2643 server.go:479] "Adding debug handlers to kubelet server" Nov 23 23:05:40.900261 kubelet[2643]: I1123 23:05:40.900057 2643 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 23:05:40.900787 kubelet[2643]: I1123 23:05:40.899911 2643 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 23:05:40.905510 kubelet[2643]: I1123 23:05:40.905485 2643 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 23 23:05:40.905887 kubelet[2643]: I1123 23:05:40.905738 2643 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 23 23:05:40.905968 kubelet[2643]: E1123 23:05:40.905940 2643 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 23 23:05:40.906188 kubelet[2643]: I1123 23:05:40.906171 2643 reconciler.go:26] "Reconciler: start to sync state" Nov 23 23:05:40.907913 kubelet[2643]: I1123 23:05:40.907881 2643 factory.go:221] Registration of the systemd container factory successfully Nov 23 23:05:40.908059 kubelet[2643]: I1123 23:05:40.907991 2643 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 23:05:40.909902 kubelet[2643]: I1123 23:05:40.909803 2643 factory.go:221] Registration of the containerd container factory successfully Nov 23 23:05:40.911212 kubelet[2643]: E1123 23:05:40.911103 2643 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 23:05:40.932929 kubelet[2643]: I1123 23:05:40.932867 2643 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 23 23:05:40.934783 kubelet[2643]: I1123 23:05:40.934696 2643 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 23 23:05:40.934783 kubelet[2643]: I1123 23:05:40.934733 2643 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 23 23:05:40.934951 kubelet[2643]: I1123 23:05:40.934828 2643 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 23 23:05:40.934951 kubelet[2643]: I1123 23:05:40.934839 2643 kubelet.go:2382] "Starting kubelet main sync loop" Nov 23 23:05:40.934951 kubelet[2643]: E1123 23:05:40.934880 2643 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 23:05:40.966449 kubelet[2643]: I1123 23:05:40.966419 2643 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 23:05:40.966449 kubelet[2643]: I1123 23:05:40.966440 2643 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 23:05:40.966449 kubelet[2643]: I1123 23:05:40.966460 2643 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:05:40.966658 kubelet[2643]: I1123 23:05:40.966640 2643 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 23 23:05:40.966658 kubelet[2643]: I1123 23:05:40.966652 2643 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 23 23:05:40.966658 kubelet[2643]: I1123 23:05:40.966671 2643 policy_none.go:49] "None policy: Start" Nov 23 23:05:40.966658 kubelet[2643]: I1123 23:05:40.966680 2643 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 23 23:05:40.966831 kubelet[2643]: I1123 23:05:40.966688 2643 state_mem.go:35] "Initializing new in-memory state store" Nov 23 23:05:40.966831 kubelet[2643]: I1123 23:05:40.966818 2643 state_mem.go:75] "Updated machine memory state" Nov 23 23:05:40.977326 kubelet[2643]: I1123 23:05:40.977132 2643 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 23 23:05:40.977676 kubelet[2643]: I1123 23:05:40.977532 2643 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 23:05:40.977676 kubelet[2643]: I1123 23:05:40.977554 2643 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 23:05:40.977858 kubelet[2643]: I1123 23:05:40.977839 2643 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 23:05:40.979286 kubelet[2643]: E1123 23:05:40.979261 2643 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 23 23:05:41.036062 kubelet[2643]: I1123 23:05:41.036014 2643 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:41.037702 kubelet[2643]: I1123 23:05:41.037652 2643 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:41.037835 kubelet[2643]: I1123 23:05:41.037663 2643 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 23 23:05:41.079041 kubelet[2643]: I1123 23:05:41.078996 2643 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 23 23:05:41.091074 kubelet[2643]: I1123 23:05:41.091028 2643 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 23 23:05:41.091450 kubelet[2643]: I1123 23:05:41.091129 2643 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 23 23:05:41.108532 kubelet[2643]: I1123 23:05:41.108453 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:41.108532 kubelet[2643]: I1123 23:05:41.108495 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:41.108532 kubelet[2643]: I1123 23:05:41.108518 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d533db7490180107e4294bcae2b6d4db-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d533db7490180107e4294bcae2b6d4db\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:41.108859 kubelet[2643]: I1123 23:05:41.108564 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:41.108859 kubelet[2643]: I1123 23:05:41.108580 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:41.108859 kubelet[2643]: I1123 23:05:41.108596 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:41.108859 kubelet[2643]: I1123 23:05:41.108610 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Nov 23 23:05:41.108859 kubelet[2643]: I1123 23:05:41.108630 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d533db7490180107e4294bcae2b6d4db-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d533db7490180107e4294bcae2b6d4db\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:41.109022 kubelet[2643]: I1123 23:05:41.108646 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d533db7490180107e4294bcae2b6d4db-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d533db7490180107e4294bcae2b6d4db\") " pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:41.897115 kubelet[2643]: I1123 23:05:41.897024 2643 apiserver.go:52] "Watching apiserver" Nov 23 23:05:41.906466 kubelet[2643]: I1123 23:05:41.906393 2643 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 23 23:05:41.951964 kubelet[2643]: I1123 23:05:41.951934 2643 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:41.952211 kubelet[2643]: I1123 23:05:41.952127 2643 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:41.952701 kubelet[2643]: I1123 23:05:41.952669 2643 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 23 23:05:41.955031 kubelet[2643]: I1123 23:05:41.954967 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.954947655 podStartE2EDuration="954.947655ms" podCreationTimestamp="2025-11-23 23:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:05:41.953767508 +0000 UTC m=+1.124034173" watchObservedRunningTime="2025-11-23 23:05:41.954947655 +0000 UTC m=+1.125214240" Nov 23 23:05:41.963357 kubelet[2643]: E1123 23:05:41.963226 2643 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 23 23:05:41.963507 kubelet[2643]: E1123 23:05:41.963394 2643 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 23 23:05:41.964023 kubelet[2643]: E1123 23:05:41.963993 2643 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 23 23:05:41.979369 kubelet[2643]: I1123 23:05:41.979309 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.979288757 podStartE2EDuration="979.288757ms" podCreationTimestamp="2025-11-23 23:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:05:41.968779692 +0000 UTC m=+1.139046317" watchObservedRunningTime="2025-11-23 23:05:41.979288757 +0000 UTC 
m=+1.149555382" Nov 23 23:05:41.979515 kubelet[2643]: I1123 23:05:41.979452 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.979448737 podStartE2EDuration="979.448737ms" podCreationTimestamp="2025-11-23 23:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:05:41.979133863 +0000 UTC m=+1.149400488" watchObservedRunningTime="2025-11-23 23:05:41.979448737 +0000 UTC m=+1.149715362" Nov 23 23:05:45.614166 kubelet[2643]: I1123 23:05:45.614070 2643 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 23 23:05:45.614588 containerd[1501]: time="2025-11-23T23:05:45.614449144Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 23 23:05:45.614796 kubelet[2643]: I1123 23:05:45.614688 2643 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 23 23:05:46.410338 systemd[1]: Created slice kubepods-besteffort-podf76942a5_25c2_43c5_a9f5_1d9a54472e91.slice - libcontainer container kubepods-besteffort-podf76942a5_25c2_43c5_a9f5_1d9a54472e91.slice. Nov 23 23:05:46.440135 kubelet[2643]: I1123 23:05:46.440070 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f76942a5-25c2-43c5-a9f5-1d9a54472e91-kube-proxy\") pod \"kube-proxy-jg2tp\" (UID: \"f76942a5-25c2-43c5-a9f5-1d9a54472e91\") " pod="kube-system/kube-proxy-jg2tp" Nov 23 23:05:46.440135 kubelet[2643]: I1123 23:05:46.440133 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f76942a5-25c2-43c5-a9f5-1d9a54472e91-xtables-lock\") pod \"kube-proxy-jg2tp\" (UID: \"f76942a5-25c2-43c5-a9f5-1d9a54472e91\") " pod="kube-system/kube-proxy-jg2tp" Nov 23 23:05:46.440315 kubelet[2643]: I1123 23:05:46.440150 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f76942a5-25c2-43c5-a9f5-1d9a54472e91-lib-modules\") pod \"kube-proxy-jg2tp\" (UID: \"f76942a5-25c2-43c5-a9f5-1d9a54472e91\") " pod="kube-system/kube-proxy-jg2tp" Nov 23 23:05:46.440315 kubelet[2643]: I1123 23:05:46.440214 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hghfp\" (UniqueName: \"kubernetes.io/projected/f76942a5-25c2-43c5-a9f5-1d9a54472e91-kube-api-access-hghfp\") pod \"kube-proxy-jg2tp\" (UID: \"f76942a5-25c2-43c5-a9f5-1d9a54472e91\") " pod="kube-system/kube-proxy-jg2tp" Nov 23 23:05:46.724203 containerd[1501]: time="2025-11-23T23:05:46.723737839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jg2tp,Uid:f76942a5-25c2-43c5-a9f5-1d9a54472e91,Namespace:kube-system,Attempt:0,}" Nov 23 23:05:46.830951 systemd[1]: Created slice kubepods-besteffort-podb00da6f8_182f_4d8b_b181_c4d48c532f4d.slice - libcontainer container kubepods-besteffort-podb00da6f8_182f_4d8b_b181_c4d48c532f4d.slice. 
Nov 23 23:05:46.837024 containerd[1501]: time="2025-11-23T23:05:46.836978183Z" level=info msg="connecting to shim 9e4a6f19e4335205a06f3b1798e8b10306d9290aaf23cc9183b0b52acf651996" address="unix:///run/containerd/s/d7efd8504252743018007d44512b369c82f81d1928ff29ffc8f2e1a13c5564c5" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:05:46.841903 kubelet[2643]: I1123 23:05:46.841861 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b00da6f8-182f-4d8b-b181-c4d48c532f4d-var-lib-calico\") pod \"tigera-operator-7dcd859c48-5mhkk\" (UID: \"b00da6f8-182f-4d8b-b181-c4d48c532f4d\") " pod="tigera-operator/tigera-operator-7dcd859c48-5mhkk" Nov 23 23:05:46.841903 kubelet[2643]: I1123 23:05:46.841905 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxhlp\" (UniqueName: \"kubernetes.io/projected/b00da6f8-182f-4d8b-b181-c4d48c532f4d-kube-api-access-bxhlp\") pod \"tigera-operator-7dcd859c48-5mhkk\" (UID: \"b00da6f8-182f-4d8b-b181-c4d48c532f4d\") " pod="tigera-operator/tigera-operator-7dcd859c48-5mhkk" Nov 23 23:05:46.873979 systemd[1]: Started cri-containerd-9e4a6f19e4335205a06f3b1798e8b10306d9290aaf23cc9183b0b52acf651996.scope - libcontainer container 9e4a6f19e4335205a06f3b1798e8b10306d9290aaf23cc9183b0b52acf651996. Nov 23 23:05:46.897227 containerd[1501]: time="2025-11-23T23:05:46.897145551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jg2tp,Uid:f76942a5-25c2-43c5-a9f5-1d9a54472e91,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e4a6f19e4335205a06f3b1798e8b10306d9290aaf23cc9183b0b52acf651996\"" Nov 23 23:05:46.900213 containerd[1501]: time="2025-11-23T23:05:46.900174901Z" level=info msg="CreateContainer within sandbox \"9e4a6f19e4335205a06f3b1798e8b10306d9290aaf23cc9183b0b52acf651996\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 23 23:05:46.911305 containerd[1501]: time="2025-11-23T23:05:46.911247761Z" level=info msg="Container f5df5492966a5564d7a40443e65c4186731af4959086826a0dd8cf6d721c2092: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:46.919563 containerd[1501]: time="2025-11-23T23:05:46.919507447Z" level=info msg="CreateContainer within sandbox \"9e4a6f19e4335205a06f3b1798e8b10306d9290aaf23cc9183b0b52acf651996\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f5df5492966a5564d7a40443e65c4186731af4959086826a0dd8cf6d721c2092\"" Nov 23 23:05:46.920231 containerd[1501]: time="2025-11-23T23:05:46.920205688Z" level=info msg="StartContainer for \"f5df5492966a5564d7a40443e65c4186731af4959086826a0dd8cf6d721c2092\"" Nov 23 23:05:46.921645 containerd[1501]: time="2025-11-23T23:05:46.921615857Z" level=info msg="connecting to shim f5df5492966a5564d7a40443e65c4186731af4959086826a0dd8cf6d721c2092" address="unix:///run/containerd/s/d7efd8504252743018007d44512b369c82f81d1928ff29ffc8f2e1a13c5564c5" protocol=ttrpc version=3 Nov 23 23:05:46.945225 systemd[1]: Started cri-containerd-f5df5492966a5564d7a40443e65c4186731af4959086826a0dd8cf6d721c2092.scope - libcontainer container f5df5492966a5564d7a40443e65c4186731af4959086826a0dd8cf6d721c2092. 
Nov 23 23:05:47.036263 containerd[1501]: time="2025-11-23T23:05:47.036224746Z" level=info msg="StartContainer for \"f5df5492966a5564d7a40443e65c4186731af4959086826a0dd8cf6d721c2092\" returns successfully" Nov 23 23:05:47.137343 containerd[1501]: time="2025-11-23T23:05:47.137235403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-5mhkk,Uid:b00da6f8-182f-4d8b-b181-c4d48c532f4d,Namespace:tigera-operator,Attempt:0,}" Nov 23 23:05:47.163574 containerd[1501]: time="2025-11-23T23:05:47.162950441Z" level=info msg="connecting to shim dd6ff5093b82919315318d352ac719cf4683b7f58edbbdd95a0571eb13d5eabb" address="unix:///run/containerd/s/4a9ccd2a4beadd33d4230b8336e4201b68feeed457b1ff7169c2fa20ba014ea0" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:05:47.186968 systemd[1]: Started cri-containerd-dd6ff5093b82919315318d352ac719cf4683b7f58edbbdd95a0571eb13d5eabb.scope - libcontainer container dd6ff5093b82919315318d352ac719cf4683b7f58edbbdd95a0571eb13d5eabb. Nov 23 23:05:47.230570 containerd[1501]: time="2025-11-23T23:05:47.230526538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-5mhkk,Uid:b00da6f8-182f-4d8b-b181-c4d48c532f4d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"dd6ff5093b82919315318d352ac719cf4683b7f58edbbdd95a0571eb13d5eabb\"" Nov 23 23:05:47.232496 containerd[1501]: time="2025-11-23T23:05:47.232459001Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 23 23:05:47.983812 kubelet[2643]: I1123 23:05:47.983527 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jg2tp" podStartSLOduration=1.98320121 podStartE2EDuration="1.98320121s" podCreationTimestamp="2025-11-23 23:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:05:47.983164828 +0000 UTC m=+7.153431413" watchObservedRunningTime="2025-11-23 23:05:47.98320121 +0000 UTC m=+7.153467835" Nov 23 23:05:48.072055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1061784781.mount: Deactivated successfully. 
Nov 23 23:05:48.549523 containerd[1501]: time="2025-11-23T23:05:48.549478303Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:48.550267 containerd[1501]: time="2025-11-23T23:05:48.550130184Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 23 23:05:48.552867 containerd[1501]: time="2025-11-23T23:05:48.552831120Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:48.565429 containerd[1501]: time="2025-11-23T23:05:48.565368347Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:05:48.565972 containerd[1501]: time="2025-11-23T23:05:48.565930699Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.333431114s" Nov 23 23:05:48.566024 containerd[1501]: time="2025-11-23T23:05:48.565972882Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 23 23:05:48.570216 containerd[1501]: time="2025-11-23T23:05:48.570169767Z" level=info msg="CreateContainer within sandbox \"dd6ff5093b82919315318d352ac719cf4683b7f58edbbdd95a0571eb13d5eabb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 23 23:05:48.578384 containerd[1501]: time="2025-11-23T23:05:48.577128183Z" level=info msg="Container 70154369d4e4636be8f8814594e0990d17dbba0cab65ea0d1b45e2559794d36d: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:05:48.585436 containerd[1501]: time="2025-11-23T23:05:48.585375913Z" level=info msg="CreateContainer within sandbox \"dd6ff5093b82919315318d352ac719cf4683b7f58edbbdd95a0571eb13d5eabb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"70154369d4e4636be8f8814594e0990d17dbba0cab65ea0d1b45e2559794d36d\"" Nov 23 23:05:48.586278 containerd[1501]: time="2025-11-23T23:05:48.586239351Z" level=info msg="StartContainer for \"70154369d4e4636be8f8814594e0990d17dbba0cab65ea0d1b45e2559794d36d\"" Nov 23 23:05:48.587377 containerd[1501]: time="2025-11-23T23:05:48.587257035Z" level=info msg="connecting to shim 70154369d4e4636be8f8814594e0990d17dbba0cab65ea0d1b45e2559794d36d" address="unix:///run/containerd/s/4a9ccd2a4beadd33d4230b8336e4201b68feeed457b1ff7169c2fa20ba014ea0" protocol=ttrpc version=3 Nov 23 23:05:48.618035 systemd[1]: Started cri-containerd-70154369d4e4636be8f8814594e0990d17dbba0cab65ea0d1b45e2559794d36d.scope - libcontainer container 70154369d4e4636be8f8814594e0990d17dbba0cab65ea0d1b45e2559794d36d. 
Nov 23 23:05:48.649039 containerd[1501]: time="2025-11-23T23:05:48.648928006Z" level=info msg="StartContainer for \"70154369d4e4636be8f8814594e0990d17dbba0cab65ea0d1b45e2559794d36d\" returns successfully" Nov 23 23:05:48.987184 kubelet[2643]: I1123 23:05:48.987029 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-5mhkk" podStartSLOduration=1.650161014 podStartE2EDuration="2.986996244s" podCreationTimestamp="2025-11-23 23:05:46 +0000 UTC" firstStartedPulling="2025-11-23 23:05:47.231684463 +0000 UTC m=+6.401951088" lastFinishedPulling="2025-11-23 23:05:48.568519693 +0000 UTC m=+7.738786318" observedRunningTime="2025-11-23 23:05:48.983350824 +0000 UTC m=+8.153617449" watchObservedRunningTime="2025-11-23 23:05:48.986996244 +0000 UTC m=+8.157262869" Nov 23 23:05:54.285461 sudo[1709]: pam_unix(sudo:session): session closed for user root Nov 23 23:05:54.288679 sshd[1708]: Connection closed by 10.0.0.1 port 52162 Nov 23 23:05:54.289171 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Nov 23 23:05:54.295258 systemd[1]: sshd@6-10.0.0.48:22-10.0.0.1:52162.service: Deactivated successfully. Nov 23 23:05:54.298719 systemd[1]: session-7.scope: Deactivated successfully. Nov 23 23:05:54.299362 systemd[1]: session-7.scope: Consumed 6.991s CPU time, 222.3M memory peak. Nov 23 23:05:54.301675 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit. Nov 23 23:05:54.303866 systemd-logind[1486]: Removed session 7. Nov 23 23:05:55.406854 update_engine[1489]: I20251123 23:05:55.406778 1489 update_attempter.cc:509] Updating boot flags... Nov 23 23:06:01.814010 systemd[1]: Created slice kubepods-besteffort-pod05ca4965_e53d_45e3_8bf9_87a63f6f9957.slice - libcontainer container kubepods-besteffort-pod05ca4965_e53d_45e3_8bf9_87a63f6f9957.slice. Nov 23 23:06:01.842872 kubelet[2643]: I1123 23:06:01.842827 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/05ca4965-e53d-45e3-8bf9-87a63f6f9957-typha-certs\") pod \"calico-typha-64fc879bd8-rbj5j\" (UID: \"05ca4965-e53d-45e3-8bf9-87a63f6f9957\") " pod="calico-system/calico-typha-64fc879bd8-rbj5j" Nov 23 23:06:01.842872 kubelet[2643]: I1123 23:06:01.842877 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05ca4965-e53d-45e3-8bf9-87a63f6f9957-tigera-ca-bundle\") pod \"calico-typha-64fc879bd8-rbj5j\" (UID: \"05ca4965-e53d-45e3-8bf9-87a63f6f9957\") " pod="calico-system/calico-typha-64fc879bd8-rbj5j" Nov 23 23:06:01.843349 kubelet[2643]: I1123 23:06:01.842899 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sphzj\" (UniqueName: \"kubernetes.io/projected/05ca4965-e53d-45e3-8bf9-87a63f6f9957-kube-api-access-sphzj\") pod \"calico-typha-64fc879bd8-rbj5j\" (UID: \"05ca4965-e53d-45e3-8bf9-87a63f6f9957\") " pod="calico-system/calico-typha-64fc879bd8-rbj5j" Nov 23 23:06:02.024441 systemd[1]: Created slice kubepods-besteffort-podaf7425f1_6bf8_45ed_8d5f_59963ec6aeab.slice - libcontainer container kubepods-besteffort-podaf7425f1_6bf8_45ed_8d5f_59963ec6aeab.slice. 
Nov 23 23:06:02.044575 kubelet[2643]: I1123 23:06:02.044529 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/af7425f1-6bf8-45ed-8d5f-59963ec6aeab-cni-net-dir\") pod \"calico-node-8n64q\" (UID: \"af7425f1-6bf8-45ed-8d5f-59963ec6aeab\") " pod="calico-system/calico-node-8n64q" Nov 23 23:06:02.044575 kubelet[2643]: I1123 23:06:02.044575 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af7425f1-6bf8-45ed-8d5f-59963ec6aeab-lib-modules\") pod \"calico-node-8n64q\" (UID: \"af7425f1-6bf8-45ed-8d5f-59963ec6aeab\") " pod="calico-system/calico-node-8n64q" Nov 23 23:06:02.044897 kubelet[2643]: I1123 23:06:02.044593 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/af7425f1-6bf8-45ed-8d5f-59963ec6aeab-var-run-calico\") pod \"calico-node-8n64q\" (UID: \"af7425f1-6bf8-45ed-8d5f-59963ec6aeab\") " pod="calico-system/calico-node-8n64q" Nov 23 23:06:02.044897 kubelet[2643]: I1123 23:06:02.044615 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/af7425f1-6bf8-45ed-8d5f-59963ec6aeab-flexvol-driver-host\") pod \"calico-node-8n64q\" (UID: \"af7425f1-6bf8-45ed-8d5f-59963ec6aeab\") " pod="calico-system/calico-node-8n64q" Nov 23 23:06:02.044897 kubelet[2643]: I1123 23:06:02.044636 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/af7425f1-6bf8-45ed-8d5f-59963ec6aeab-node-certs\") pod \"calico-node-8n64q\" (UID: \"af7425f1-6bf8-45ed-8d5f-59963ec6aeab\") " pod="calico-system/calico-node-8n64q" Nov 23 23:06:02.044897 kubelet[2643]: I1123 23:06:02.044653 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/af7425f1-6bf8-45ed-8d5f-59963ec6aeab-var-lib-calico\") pod \"calico-node-8n64q\" (UID: \"af7425f1-6bf8-45ed-8d5f-59963ec6aeab\") " pod="calico-system/calico-node-8n64q" Nov 23 23:06:02.045004 kubelet[2643]: I1123 23:06:02.044733 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af7425f1-6bf8-45ed-8d5f-59963ec6aeab-xtables-lock\") pod \"calico-node-8n64q\" (UID: \"af7425f1-6bf8-45ed-8d5f-59963ec6aeab\") " pod="calico-system/calico-node-8n64q" Nov 23 23:06:02.045004 kubelet[2643]: I1123 23:06:02.044958 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlpkg\" (UniqueName: \"kubernetes.io/projected/af7425f1-6bf8-45ed-8d5f-59963ec6aeab-kube-api-access-tlpkg\") pod \"calico-node-8n64q\" (UID: \"af7425f1-6bf8-45ed-8d5f-59963ec6aeab\") " pod="calico-system/calico-node-8n64q" Nov 23 23:06:02.045004 kubelet[2643]: I1123 23:06:02.044985 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/af7425f1-6bf8-45ed-8d5f-59963ec6aeab-cni-log-dir\") pod \"calico-node-8n64q\" (UID: \"af7425f1-6bf8-45ed-8d5f-59963ec6aeab\") " pod="calico-system/calico-node-8n64q" Nov 23 23:06:02.045076 kubelet[2643]: I1123 23:06:02.045038 2643 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/af7425f1-6bf8-45ed-8d5f-59963ec6aeab-policysync\") pod \"calico-node-8n64q\" (UID: \"af7425f1-6bf8-45ed-8d5f-59963ec6aeab\") " pod="calico-system/calico-node-8n64q" Nov 23 23:06:02.045076 kubelet[2643]: I1123 23:06:02.045057 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/af7425f1-6bf8-45ed-8d5f-59963ec6aeab-cni-bin-dir\") pod \"calico-node-8n64q\" (UID: \"af7425f1-6bf8-45ed-8d5f-59963ec6aeab\") " pod="calico-system/calico-node-8n64q" Nov 23 23:06:02.045247 kubelet[2643]: I1123 23:06:02.045074 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af7425f1-6bf8-45ed-8d5f-59963ec6aeab-tigera-ca-bundle\") pod \"calico-node-8n64q\" (UID: \"af7425f1-6bf8-45ed-8d5f-59963ec6aeab\") " pod="calico-system/calico-node-8n64q" Nov 23 23:06:02.119229 containerd[1501]: time="2025-11-23T23:06:02.118649502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64fc879bd8-rbj5j,Uid:05ca4965-e53d-45e3-8bf9-87a63f6f9957,Namespace:calico-system,Attempt:0,}" Nov 23 23:06:02.149732 kubelet[2643]: E1123 23:06:02.149685 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.149732 kubelet[2643]: W1123 23:06:02.149721 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.155731 kubelet[2643]: E1123 23:06:02.155625 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.155731 kubelet[2643]: W1123 23:06:02.155682 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.155731 kubelet[2643]: E1123 23:06:02.155728 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.158890 kubelet[2643]: E1123 23:06:02.158826 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:02.169381 kubelet[2643]: E1123 23:06:02.169013 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n2c6t" podUID="727c5b39-08b3-46f8-a2e4-48219c6016f9" Nov 23 23:06:02.175857 kubelet[2643]: E1123 23:06:02.175824 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.175857 kubelet[2643]: W1123 23:06:02.175849 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.175857 kubelet[2643]: E1123 23:06:02.175870 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.191098 containerd[1501]: time="2025-11-23T23:06:02.191017352Z" level=info msg="connecting to shim 1f3cea756a16bdc26c5de1d801bb1073946d8f29f4965c166046322c4881865d" address="unix:///run/containerd/s/7ef3152b364a8c431a380fbac93b509be0103219ee35cabbdcdb8ed4667997b6" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:06:02.236462 kubelet[2643]: E1123 23:06:02.236246 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.236462 kubelet[2643]: W1123 23:06:02.236282 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.245486 kubelet[2643]: E1123 23:06:02.245391 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.246090 kubelet[2643]: E1123 23:06:02.246073 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.246157 kubelet[2643]: W1123 23:06:02.246090 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.246157 kubelet[2643]: E1123 23:06:02.246148 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.246300 kubelet[2643]: E1123 23:06:02.246290 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.246300 kubelet[2643]: W1123 23:06:02.246301 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.246358 kubelet[2643]: E1123 23:06:02.246309 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:02.246462 kubelet[2643]: E1123 23:06:02.246452 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.246462 kubelet[2643]: W1123 23:06:02.246462 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.246525 kubelet[2643]: E1123 23:06:02.246470 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.246625 kubelet[2643]: E1123 23:06:02.246615 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.246625 kubelet[2643]: W1123 23:06:02.246625 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.246680 kubelet[2643]: E1123 23:06:02.246634 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.246848 kubelet[2643]: E1123 23:06:02.246822 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.246848 kubelet[2643]: W1123 23:06:02.246835 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.246848 kubelet[2643]: E1123 23:06:02.246845 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.247032 kubelet[2643]: E1123 23:06:02.247008 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.247032 kubelet[2643]: W1123 23:06:02.247019 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.247032 kubelet[2643]: E1123 23:06:02.247029 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.247503 kubelet[2643]: E1123 23:06:02.247457 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.247547 kubelet[2643]: W1123 23:06:02.247532 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.247580 kubelet[2643]: E1123 23:06:02.247547 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:02.247985 kubelet[2643]: E1123 23:06:02.247963 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.248055 kubelet[2643]: W1123 23:06:02.247981 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.248055 kubelet[2643]: E1123 23:06:02.248013 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.248055 kubelet[2643]: I1123 23:06:02.248041 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/727c5b39-08b3-46f8-a2e4-48219c6016f9-kubelet-dir\") pod \"csi-node-driver-n2c6t\" (UID: \"727c5b39-08b3-46f8-a2e4-48219c6016f9\") " pod="calico-system/csi-node-driver-n2c6t" Nov 23 23:06:02.248257 kubelet[2643]: E1123 23:06:02.248241 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.248257 kubelet[2643]: W1123 23:06:02.248254 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.248433 kubelet[2643]: E1123 23:06:02.248267 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.248433 kubelet[2643]: I1123 23:06:02.248282 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/727c5b39-08b3-46f8-a2e4-48219c6016f9-registration-dir\") pod \"csi-node-driver-n2c6t\" (UID: \"727c5b39-08b3-46f8-a2e4-48219c6016f9\") " pod="calico-system/csi-node-driver-n2c6t" Nov 23 23:06:02.248793 kubelet[2643]: E1123 23:06:02.248638 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.248793 kubelet[2643]: W1123 23:06:02.248652 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.248793 kubelet[2643]: E1123 23:06:02.248667 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.249094 kubelet[2643]: E1123 23:06:02.248989 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.249094 kubelet[2643]: W1123 23:06:02.249050 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.249094 kubelet[2643]: E1123 23:06:02.249067 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:02.249572 kubelet[2643]: E1123 23:06:02.249553 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.249622 kubelet[2643]: W1123 23:06:02.249593 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.249622 kubelet[2643]: E1123 23:06:02.249616 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.250304 kubelet[2643]: E1123 23:06:02.250283 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.250360 kubelet[2643]: W1123 23:06:02.250304 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.250450 kubelet[2643]: E1123 23:06:02.250430 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.250722 kubelet[2643]: E1123 23:06:02.250700 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.250847 kubelet[2643]: W1123 23:06:02.250743 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.250847 kubelet[2643]: E1123 23:06:02.250773 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.251032 systemd[1]: Started cri-containerd-1f3cea756a16bdc26c5de1d801bb1073946d8f29f4965c166046322c4881865d.scope - libcontainer container 1f3cea756a16bdc26c5de1d801bb1073946d8f29f4965c166046322c4881865d. Nov 23 23:06:02.251388 kubelet[2643]: E1123 23:06:02.251362 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.251388 kubelet[2643]: W1123 23:06:02.251384 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.251470 kubelet[2643]: E1123 23:06:02.251402 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:02.253503 kubelet[2643]: E1123 23:06:02.253479 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.253503 kubelet[2643]: W1123 23:06:02.253503 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.253626 kubelet[2643]: E1123 23:06:02.253525 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.254115 kubelet[2643]: E1123 23:06:02.254098 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.254115 kubelet[2643]: W1123 23:06:02.254115 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.254200 kubelet[2643]: E1123 23:06:02.254127 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.254412 kubelet[2643]: E1123 23:06:02.254381 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.254412 kubelet[2643]: W1123 23:06:02.254402 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.254412 kubelet[2643]: E1123 23:06:02.254411 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.254563 kubelet[2643]: E1123 23:06:02.254552 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.254598 kubelet[2643]: W1123 23:06:02.254569 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.254598 kubelet[2643]: E1123 23:06:02.254579 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.254943 kubelet[2643]: E1123 23:06:02.254858 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.254943 kubelet[2643]: W1123 23:06:02.254888 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.254943 kubelet[2643]: E1123 23:06:02.254900 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:02.255987 kubelet[2643]: E1123 23:06:02.255648 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.255987 kubelet[2643]: W1123 23:06:02.255673 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.255987 kubelet[2643]: E1123 23:06:02.255691 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.255987 kubelet[2643]: E1123 23:06:02.255962 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.255987 kubelet[2643]: W1123 23:06:02.255972 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.255987 kubelet[2643]: E1123 23:06:02.255981 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.256248 kubelet[2643]: E1123 23:06:02.256124 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.256248 kubelet[2643]: W1123 23:06:02.256133 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.256248 kubelet[2643]: E1123 23:06:02.256141 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.256332 kubelet[2643]: E1123 23:06:02.256296 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.256332 kubelet[2643]: W1123 23:06:02.256304 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.256332 kubelet[2643]: E1123 23:06:02.256313 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.256543 kubelet[2643]: E1123 23:06:02.256452 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.256543 kubelet[2643]: W1123 23:06:02.256499 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.256799 kubelet[2643]: E1123 23:06:02.256508 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:02.298585 containerd[1501]: time="2025-11-23T23:06:02.298534015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64fc879bd8-rbj5j,Uid:05ca4965-e53d-45e3-8bf9-87a63f6f9957,Namespace:calico-system,Attempt:0,} returns sandbox id \"1f3cea756a16bdc26c5de1d801bb1073946d8f29f4965c166046322c4881865d\"" Nov 23 23:06:02.310311 containerd[1501]: time="2025-11-23T23:06:02.310175910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 23 23:06:02.328609 containerd[1501]: time="2025-11-23T23:06:02.328565279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8n64q,Uid:af7425f1-6bf8-45ed-8d5f-59963ec6aeab,Namespace:calico-system,Attempt:0,}" Nov 23 23:06:02.349424 kubelet[2643]: E1123 23:06:02.349388 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.349774 kubelet[2643]: W1123 23:06:02.349596 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.349774 kubelet[2643]: E1123 23:06:02.349627 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.349774 kubelet[2643]: I1123 23:06:02.349683 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6mkm\" (UniqueName: \"kubernetes.io/projected/727c5b39-08b3-46f8-a2e4-48219c6016f9-kube-api-access-d6mkm\") pod \"csi-node-driver-n2c6t\" (UID: \"727c5b39-08b3-46f8-a2e4-48219c6016f9\") " pod="calico-system/csi-node-driver-n2c6t" Nov 23 23:06:02.350038 kubelet[2643]: E1123 23:06:02.350020 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.350166 kubelet[2643]: W1123 23:06:02.350094 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.350166 kubelet[2643]: E1123 23:06:02.350123 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.350433 kubelet[2643]: E1123 23:06:02.350413 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.350433 kubelet[2643]: W1123 23:06:02.350432 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.350588 kubelet[2643]: E1123 23:06:02.350454 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:02.350928 kubelet[2643]: E1123 23:06:02.350850 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.350928 kubelet[2643]: W1123 23:06:02.350866 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.350928 kubelet[2643]: E1123 23:06:02.350884 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.353057 kubelet[2643]: E1123 23:06:02.352969 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.353057 kubelet[2643]: W1123 23:06:02.352997 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.353057 kubelet[2643]: E1123 23:06:02.353028 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.353652 kubelet[2643]: E1123 23:06:02.353452 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.353652 kubelet[2643]: W1123 23:06:02.353477 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.353652 kubelet[2643]: E1123 23:06:02.353537 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.353982 kubelet[2643]: E1123 23:06:02.353700 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.353982 kubelet[2643]: W1123 23:06:02.353710 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.353982 kubelet[2643]: E1123 23:06:02.353788 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.354129 kubelet[2643]: E1123 23:06:02.354110 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.354216 kubelet[2643]: W1123 23:06:02.354197 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.354333 kubelet[2643]: E1123 23:06:02.354273 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:02.354708 kubelet[2643]: E1123 23:06:02.354648 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.354708 kubelet[2643]: W1123 23:06:02.354687 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.354885 kubelet[2643]: E1123 23:06:02.354787 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.356959 kubelet[2643]: I1123 23:06:02.356810 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/727c5b39-08b3-46f8-a2e4-48219c6016f9-varrun\") pod \"csi-node-driver-n2c6t\" (UID: \"727c5b39-08b3-46f8-a2e4-48219c6016f9\") " pod="calico-system/csi-node-driver-n2c6t" Nov 23 23:06:02.357702 containerd[1501]: time="2025-11-23T23:06:02.357153378Z" level=info msg="connecting to shim a64c5ab01df4099eb5d5e3ddd2470ecc4b2a0a65357339537b6aa36fefbfdfac" address="unix:///run/containerd/s/9d2d41c876883e78715339d359d0b2232a0de5c1b1cb6ef8675d76aafd5b7aba" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:06:02.357772 kubelet[2643]: E1123 23:06:02.357709 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.357772 kubelet[2643]: W1123 23:06:02.357746 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.357882 kubelet[2643]: E1123 23:06:02.357779 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.357992 kubelet[2643]: E1123 23:06:02.357952 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.357992 kubelet[2643]: W1123 23:06:02.357979 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.358055 kubelet[2643]: E1123 23:06:02.358017 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.358309 kubelet[2643]: E1123 23:06:02.358197 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.358622 kubelet[2643]: W1123 23:06:02.358596 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.358733 kubelet[2643]: E1123 23:06:02.358706 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:02.359511 kubelet[2643]: E1123 23:06:02.359491 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.359511 kubelet[2643]: W1123 23:06:02.359508 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.359826 kubelet[2643]: E1123 23:06:02.359795 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.360949 kubelet[2643]: E1123 23:06:02.360918 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.360949 kubelet[2643]: W1123 23:06:02.360939 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.361103 kubelet[2643]: E1123 23:06:02.361033 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.361103 kubelet[2643]: I1123 23:06:02.361072 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/727c5b39-08b3-46f8-a2e4-48219c6016f9-socket-dir\") pod \"csi-node-driver-n2c6t\" (UID: \"727c5b39-08b3-46f8-a2e4-48219c6016f9\") " pod="calico-system/csi-node-driver-n2c6t" Nov 23 23:06:02.361272 kubelet[2643]: E1123 23:06:02.361256 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.361311 kubelet[2643]: W1123 23:06:02.361269 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.361352 kubelet[2643]: E1123 23:06:02.361321 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.362410 kubelet[2643]: E1123 23:06:02.362385 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.362410 kubelet[2643]: W1123 23:06:02.362404 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.362525 kubelet[2643]: E1123 23:06:02.362491 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:02.363721 kubelet[2643]: E1123 23:06:02.363694 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.363721 kubelet[2643]: W1123 23:06:02.363714 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.363838 kubelet[2643]: E1123 23:06:02.363783 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.364953 kubelet[2643]: E1123 23:06:02.364918 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.364953 kubelet[2643]: W1123 23:06:02.364944 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.365049 kubelet[2643]: E1123 23:06:02.364970 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.365575 kubelet[2643]: E1123 23:06:02.365549 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.365575 kubelet[2643]: W1123 23:06:02.365569 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.365670 kubelet[2643]: E1123 23:06:02.365585 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.386067 systemd[1]: Started cri-containerd-a64c5ab01df4099eb5d5e3ddd2470ecc4b2a0a65357339537b6aa36fefbfdfac.scope - libcontainer container a64c5ab01df4099eb5d5e3ddd2470ecc4b2a0a65357339537b6aa36fefbfdfac. Nov 23 23:06:02.427947 containerd[1501]: time="2025-11-23T23:06:02.427876740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8n64q,Uid:af7425f1-6bf8-45ed-8d5f-59963ec6aeab,Namespace:calico-system,Attempt:0,} returns sandbox id \"a64c5ab01df4099eb5d5e3ddd2470ecc4b2a0a65357339537b6aa36fefbfdfac\"" Nov 23 23:06:02.464131 kubelet[2643]: E1123 23:06:02.464039 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.464131 kubelet[2643]: W1123 23:06:02.464066 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.464131 kubelet[2643]: E1123 23:06:02.464095 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:02.465012 kubelet[2643]: E1123 23:06:02.464319 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.465012 kubelet[2643]: W1123 23:06:02.464332 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.465012 kubelet[2643]: E1123 23:06:02.464343 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.465012 kubelet[2643]: E1123 23:06:02.464522 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.465012 kubelet[2643]: W1123 23:06:02.464531 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.465012 kubelet[2643]: E1123 23:06:02.464545 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.465012 kubelet[2643]: E1123 23:06:02.464809 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.465012 kubelet[2643]: W1123 23:06:02.464821 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.465012 kubelet[2643]: E1123 23:06:02.464842 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.465465 kubelet[2643]: E1123 23:06:02.465344 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.465465 kubelet[2643]: W1123 23:06:02.465358 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.465465 kubelet[2643]: E1123 23:06:02.465377 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.465626 kubelet[2643]: E1123 23:06:02.465614 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.465746 kubelet[2643]: W1123 23:06:02.465662 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.465746 kubelet[2643]: E1123 23:06:02.465718 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:02.466511 kubelet[2643]: E1123 23:06:02.466118 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.466774 kubelet[2643]: W1123 23:06:02.466616 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.466774 kubelet[2643]: E1123 23:06:02.466659 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.467073 kubelet[2643]: E1123 23:06:02.467055 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.467275 kubelet[2643]: W1123 23:06:02.467173 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.467275 kubelet[2643]: E1123 23:06:02.467229 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.467612 kubelet[2643]: E1123 23:06:02.467509 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.467612 kubelet[2643]: W1123 23:06:02.467524 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.467612 kubelet[2643]: E1123 23:06:02.467562 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.468021 kubelet[2643]: E1123 23:06:02.468005 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.468267 kubelet[2643]: W1123 23:06:02.468247 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.468496 kubelet[2643]: E1123 23:06:02.468369 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.468875 kubelet[2643]: E1123 23:06:02.468808 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.469082 kubelet[2643]: W1123 23:06:02.468964 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.469082 kubelet[2643]: E1123 23:06:02.469013 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:02.469239 kubelet[2643]: E1123 23:06:02.469225 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.469299 kubelet[2643]: W1123 23:06:02.469288 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.469358 kubelet[2643]: E1123 23:06:02.469347 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.469649 kubelet[2643]: E1123 23:06:02.469550 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.469649 kubelet[2643]: W1123 23:06:02.469562 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.469649 kubelet[2643]: E1123 23:06:02.469580 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.470535 kubelet[2643]: E1123 23:06:02.470489 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.470535 kubelet[2643]: W1123 23:06:02.470509 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.470535 kubelet[2643]: E1123 23:06:02.470527 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.471060 kubelet[2643]: E1123 23:06:02.470999 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.471060 kubelet[2643]: W1123 23:06:02.471036 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.471228 kubelet[2643]: E1123 23:06:02.471188 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:02.481973 kubelet[2643]: E1123 23:06:02.481942 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:02.481973 kubelet[2643]: W1123 23:06:02.481964 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:02.481973 kubelet[2643]: E1123 23:06:02.481983 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:03.273825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3365164791.mount: Deactivated successfully. Nov 23 23:06:03.705412 containerd[1501]: time="2025-11-23T23:06:03.705284952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:06:03.707213 containerd[1501]: time="2025-11-23T23:06:03.706947982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 23 23:06:03.708576 containerd[1501]: time="2025-11-23T23:06:03.708526355Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:06:03.712501 containerd[1501]: time="2025-11-23T23:06:03.712447620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:06:03.714243 containerd[1501]: time="2025-11-23T23:06:03.713322724Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.403103405s" Nov 23 23:06:03.714243 containerd[1501]: time="2025-11-23T23:06:03.713357772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 23 23:06:03.718798 containerd[1501]: time="2025-11-23T23:06:03.714995996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 23 23:06:03.738006 containerd[1501]: time="2025-11-23T23:06:03.737948909Z" level=info msg="CreateContainer within sandbox \"1f3cea756a16bdc26c5de1d801bb1073946d8f29f4965c166046322c4881865d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 23 23:06:03.746538 containerd[1501]: time="2025-11-23T23:06:03.746468542Z" level=info msg="Container a34d96f484beb793c7b9e8b1b9d3f87d8f31c4cc9f7f18c47e4b646f68ae22f9: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:06:03.779453 containerd[1501]: time="2025-11-23T23:06:03.779313297Z" level=info msg="CreateContainer within sandbox \"1f3cea756a16bdc26c5de1d801bb1073946d8f29f4965c166046322c4881865d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a34d96f484beb793c7b9e8b1b9d3f87d8f31c4cc9f7f18c47e4b646f68ae22f9\"" Nov 23 23:06:03.779983 containerd[1501]: time="2025-11-23T23:06:03.779948310Z" level=info msg="StartContainer for \"a34d96f484beb793c7b9e8b1b9d3f87d8f31c4cc9f7f18c47e4b646f68ae22f9\"" Nov 23 23:06:03.781801 containerd[1501]: time="2025-11-23T23:06:03.781723444Z" level=info msg="connecting to shim a34d96f484beb793c7b9e8b1b9d3f87d8f31c4cc9f7f18c47e4b646f68ae22f9" address="unix:///run/containerd/s/7ef3152b364a8c431a380fbac93b509be0103219ee35cabbdcdb8ed4667997b6" protocol=ttrpc version=3 Nov 23 23:06:03.811022 systemd[1]: Started cri-containerd-a34d96f484beb793c7b9e8b1b9d3f87d8f31c4cc9f7f18c47e4b646f68ae22f9.scope - libcontainer container a34d96f484beb793c7b9e8b1b9d3f87d8f31c4cc9f7f18c47e4b646f68ae22f9. 
Nov 23 23:06:03.867016 containerd[1501]: time="2025-11-23T23:06:03.866956427Z" level=info msg="StartContainer for \"a34d96f484beb793c7b9e8b1b9d3f87d8f31c4cc9f7f18c47e4b646f68ae22f9\" returns successfully" Nov 23 23:06:03.935243 kubelet[2643]: E1123 23:06:03.935176 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n2c6t" podUID="727c5b39-08b3-46f8-a2e4-48219c6016f9" Nov 23 23:06:04.065667 kubelet[2643]: I1123 23:06:04.065580 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64fc879bd8-rbj5j" podStartSLOduration=1.6581839600000001 podStartE2EDuration="3.065561705s" podCreationTimestamp="2025-11-23 23:06:01 +0000 UTC" firstStartedPulling="2025-11-23 23:06:02.307364998 +0000 UTC m=+21.477631583" lastFinishedPulling="2025-11-23 23:06:03.714742703 +0000 UTC m=+22.885009328" observedRunningTime="2025-11-23 23:06:04.065429799 +0000 UTC m=+23.235696424" watchObservedRunningTime="2025-11-23 23:06:04.065561705 +0000 UTC m=+23.235828290" Nov 23 23:06:04.071817 kubelet[2643]: E1123 23:06:04.071768 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.071817 kubelet[2643]: W1123 23:06:04.071803 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.071817 kubelet[2643]: E1123 23:06:04.071830 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.073132 kubelet[2643]: E1123 23:06:04.073088 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.073224 kubelet[2643]: W1123 23:06:04.073129 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.073266 kubelet[2643]: E1123 23:06:04.073194 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.073966 kubelet[2643]: E1123 23:06:04.073924 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.073966 kubelet[2643]: W1123 23:06:04.073952 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.074072 kubelet[2643]: E1123 23:06:04.073979 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:04.074943 kubelet[2643]: E1123 23:06:04.074848 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.074943 kubelet[2643]: W1123 23:06:04.074934 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.075097 kubelet[2643]: E1123 23:06:04.074955 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.077234 kubelet[2643]: E1123 23:06:04.077178 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.077234 kubelet[2643]: W1123 23:06:04.077208 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.077234 kubelet[2643]: E1123 23:06:04.077233 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.077539 kubelet[2643]: E1123 23:06:04.077515 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.077539 kubelet[2643]: W1123 23:06:04.077531 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.077687 kubelet[2643]: E1123 23:06:04.077545 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.077771 kubelet[2643]: E1123 23:06:04.077739 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.077810 kubelet[2643]: W1123 23:06:04.077778 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.077810 kubelet[2643]: E1123 23:06:04.077789 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.078646 kubelet[2643]: E1123 23:06:04.078616 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.078646 kubelet[2643]: W1123 23:06:04.078636 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.078646 kubelet[2643]: E1123 23:06:04.078653 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:04.079050 kubelet[2643]: E1123 23:06:04.079020 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.079050 kubelet[2643]: W1123 23:06:04.079034 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.079050 kubelet[2643]: E1123 23:06:04.079046 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.079279 kubelet[2643]: E1123 23:06:04.079257 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.079320 kubelet[2643]: W1123 23:06:04.079294 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.079320 kubelet[2643]: E1123 23:06:04.079308 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.080250 kubelet[2643]: E1123 23:06:04.080219 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.080501 kubelet[2643]: W1123 23:06:04.080470 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.080501 kubelet[2643]: E1123 23:06:04.080496 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.083088 kubelet[2643]: E1123 23:06:04.082980 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.083088 kubelet[2643]: W1123 23:06:04.083078 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.083088 kubelet[2643]: E1123 23:06:04.083097 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.083647 kubelet[2643]: E1123 23:06:04.083575 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.083647 kubelet[2643]: W1123 23:06:04.083593 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.083647 kubelet[2643]: E1123 23:06:04.083606 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:04.084160 kubelet[2643]: E1123 23:06:04.084127 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.084160 kubelet[2643]: W1123 23:06:04.084143 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.084374 kubelet[2643]: E1123 23:06:04.084270 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.084724 kubelet[2643]: E1123 23:06:04.084697 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.084724 kubelet[2643]: W1123 23:06:04.084717 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.085146 kubelet[2643]: E1123 23:06:04.084730 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.085371 kubelet[2643]: E1123 23:06:04.085313 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.085507 kubelet[2643]: W1123 23:06:04.085491 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.085654 kubelet[2643]: E1123 23:06:04.085620 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.086274 kubelet[2643]: E1123 23:06:04.086069 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.086274 kubelet[2643]: W1123 23:06:04.086087 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.086274 kubelet[2643]: E1123 23:06:04.086118 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.086497 kubelet[2643]: E1123 23:06:04.086453 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.086497 kubelet[2643]: W1123 23:06:04.086496 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.086675 kubelet[2643]: E1123 23:06:04.086515 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:04.087203 kubelet[2643]: E1123 23:06:04.087177 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.087203 kubelet[2643]: W1123 23:06:04.087196 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.087354 kubelet[2643]: E1123 23:06:04.087221 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.087486 kubelet[2643]: E1123 23:06:04.087448 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.087486 kubelet[2643]: W1123 23:06:04.087466 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.087629 kubelet[2643]: E1123 23:06:04.087542 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.087629 kubelet[2643]: E1123 23:06:04.087616 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.087629 kubelet[2643]: W1123 23:06:04.087625 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.087804 kubelet[2643]: E1123 23:06:04.087777 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.088477 kubelet[2643]: E1123 23:06:04.088446 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.088477 kubelet[2643]: W1123 23:06:04.088473 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.088801 kubelet[2643]: E1123 23:06:04.088586 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.089079 kubelet[2643]: E1123 23:06:04.088841 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.089079 kubelet[2643]: W1123 23:06:04.088900 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.089079 kubelet[2643]: E1123 23:06:04.088925 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:04.089319 kubelet[2643]: E1123 23:06:04.089275 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.089319 kubelet[2643]: W1123 23:06:04.089299 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.089319 kubelet[2643]: E1123 23:06:04.089317 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.090797 kubelet[2643]: E1123 23:06:04.090726 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.090797 kubelet[2643]: W1123 23:06:04.090769 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.090797 kubelet[2643]: E1123 23:06:04.090797 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.091410 kubelet[2643]: E1123 23:06:04.091381 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.091410 kubelet[2643]: W1123 23:06:04.091404 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.092825 kubelet[2643]: E1123 23:06:04.091679 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.093184 kubelet[2643]: E1123 23:06:04.093146 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.093184 kubelet[2643]: W1123 23:06:04.093171 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.093429 kubelet[2643]: E1123 23:06:04.093395 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.094453 kubelet[2643]: E1123 23:06:04.094420 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.094453 kubelet[2643]: W1123 23:06:04.094455 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.094604 kubelet[2643]: E1123 23:06:04.094514 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:04.094768 kubelet[2643]: E1123 23:06:04.094735 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.094801 kubelet[2643]: W1123 23:06:04.094773 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.094844 kubelet[2643]: E1123 23:06:04.094817 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.094966 kubelet[2643]: E1123 23:06:04.094951 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.094995 kubelet[2643]: W1123 23:06:04.094964 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.094995 kubelet[2643]: E1123 23:06:04.094978 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.096638 kubelet[2643]: E1123 23:06:04.096602 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.096638 kubelet[2643]: W1123 23:06:04.096630 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.096763 kubelet[2643]: E1123 23:06:04.096653 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.100845 kubelet[2643]: E1123 23:06:04.100806 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.100845 kubelet[2643]: W1123 23:06:04.100833 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.101022 kubelet[2643]: E1123 23:06:04.100859 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:04.102453 kubelet[2643]: E1123 23:06:04.102413 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:04.102453 kubelet[2643]: W1123 23:06:04.102443 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:04.102679 kubelet[2643]: E1123 23:06:04.102653 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:05.000080 containerd[1501]: time="2025-11-23T23:06:04.999493471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:06:05.000544 containerd[1501]: time="2025-11-23T23:06:05.000244459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 23 23:06:05.001248 containerd[1501]: time="2025-11-23T23:06:05.001212805Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:06:05.003455 containerd[1501]: time="2025-11-23T23:06:05.003373124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:06:05.004027 containerd[1501]: time="2025-11-23T23:06:05.004003601Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.288935469s" Nov 23 23:06:05.004070 containerd[1501]: time="2025-11-23T23:06:05.004057171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 23 23:06:05.007179 containerd[1501]: time="2025-11-23T23:06:05.007118257Z" level=info msg="CreateContainer within sandbox \"a64c5ab01df4099eb5d5e3ddd2470ecc4b2a0a65357339537b6aa36fefbfdfac\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 23 23:06:05.036658 containerd[1501]: time="2025-11-23T23:06:05.036608154Z" level=info msg="Container 8f8a196a6d1bc39c4fc88b690e0eb2abd8dc440f485d88d8721850f9c5a50961: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:06:05.038147 kubelet[2643]: I1123 23:06:05.038071 2643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 23:06:05.038161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1888588714.mount: Deactivated successfully. 
Nov 23 23:06:05.047897 containerd[1501]: time="2025-11-23T23:06:05.047788343Z" level=info msg="CreateContainer within sandbox \"a64c5ab01df4099eb5d5e3ddd2470ecc4b2a0a65357339537b6aa36fefbfdfac\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8f8a196a6d1bc39c4fc88b690e0eb2abd8dc440f485d88d8721850f9c5a50961\"" Nov 23 23:06:05.048814 containerd[1501]: time="2025-11-23T23:06:05.048698511Z" level=info msg="StartContainer for \"8f8a196a6d1bc39c4fc88b690e0eb2abd8dc440f485d88d8721850f9c5a50961\"" Nov 23 23:06:05.052085 containerd[1501]: time="2025-11-23T23:06:05.052029167Z" level=info msg="connecting to shim 8f8a196a6d1bc39c4fc88b690e0eb2abd8dc440f485d88d8721850f9c5a50961" address="unix:///run/containerd/s/9d2d41c876883e78715339d359d0b2232a0de5c1b1cb6ef8675d76aafd5b7aba" protocol=ttrpc version=3 Nov 23 23:06:05.078987 systemd[1]: Started cri-containerd-8f8a196a6d1bc39c4fc88b690e0eb2abd8dc440f485d88d8721850f9c5a50961.scope - libcontainer container 8f8a196a6d1bc39c4fc88b690e0eb2abd8dc440f485d88d8721850f9c5a50961. Nov 23 23:06:05.093508 kubelet[2643]: E1123 23:06:05.093477 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.093687 kubelet[2643]: W1123 23:06:05.093670 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.093768 kubelet[2643]: E1123 23:06:05.093738 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.094018 kubelet[2643]: E1123 23:06:05.094005 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.094112 kubelet[2643]: W1123 23:06:05.094099 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.094173 kubelet[2643]: E1123 23:06:05.094162 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.094398 kubelet[2643]: E1123 23:06:05.094386 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.094475 kubelet[2643]: W1123 23:06:05.094464 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.094531 kubelet[2643]: E1123 23:06:05.094521 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:05.094760 kubelet[2643]: E1123 23:06:05.094739 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.094925 kubelet[2643]: W1123 23:06:05.094841 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.094925 kubelet[2643]: E1123 23:06:05.094860 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.095343 kubelet[2643]: E1123 23:06:05.095268 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.095343 kubelet[2643]: W1123 23:06:05.095280 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.095343 kubelet[2643]: E1123 23:06:05.095291 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.095666 kubelet[2643]: E1123 23:06:05.095604 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.095666 kubelet[2643]: W1123 23:06:05.095617 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.095666 kubelet[2643]: E1123 23:06:05.095627 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.096027 kubelet[2643]: E1123 23:06:05.095962 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.096027 kubelet[2643]: W1123 23:06:05.095975 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.096027 kubelet[2643]: E1123 23:06:05.095987 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.096272 kubelet[2643]: E1123 23:06:05.096261 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.096364 kubelet[2643]: W1123 23:06:05.096350 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.096429 kubelet[2643]: E1123 23:06:05.096419 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:05.096737 kubelet[2643]: E1123 23:06:05.096724 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.096888 kubelet[2643]: W1123 23:06:05.096833 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.096888 kubelet[2643]: E1123 23:06:05.096851 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.097087 kubelet[2643]: E1123 23:06:05.097074 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.097266 kubelet[2643]: W1123 23:06:05.097166 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.097266 kubelet[2643]: E1123 23:06:05.097182 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.097423 kubelet[2643]: E1123 23:06:05.097411 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.097517 kubelet[2643]: W1123 23:06:05.097463 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.097517 kubelet[2643]: E1123 23:06:05.097476 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.097761 kubelet[2643]: E1123 23:06:05.097720 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.097761 kubelet[2643]: W1123 23:06:05.097732 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.097761 kubelet[2643]: E1123 23:06:05.097742 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.098145 kubelet[2643]: E1123 23:06:05.098131 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.098278 kubelet[2643]: W1123 23:06:05.098214 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.098278 kubelet[2643]: E1123 23:06:05.098230 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:05.098528 kubelet[2643]: E1123 23:06:05.098517 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.098696 kubelet[2643]: W1123 23:06:05.098592 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.098696 kubelet[2643]: E1123 23:06:05.098609 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.098884 kubelet[2643]: E1123 23:06:05.098871 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.098956 kubelet[2643]: W1123 23:06:05.098945 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.099004 kubelet[2643]: E1123 23:06:05.098995 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.099343 kubelet[2643]: E1123 23:06:05.099331 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.099450 kubelet[2643]: W1123 23:06:05.099416 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.099450 kubelet[2643]: E1123 23:06:05.099433 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.099806 kubelet[2643]: E1123 23:06:05.099777 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.099806 kubelet[2643]: W1123 23:06:05.099792 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.100056 kubelet[2643]: E1123 23:06:05.099901 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.100337 kubelet[2643]: E1123 23:06:05.100323 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.100581 kubelet[2643]: W1123 23:06:05.100408 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.100581 kubelet[2643]: E1123 23:06:05.100428 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:05.100716 kubelet[2643]: E1123 23:06:05.100701 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.100801 kubelet[2643]: W1123 23:06:05.100789 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.100874 kubelet[2643]: E1123 23:06:05.100864 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.101114 kubelet[2643]: E1123 23:06:05.101099 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.101312 kubelet[2643]: W1123 23:06:05.101187 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.101312 kubelet[2643]: E1123 23:06:05.101208 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.101444 kubelet[2643]: E1123 23:06:05.101432 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.101502 kubelet[2643]: W1123 23:06:05.101492 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.101644 kubelet[2643]: E1123 23:06:05.101602 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.101789 kubelet[2643]: E1123 23:06:05.101777 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.101900 kubelet[2643]: W1123 23:06:05.101852 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.101936 kubelet[2643]: E1123 23:06:05.101898 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.102164 kubelet[2643]: E1123 23:06:05.102139 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.102164 kubelet[2643]: W1123 23:06:05.102150 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.102314 kubelet[2643]: E1123 23:06:05.102253 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:05.102529 kubelet[2643]: E1123 23:06:05.102517 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.102665 kubelet[2643]: W1123 23:06:05.102651 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.102741 kubelet[2643]: E1123 23:06:05.102730 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.103043 kubelet[2643]: E1123 23:06:05.103029 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.103101 kubelet[2643]: W1123 23:06:05.103090 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.103160 kubelet[2643]: E1123 23:06:05.103151 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.103438 kubelet[2643]: E1123 23:06:05.103424 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.103521 kubelet[2643]: W1123 23:06:05.103507 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.103612 kubelet[2643]: E1123 23:06:05.103591 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.103852 kubelet[2643]: E1123 23:06:05.103838 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.103946 kubelet[2643]: W1123 23:06:05.103906 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.103993 kubelet[2643]: E1123 23:06:05.103944 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.104241 kubelet[2643]: E1123 23:06:05.104214 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.104241 kubelet[2643]: W1123 23:06:05.104228 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.104408 kubelet[2643]: E1123 23:06:05.104355 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:05.105060 kubelet[2643]: E1123 23:06:05.104916 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.105060 kubelet[2643]: W1123 23:06:05.104931 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.105060 kubelet[2643]: E1123 23:06:05.104947 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.105306 kubelet[2643]: E1123 23:06:05.105278 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.105379 kubelet[2643]: W1123 23:06:05.105365 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.105567 kubelet[2643]: E1123 23:06:05.105429 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.105694 kubelet[2643]: E1123 23:06:05.105680 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.105770 kubelet[2643]: W1123 23:06:05.105743 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.105856 kubelet[2643]: E1123 23:06:05.105836 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.106142 kubelet[2643]: E1123 23:06:05.106103 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.106142 kubelet[2643]: W1123 23:06:05.106135 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.106340 kubelet[2643]: E1123 23:06:05.106159 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:06:05.106403 kubelet[2643]: E1123 23:06:05.106385 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:06:05.106403 kubelet[2643]: W1123 23:06:05.106400 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:06:05.106455 kubelet[2643]: E1123 23:06:05.106411 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:06:05.152722 containerd[1501]: time="2025-11-23T23:06:05.152678351Z" level=info msg="StartContainer for \"8f8a196a6d1bc39c4fc88b690e0eb2abd8dc440f485d88d8721850f9c5a50961\" returns successfully" Nov 23 23:06:05.168001 systemd[1]: cri-containerd-8f8a196a6d1bc39c4fc88b690e0eb2abd8dc440f485d88d8721850f9c5a50961.scope: Deactivated successfully. Nov 23 23:06:05.185054 containerd[1501]: time="2025-11-23T23:06:05.184996090Z" level=info msg="received container exit event container_id:\"8f8a196a6d1bc39c4fc88b690e0eb2abd8dc440f485d88d8721850f9c5a50961\" id:\"8f8a196a6d1bc39c4fc88b690e0eb2abd8dc440f485d88d8721850f9c5a50961\" pid:3351 exited_at:{seconds:1763939165 nanos:179349126}" Nov 23 23:06:05.249606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f8a196a6d1bc39c4fc88b690e0eb2abd8dc440f485d88d8721850f9c5a50961-rootfs.mount: Deactivated successfully. Nov 23 23:06:05.935572 kubelet[2643]: E1123 23:06:05.935502 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n2c6t" podUID="727c5b39-08b3-46f8-a2e4-48219c6016f9" Nov 23 23:06:06.043377 containerd[1501]: time="2025-11-23T23:06:06.043338584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 23 23:06:07.935786 kubelet[2643]: E1123 23:06:07.935678 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n2c6t" podUID="727c5b39-08b3-46f8-a2e4-48219c6016f9" Nov 23 23:06:09.935355 kubelet[2643]: E1123 23:06:09.935300 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-n2c6t" podUID="727c5b39-08b3-46f8-a2e4-48219c6016f9" Nov 23 23:06:10.243109 containerd[1501]: time="2025-11-23T23:06:10.242454222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:06:10.243109 containerd[1501]: time="2025-11-23T23:06:10.242965091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 23 23:06:10.244303 containerd[1501]: time="2025-11-23T23:06:10.244240622Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:06:10.246730 containerd[1501]: time="2025-11-23T23:06:10.246685389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:06:10.247481 containerd[1501]: time="2025-11-23T23:06:10.247350918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", 
size \"67295507\" in 4.203970607s" Nov 23 23:06:10.247481 containerd[1501]: time="2025-11-23T23:06:10.247387723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 23 23:06:10.257319 containerd[1501]: time="2025-11-23T23:06:10.257271888Z" level=info msg="CreateContainer within sandbox \"a64c5ab01df4099eb5d5e3ddd2470ecc4b2a0a65357339537b6aa36fefbfdfac\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 23 23:06:10.271974 containerd[1501]: time="2025-11-23T23:06:10.271910410Z" level=info msg="Container 4f2f20f74b5b176fc4d76327a48e9e5809b2ef559d45bb103235a027fc6ed3d7: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:06:10.283939 containerd[1501]: time="2025-11-23T23:06:10.283881294Z" level=info msg="CreateContainer within sandbox \"a64c5ab01df4099eb5d5e3ddd2470ecc4b2a0a65357339537b6aa36fefbfdfac\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4f2f20f74b5b176fc4d76327a48e9e5809b2ef559d45bb103235a027fc6ed3d7\"" Nov 23 23:06:10.284516 containerd[1501]: time="2025-11-23T23:06:10.284484575Z" level=info msg="StartContainer for \"4f2f20f74b5b176fc4d76327a48e9e5809b2ef559d45bb103235a027fc6ed3d7\"" Nov 23 23:06:10.286433 containerd[1501]: time="2025-11-23T23:06:10.286394631Z" level=info msg="connecting to shim 4f2f20f74b5b176fc4d76327a48e9e5809b2ef559d45bb103235a027fc6ed3d7" address="unix:///run/containerd/s/9d2d41c876883e78715339d359d0b2232a0de5c1b1cb6ef8675d76aafd5b7aba" protocol=ttrpc version=3 Nov 23 23:06:10.307017 systemd[1]: Started cri-containerd-4f2f20f74b5b176fc4d76327a48e9e5809b2ef559d45bb103235a027fc6ed3d7.scope - libcontainer container 4f2f20f74b5b176fc4d76327a48e9e5809b2ef559d45bb103235a027fc6ed3d7. Nov 23 23:06:10.408242 containerd[1501]: time="2025-11-23T23:06:10.408200273Z" level=info msg="StartContainer for \"4f2f20f74b5b176fc4d76327a48e9e5809b2ef559d45bb103235a027fc6ed3d7\" returns successfully" Nov 23 23:06:11.146368 systemd[1]: cri-containerd-4f2f20f74b5b176fc4d76327a48e9e5809b2ef559d45bb103235a027fc6ed3d7.scope: Deactivated successfully. Nov 23 23:06:11.146813 systemd[1]: cri-containerd-4f2f20f74b5b176fc4d76327a48e9e5809b2ef559d45bb103235a027fc6ed3d7.scope: Consumed 545ms CPU time, 171.2M memory peak, 1.8M read from disk, 165.9M written to disk. Nov 23 23:06:11.148777 containerd[1501]: time="2025-11-23T23:06:11.148505922Z" level=info msg="received container exit event container_id:\"4f2f20f74b5b176fc4d76327a48e9e5809b2ef559d45bb103235a027fc6ed3d7\" id:\"4f2f20f74b5b176fc4d76327a48e9e5809b2ef559d45bb103235a027fc6ed3d7\" pid:3442 exited_at:{seconds:1763939171 nanos:148225246}" Nov 23 23:06:11.179110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f2f20f74b5b176fc4d76327a48e9e5809b2ef559d45bb103235a027fc6ed3d7-rootfs.mount: Deactivated successfully. Nov 23 23:06:11.181937 kubelet[2643]: I1123 23:06:11.181674 2643 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 23 23:06:11.284714 systemd[1]: Created slice kubepods-burstable-podcd12f407_ade9_47ae_874f_ddb6030dd593.slice - libcontainer container kubepods-burstable-podcd12f407_ade9_47ae_874f_ddb6030dd593.slice. Nov 23 23:06:11.302097 systemd[1]: Created slice kubepods-besteffort-pod954fb416_d77f_4b69_ac56_30f7ed8932d5.slice - libcontainer container kubepods-besteffort-pod954fb416_d77f_4b69_ac56_30f7ed8932d5.slice. 
Nov 23 23:06:11.309342 systemd[1]: Created slice kubepods-besteffort-pod119ea192_2966_40b8_aace_9d8b5df61791.slice - libcontainer container kubepods-besteffort-pod119ea192_2966_40b8_aace_9d8b5df61791.slice. Nov 23 23:06:11.320715 systemd[1]: Created slice kubepods-burstable-pod9387e88e_25d6_44f5_a40f_c5700146aac3.slice - libcontainer container kubepods-burstable-pod9387e88e_25d6_44f5_a40f_c5700146aac3.slice. Nov 23 23:06:11.326147 systemd[1]: Created slice kubepods-besteffort-podacc2e189_3c10_4cac_9d7e_5131c7f8c476.slice - libcontainer container kubepods-besteffort-podacc2e189_3c10_4cac_9d7e_5131c7f8c476.slice. Nov 23 23:06:11.334047 systemd[1]: Created slice kubepods-besteffort-pod07c94371_dfc1_4af4_842d_2b238e768583.slice - libcontainer container kubepods-besteffort-pod07c94371_dfc1_4af4_842d_2b238e768583.slice. Nov 23 23:06:11.340117 systemd[1]: Created slice kubepods-besteffort-poda72dd4c7_49a5_4da9_ae9d_582c3d4cbc16.slice - libcontainer container kubepods-besteffort-poda72dd4c7_49a5_4da9_ae9d_582c3d4cbc16.slice. Nov 23 23:06:11.347131 kubelet[2643]: I1123 23:06:11.347075 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16-goldmane-ca-bundle\") pod \"goldmane-666569f655-jfgkh\" (UID: \"a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16\") " pod="calico-system/goldmane-666569f655-jfgkh" Nov 23 23:06:11.347131 kubelet[2643]: I1123 23:06:11.347128 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16-goldmane-key-pair\") pod \"goldmane-666569f655-jfgkh\" (UID: \"a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16\") " pod="calico-system/goldmane-666569f655-jfgkh" Nov 23 23:06:11.347436 kubelet[2643]: I1123 23:06:11.347149 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/954fb416-d77f-4b69-ac56-30f7ed8932d5-calico-apiserver-certs\") pod \"calico-apiserver-69b55bbf5b-6sfqj\" (UID: \"954fb416-d77f-4b69-ac56-30f7ed8932d5\") " pod="calico-apiserver/calico-apiserver-69b55bbf5b-6sfqj" Nov 23 23:06:11.347436 kubelet[2643]: I1123 23:06:11.347172 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9387e88e-25d6-44f5-a40f-c5700146aac3-config-volume\") pod \"coredns-668d6bf9bc-w2tq7\" (UID: \"9387e88e-25d6-44f5-a40f-c5700146aac3\") " pod="kube-system/coredns-668d6bf9bc-w2tq7" Nov 23 23:06:11.347436 kubelet[2643]: I1123 23:06:11.347191 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bnv7\" (UniqueName: \"kubernetes.io/projected/a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16-kube-api-access-9bnv7\") pod \"goldmane-666569f655-jfgkh\" (UID: \"a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16\") " pod="calico-system/goldmane-666569f655-jfgkh" Nov 23 23:06:11.347436 kubelet[2643]: I1123 23:06:11.347209 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/07c94371-dfc1-4af4-842d-2b238e768583-whisker-backend-key-pair\") pod \"whisker-d749485dd-x9759\" (UID: \"07c94371-dfc1-4af4-842d-2b238e768583\") " pod="calico-system/whisker-d749485dd-x9759" Nov 23 23:06:11.347436 kubelet[2643]: 
I1123 23:06:11.347230 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mc54\" (UniqueName: \"kubernetes.io/projected/9387e88e-25d6-44f5-a40f-c5700146aac3-kube-api-access-7mc54\") pod \"coredns-668d6bf9bc-w2tq7\" (UID: \"9387e88e-25d6-44f5-a40f-c5700146aac3\") " pod="kube-system/coredns-668d6bf9bc-w2tq7" Nov 23 23:06:11.347573 kubelet[2643]: I1123 23:06:11.347254 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhgc7\" (UniqueName: \"kubernetes.io/projected/119ea192-2966-40b8-aace-9d8b5df61791-kube-api-access-lhgc7\") pod \"calico-apiserver-69b55bbf5b-64w8f\" (UID: \"119ea192-2966-40b8-aace-9d8b5df61791\") " pod="calico-apiserver/calico-apiserver-69b55bbf5b-64w8f" Nov 23 23:06:11.347573 kubelet[2643]: I1123 23:06:11.347272 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd12f407-ade9-47ae-874f-ddb6030dd593-config-volume\") pod \"coredns-668d6bf9bc-l2lxn\" (UID: \"cd12f407-ade9-47ae-874f-ddb6030dd593\") " pod="kube-system/coredns-668d6bf9bc-l2lxn" Nov 23 23:06:11.347573 kubelet[2643]: I1123 23:06:11.347292 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07c94371-dfc1-4af4-842d-2b238e768583-whisker-ca-bundle\") pod \"whisker-d749485dd-x9759\" (UID: \"07c94371-dfc1-4af4-842d-2b238e768583\") " pod="calico-system/whisker-d749485dd-x9759" Nov 23 23:06:11.347573 kubelet[2643]: I1123 23:06:11.347313 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16-config\") pod \"goldmane-666569f655-jfgkh\" (UID: \"a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16\") " pod="calico-system/goldmane-666569f655-jfgkh" Nov 23 23:06:11.347573 kubelet[2643]: I1123 23:06:11.347333 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acc2e189-3c10-4cac-9d7e-5131c7f8c476-tigera-ca-bundle\") pod \"calico-kube-controllers-77f5b99886-phxmt\" (UID: \"acc2e189-3c10-4cac-9d7e-5131c7f8c476\") " pod="calico-system/calico-kube-controllers-77f5b99886-phxmt" Nov 23 23:06:11.347693 kubelet[2643]: I1123 23:06:11.347349 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4987\" (UniqueName: \"kubernetes.io/projected/acc2e189-3c10-4cac-9d7e-5131c7f8c476-kube-api-access-f4987\") pod \"calico-kube-controllers-77f5b99886-phxmt\" (UID: \"acc2e189-3c10-4cac-9d7e-5131c7f8c476\") " pod="calico-system/calico-kube-controllers-77f5b99886-phxmt" Nov 23 23:06:11.347693 kubelet[2643]: I1123 23:06:11.347369 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/119ea192-2966-40b8-aace-9d8b5df61791-calico-apiserver-certs\") pod \"calico-apiserver-69b55bbf5b-64w8f\" (UID: \"119ea192-2966-40b8-aace-9d8b5df61791\") " pod="calico-apiserver/calico-apiserver-69b55bbf5b-64w8f" Nov 23 23:06:11.347693 kubelet[2643]: I1123 23:06:11.347388 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8h58\" (UniqueName: 
\"kubernetes.io/projected/cd12f407-ade9-47ae-874f-ddb6030dd593-kube-api-access-f8h58\") pod \"coredns-668d6bf9bc-l2lxn\" (UID: \"cd12f407-ade9-47ae-874f-ddb6030dd593\") " pod="kube-system/coredns-668d6bf9bc-l2lxn" Nov 23 23:06:11.347693 kubelet[2643]: I1123 23:06:11.347406 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnqf5\" (UniqueName: \"kubernetes.io/projected/954fb416-d77f-4b69-ac56-30f7ed8932d5-kube-api-access-jnqf5\") pod \"calico-apiserver-69b55bbf5b-6sfqj\" (UID: \"954fb416-d77f-4b69-ac56-30f7ed8932d5\") " pod="calico-apiserver/calico-apiserver-69b55bbf5b-6sfqj" Nov 23 23:06:11.347693 kubelet[2643]: I1123 23:06:11.347425 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55qjm\" (UniqueName: \"kubernetes.io/projected/07c94371-dfc1-4af4-842d-2b238e768583-kube-api-access-55qjm\") pod \"whisker-d749485dd-x9759\" (UID: \"07c94371-dfc1-4af4-842d-2b238e768583\") " pod="calico-system/whisker-d749485dd-x9759" Nov 23 23:06:11.593012 containerd[1501]: time="2025-11-23T23:06:11.592911833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l2lxn,Uid:cd12f407-ade9-47ae-874f-ddb6030dd593,Namespace:kube-system,Attempt:0,}" Nov 23 23:06:11.607856 containerd[1501]: time="2025-11-23T23:06:11.607774061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b55bbf5b-6sfqj,Uid:954fb416-d77f-4b69-ac56-30f7ed8932d5,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:06:11.617113 containerd[1501]: time="2025-11-23T23:06:11.617066468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b55bbf5b-64w8f,Uid:119ea192-2966-40b8-aace-9d8b5df61791,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:06:11.626564 containerd[1501]: time="2025-11-23T23:06:11.626525736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w2tq7,Uid:9387e88e-25d6-44f5-a40f-c5700146aac3,Namespace:kube-system,Attempt:0,}" Nov 23 23:06:11.630509 containerd[1501]: time="2025-11-23T23:06:11.630438308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77f5b99886-phxmt,Uid:acc2e189-3c10-4cac-9d7e-5131c7f8c476,Namespace:calico-system,Attempt:0,}" Nov 23 23:06:11.638903 containerd[1501]: time="2025-11-23T23:06:11.638851565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d749485dd-x9759,Uid:07c94371-dfc1-4af4-842d-2b238e768583,Namespace:calico-system,Attempt:0,}" Nov 23 23:06:11.645224 containerd[1501]: time="2025-11-23T23:06:11.643193830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jfgkh,Uid:a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16,Namespace:calico-system,Attempt:0,}" Nov 23 23:06:11.733161 containerd[1501]: time="2025-11-23T23:06:11.733106886Z" level=error msg="Failed to destroy network for sandbox \"b02c5b3989d95b388d9c5e85cbb3412c34013d54a5ba638a9c0782f5e0b06938\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.738621 containerd[1501]: time="2025-11-23T23:06:11.738549210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l2lxn,Uid:cd12f407-ade9-47ae-874f-ddb6030dd593,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b02c5b3989d95b388d9c5e85cbb3412c34013d54a5ba638a9c0782f5e0b06938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.744645 kubelet[2643]: E1123 23:06:11.742438 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b02c5b3989d95b388d9c5e85cbb3412c34013d54a5ba638a9c0782f5e0b06938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.744839 kubelet[2643]: E1123 23:06:11.744686 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b02c5b3989d95b388d9c5e85cbb3412c34013d54a5ba638a9c0782f5e0b06938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-l2lxn" Nov 23 23:06:11.752377 kubelet[2643]: E1123 23:06:11.752158 2643 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b02c5b3989d95b388d9c5e85cbb3412c34013d54a5ba638a9c0782f5e0b06938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-l2lxn" Nov 23 23:06:11.752377 kubelet[2643]: E1123 23:06:11.752302 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-l2lxn_kube-system(cd12f407-ade9-47ae-874f-ddb6030dd593)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-l2lxn_kube-system(cd12f407-ade9-47ae-874f-ddb6030dd593)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b02c5b3989d95b388d9c5e85cbb3412c34013d54a5ba638a9c0782f5e0b06938\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-l2lxn" podUID="cd12f407-ade9-47ae-874f-ddb6030dd593" Nov 23 23:06:11.765445 containerd[1501]: time="2025-11-23T23:06:11.765388462Z" level=error msg="Failed to destroy network for sandbox \"ac22b7aded5b1dd944fecc9d0ca8a7b1287a5e556472e7e07f8a488402ef51d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.766431 containerd[1501]: time="2025-11-23T23:06:11.766346062Z" level=error msg="Failed to destroy network for sandbox \"4bcfebdc4d39abb696b0db1fa19e721156f3e3357f1dad9f1a6913732ff75402\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.768377 containerd[1501]: time="2025-11-23T23:06:11.768223858Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b55bbf5b-64w8f,Uid:119ea192-2966-40b8-aace-9d8b5df61791,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"ac22b7aded5b1dd944fecc9d0ca8a7b1287a5e556472e7e07f8a488402ef51d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.769191 containerd[1501]: time="2025-11-23T23:06:11.769144574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b55bbf5b-6sfqj,Uid:954fb416-d77f-4b69-ac56-30f7ed8932d5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bcfebdc4d39abb696b0db1fa19e721156f3e3357f1dad9f1a6913732ff75402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.769837 kubelet[2643]: E1123 23:06:11.769700 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bcfebdc4d39abb696b0db1fa19e721156f3e3357f1dad9f1a6913732ff75402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.769837 kubelet[2643]: E1123 23:06:11.769812 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bcfebdc4d39abb696b0db1fa19e721156f3e3357f1dad9f1a6913732ff75402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b55bbf5b-6sfqj" Nov 23 23:06:11.769837 kubelet[2643]: E1123 23:06:11.769700 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac22b7aded5b1dd944fecc9d0ca8a7b1287a5e556472e7e07f8a488402ef51d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.770029 kubelet[2643]: E1123 23:06:11.769871 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac22b7aded5b1dd944fecc9d0ca8a7b1287a5e556472e7e07f8a488402ef51d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b55bbf5b-64w8f" Nov 23 23:06:11.770029 kubelet[2643]: E1123 23:06:11.769898 2643 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac22b7aded5b1dd944fecc9d0ca8a7b1287a5e556472e7e07f8a488402ef51d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b55bbf5b-64w8f" Nov 23 23:06:11.770029 kubelet[2643]: E1123 23:06:11.769833 2643 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bcfebdc4d39abb696b0db1fa19e721156f3e3357f1dad9f1a6913732ff75402\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b55bbf5b-6sfqj" Nov 23 23:06:11.770118 kubelet[2643]: E1123 23:06:11.769944 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69b55bbf5b-64w8f_calico-apiserver(119ea192-2966-40b8-aace-9d8b5df61791)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69b55bbf5b-64w8f_calico-apiserver(119ea192-2966-40b8-aace-9d8b5df61791)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac22b7aded5b1dd944fecc9d0ca8a7b1287a5e556472e7e07f8a488402ef51d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69b55bbf5b-64w8f" podUID="119ea192-2966-40b8-aace-9d8b5df61791" Nov 23 23:06:11.770118 kubelet[2643]: E1123 23:06:11.769997 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69b55bbf5b-6sfqj_calico-apiserver(954fb416-d77f-4b69-ac56-30f7ed8932d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69b55bbf5b-6sfqj_calico-apiserver(954fb416-d77f-4b69-ac56-30f7ed8932d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bcfebdc4d39abb696b0db1fa19e721156f3e3357f1dad9f1a6913732ff75402\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69b55bbf5b-6sfqj" podUID="954fb416-d77f-4b69-ac56-30f7ed8932d5" Nov 23 23:06:11.770439 containerd[1501]: time="2025-11-23T23:06:11.770381409Z" level=error msg="Failed to destroy network for sandbox \"26496b5e7db34e5c749cc511096ed79858ad003948d96f9afd4319dbc6cdc24a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.773401 containerd[1501]: time="2025-11-23T23:06:11.773215125Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77f5b99886-phxmt,Uid:acc2e189-3c10-4cac-9d7e-5131c7f8c476,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"26496b5e7db34e5c749cc511096ed79858ad003948d96f9afd4319dbc6cdc24a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.773982 kubelet[2643]: E1123 23:06:11.773688 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26496b5e7db34e5c749cc511096ed79858ad003948d96f9afd4319dbc6cdc24a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.773982 kubelet[2643]: E1123 23:06:11.773837 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"26496b5e7db34e5c749cc511096ed79858ad003948d96f9afd4319dbc6cdc24a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77f5b99886-phxmt" Nov 23 23:06:11.773982 kubelet[2643]: E1123 23:06:11.773866 2643 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26496b5e7db34e5c749cc511096ed79858ad003948d96f9afd4319dbc6cdc24a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77f5b99886-phxmt" Nov 23 23:06:11.774225 kubelet[2643]: E1123 23:06:11.773909 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77f5b99886-phxmt_calico-system(acc2e189-3c10-4cac-9d7e-5131c7f8c476)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77f5b99886-phxmt_calico-system(acc2e189-3c10-4cac-9d7e-5131c7f8c476)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26496b5e7db34e5c749cc511096ed79858ad003948d96f9afd4319dbc6cdc24a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77f5b99886-phxmt" podUID="acc2e189-3c10-4cac-9d7e-5131c7f8c476" Nov 23 23:06:11.777826 containerd[1501]: time="2025-11-23T23:06:11.777682407Z" level=error msg="Failed to destroy network for sandbox \"9661894e296238322dd93eff5055b7f771223f5a5d1967397741d1390e06b792\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.779641 containerd[1501]: time="2025-11-23T23:06:11.779589166Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d749485dd-x9759,Uid:07c94371-dfc1-4af4-842d-2b238e768583,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9661894e296238322dd93eff5055b7f771223f5a5d1967397741d1390e06b792\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.779993 kubelet[2643]: E1123 23:06:11.779919 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9661894e296238322dd93eff5055b7f771223f5a5d1967397741d1390e06b792\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.780334 kubelet[2643]: E1123 23:06:11.780032 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9661894e296238322dd93eff5055b7f771223f5a5d1967397741d1390e06b792\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-d749485dd-x9759" Nov 23 23:06:11.780334 kubelet[2643]: E1123 23:06:11.780089 2643 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9661894e296238322dd93eff5055b7f771223f5a5d1967397741d1390e06b792\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d749485dd-x9759" Nov 23 23:06:11.780334 kubelet[2643]: E1123 23:06:11.780139 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-d749485dd-x9759_calico-system(07c94371-dfc1-4af4-842d-2b238e768583)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-d749485dd-x9759_calico-system(07c94371-dfc1-4af4-842d-2b238e768583)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9661894e296238322dd93eff5055b7f771223f5a5d1967397741d1390e06b792\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d749485dd-x9759" podUID="07c94371-dfc1-4af4-842d-2b238e768583" Nov 23 23:06:11.784185 containerd[1501]: time="2025-11-23T23:06:11.784066169Z" level=error msg="Failed to destroy network for sandbox \"16c0fc171bdfbaf3e468d797e0091dd342e61f770260a9f6bc68b51969691fd0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.785949 containerd[1501]: time="2025-11-23T23:06:11.785899799Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jfgkh,Uid:a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"16c0fc171bdfbaf3e468d797e0091dd342e61f770260a9f6bc68b51969691fd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.786939 kubelet[2643]: E1123 23:06:11.786888 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16c0fc171bdfbaf3e468d797e0091dd342e61f770260a9f6bc68b51969691fd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.787764 kubelet[2643]: E1123 23:06:11.786960 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16c0fc171bdfbaf3e468d797e0091dd342e61f770260a9f6bc68b51969691fd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-jfgkh" Nov 23 23:06:11.787764 kubelet[2643]: E1123 23:06:11.786982 2643 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16c0fc171bdfbaf3e468d797e0091dd342e61f770260a9f6bc68b51969691fd0\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-jfgkh" Nov 23 23:06:11.787764 kubelet[2643]: E1123 23:06:11.787020 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-jfgkh_calico-system(a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-jfgkh_calico-system(a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16c0fc171bdfbaf3e468d797e0091dd342e61f770260a9f6bc68b51969691fd0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-jfgkh" podUID="a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16" Nov 23 23:06:11.789141 containerd[1501]: time="2025-11-23T23:06:11.789006949Z" level=error msg="Failed to destroy network for sandbox \"0ed86b836615cd64ca374c03311d6d3b4da799e96047751c1dd986ea83b32c44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.790999 containerd[1501]: time="2025-11-23T23:06:11.790944913Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w2tq7,Uid:9387e88e-25d6-44f5-a40f-c5700146aac3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ed86b836615cd64ca374c03311d6d3b4da799e96047751c1dd986ea83b32c44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.791782 kubelet[2643]: E1123 23:06:11.791461 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ed86b836615cd64ca374c03311d6d3b4da799e96047751c1dd986ea83b32c44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:11.791782 kubelet[2643]: E1123 23:06:11.791531 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ed86b836615cd64ca374c03311d6d3b4da799e96047751c1dd986ea83b32c44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-w2tq7" Nov 23 23:06:11.791782 kubelet[2643]: E1123 23:06:11.791550 2643 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ed86b836615cd64ca374c03311d6d3b4da799e96047751c1dd986ea83b32c44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-w2tq7" Nov 23 23:06:11.792050 kubelet[2643]: E1123 23:06:11.791595 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-w2tq7_kube-system(9387e88e-25d6-44f5-a40f-c5700146aac3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-w2tq7_kube-system(9387e88e-25d6-44f5-a40f-c5700146aac3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ed86b836615cd64ca374c03311d6d3b4da799e96047751c1dd986ea83b32c44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-w2tq7" podUID="9387e88e-25d6-44f5-a40f-c5700146aac3" Nov 23 23:06:11.945446 systemd[1]: Created slice kubepods-besteffort-pod727c5b39_08b3_46f8_a2e4_48219c6016f9.slice - libcontainer container kubepods-besteffort-pod727c5b39_08b3_46f8_a2e4_48219c6016f9.slice. Nov 23 23:06:11.948446 containerd[1501]: time="2025-11-23T23:06:11.948406735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n2c6t,Uid:727c5b39-08b3-46f8-a2e4-48219c6016f9,Namespace:calico-system,Attempt:0,}" Nov 23 23:06:11.999914 containerd[1501]: time="2025-11-23T23:06:11.999861119Z" level=error msg="Failed to destroy network for sandbox \"a18f0cb25e2d724cf6bcdbba354197f1303f98fae5fb0c7a9d32178895be8d53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:12.001562 containerd[1501]: time="2025-11-23T23:06:12.001469476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n2c6t,Uid:727c5b39-08b3-46f8-a2e4-48219c6016f9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a18f0cb25e2d724cf6bcdbba354197f1303f98fae5fb0c7a9d32178895be8d53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:12.001811 kubelet[2643]: E1123 23:06:12.001770 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a18f0cb25e2d724cf6bcdbba354197f1303f98fae5fb0c7a9d32178895be8d53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:06:12.001880 kubelet[2643]: E1123 23:06:12.001837 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a18f0cb25e2d724cf6bcdbba354197f1303f98fae5fb0c7a9d32178895be8d53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n2c6t" Nov 23 23:06:12.001880 kubelet[2643]: E1123 23:06:12.001857 2643 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a18f0cb25e2d724cf6bcdbba354197f1303f98fae5fb0c7a9d32178895be8d53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-n2c6t" Nov 23 23:06:12.001946 kubelet[2643]: E1123 23:06:12.001905 2643 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-n2c6t_calico-system(727c5b39-08b3-46f8-a2e4-48219c6016f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-n2c6t_calico-system(727c5b39-08b3-46f8-a2e4-48219c6016f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a18f0cb25e2d724cf6bcdbba354197f1303f98fae5fb0c7a9d32178895be8d53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-n2c6t" podUID="727c5b39-08b3-46f8-a2e4-48219c6016f9" Nov 23 23:06:12.065980 containerd[1501]: time="2025-11-23T23:06:12.065926308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 23 23:06:15.153396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2067067497.mount: Deactivated successfully. Nov 23 23:06:15.511789 containerd[1501]: time="2025-11-23T23:06:15.511494913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 23 23:06:15.512251 containerd[1501]: time="2025-11-23T23:06:15.512031753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:06:15.515272 containerd[1501]: time="2025-11-23T23:06:15.514489055Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:06:15.524052 containerd[1501]: time="2025-11-23T23:06:15.523992118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:06:15.525447 containerd[1501]: time="2025-11-23T23:06:15.525380061Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.459399827s" Nov 23 23:06:15.525447 containerd[1501]: time="2025-11-23T23:06:15.525443586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 23 23:06:15.564112 containerd[1501]: time="2025-11-23T23:06:15.564047283Z" level=info msg="CreateContainer within sandbox \"a64c5ab01df4099eb5d5e3ddd2470ecc4b2a0a65357339537b6aa36fefbfdfac\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 23 23:06:15.609706 containerd[1501]: time="2025-11-23T23:06:15.609106138Z" level=info msg="Container 54fc5076a20d44760db0a252035d6b97a30c072c6c768252d4972e500bb81a3b: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:06:15.649954 containerd[1501]: time="2025-11-23T23:06:15.649871075Z" level=info msg="CreateContainer within sandbox \"a64c5ab01df4099eb5d5e3ddd2470ecc4b2a0a65357339537b6aa36fefbfdfac\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"54fc5076a20d44760db0a252035d6b97a30c072c6c768252d4972e500bb81a3b\"" Nov 23 23:06:15.650709 containerd[1501]: time="2025-11-23T23:06:15.650532524Z" level=info 
msg="StartContainer for \"54fc5076a20d44760db0a252035d6b97a30c072c6c768252d4972e500bb81a3b\"" Nov 23 23:06:15.653252 containerd[1501]: time="2025-11-23T23:06:15.653194881Z" level=info msg="connecting to shim 54fc5076a20d44760db0a252035d6b97a30c072c6c768252d4972e500bb81a3b" address="unix:///run/containerd/s/9d2d41c876883e78715339d359d0b2232a0de5c1b1cb6ef8675d76aafd5b7aba" protocol=ttrpc version=3 Nov 23 23:06:15.679287 systemd[1]: Started cri-containerd-54fc5076a20d44760db0a252035d6b97a30c072c6c768252d4972e500bb81a3b.scope - libcontainer container 54fc5076a20d44760db0a252035d6b97a30c072c6c768252d4972e500bb81a3b. Nov 23 23:06:15.790259 containerd[1501]: time="2025-11-23T23:06:15.790140137Z" level=info msg="StartContainer for \"54fc5076a20d44760db0a252035d6b97a30c072c6c768252d4972e500bb81a3b\" returns successfully" Nov 23 23:06:15.897580 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 23 23:06:15.897734 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 23 23:06:16.078854 kubelet[2643]: I1123 23:06:16.078161 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/07c94371-dfc1-4af4-842d-2b238e768583-whisker-backend-key-pair\") pod \"07c94371-dfc1-4af4-842d-2b238e768583\" (UID: \"07c94371-dfc1-4af4-842d-2b238e768583\") " Nov 23 23:06:16.078854 kubelet[2643]: I1123 23:06:16.078414 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55qjm\" (UniqueName: \"kubernetes.io/projected/07c94371-dfc1-4af4-842d-2b238e768583-kube-api-access-55qjm\") pod \"07c94371-dfc1-4af4-842d-2b238e768583\" (UID: \"07c94371-dfc1-4af4-842d-2b238e768583\") " Nov 23 23:06:16.078854 kubelet[2643]: I1123 23:06:16.078440 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07c94371-dfc1-4af4-842d-2b238e768583-whisker-ca-bundle\") pod \"07c94371-dfc1-4af4-842d-2b238e768583\" (UID: \"07c94371-dfc1-4af4-842d-2b238e768583\") " Nov 23 23:06:16.089625 kubelet[2643]: I1123 23:06:16.089182 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c94371-dfc1-4af4-842d-2b238e768583-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "07c94371-dfc1-4af4-842d-2b238e768583" (UID: "07c94371-dfc1-4af4-842d-2b238e768583"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 23 23:06:16.094202 kubelet[2643]: I1123 23:06:16.094162 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c94371-dfc1-4af4-842d-2b238e768583-kube-api-access-55qjm" (OuterVolumeSpecName: "kube-api-access-55qjm") pod "07c94371-dfc1-4af4-842d-2b238e768583" (UID: "07c94371-dfc1-4af4-842d-2b238e768583"). InnerVolumeSpecName "kube-api-access-55qjm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 23 23:06:16.094615 kubelet[2643]: I1123 23:06:16.094566 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c94371-dfc1-4af4-842d-2b238e768583-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "07c94371-dfc1-4af4-842d-2b238e768583" (UID: "07c94371-dfc1-4af4-842d-2b238e768583"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 23 23:06:16.108383 kubelet[2643]: I1123 23:06:16.108309 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8n64q" podStartSLOduration=2.007227452 podStartE2EDuration="15.108291906s" podCreationTimestamp="2025-11-23 23:06:01 +0000 UTC" firstStartedPulling="2025-11-23 23:06:02.430876893 +0000 UTC m=+21.601143518" lastFinishedPulling="2025-11-23 23:06:15.531941347 +0000 UTC m=+34.702207972" observedRunningTime="2025-11-23 23:06:16.105933057 +0000 UTC m=+35.276199682" watchObservedRunningTime="2025-11-23 23:06:16.108291906 +0000 UTC m=+35.278558531" Nov 23 23:06:16.153363 systemd[1]: var-lib-kubelet-pods-07c94371\x2ddfc1\x2d4af4\x2d842d\x2d2b238e768583-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d55qjm.mount: Deactivated successfully. Nov 23 23:06:16.153478 systemd[1]: var-lib-kubelet-pods-07c94371\x2ddfc1\x2d4af4\x2d842d\x2d2b238e768583-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 23 23:06:16.179700 kubelet[2643]: I1123 23:06:16.179246 2643 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/07c94371-dfc1-4af4-842d-2b238e768583-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 23 23:06:16.179700 kubelet[2643]: I1123 23:06:16.179288 2643 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-55qjm\" (UniqueName: \"kubernetes.io/projected/07c94371-dfc1-4af4-842d-2b238e768583-kube-api-access-55qjm\") on node \"localhost\" DevicePath \"\"" Nov 23 23:06:16.179700 kubelet[2643]: I1123 23:06:16.179298 2643 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07c94371-dfc1-4af4-842d-2b238e768583-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 23 23:06:16.391633 systemd[1]: Removed slice kubepods-besteffort-pod07c94371_dfc1_4af4_842d_2b238e768583.slice - libcontainer container kubepods-besteffort-pod07c94371_dfc1_4af4_842d_2b238e768583.slice. Nov 23 23:06:16.459298 systemd[1]: Created slice kubepods-besteffort-pod827591c9_2ca8_4c08_9a39_7dd17d688b03.slice - libcontainer container kubepods-besteffort-pod827591c9_2ca8_4c08_9a39_7dd17d688b03.slice. 
Nov 23 23:06:16.481632 kubelet[2643]: I1123 23:06:16.481579 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/827591c9-2ca8-4c08-9a39-7dd17d688b03-whisker-ca-bundle\") pod \"whisker-84c5fcd4d6-qdm8f\" (UID: \"827591c9-2ca8-4c08-9a39-7dd17d688b03\") " pod="calico-system/whisker-84c5fcd4d6-qdm8f" Nov 23 23:06:16.481632 kubelet[2643]: I1123 23:06:16.481633 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/827591c9-2ca8-4c08-9a39-7dd17d688b03-whisker-backend-key-pair\") pod \"whisker-84c5fcd4d6-qdm8f\" (UID: \"827591c9-2ca8-4c08-9a39-7dd17d688b03\") " pod="calico-system/whisker-84c5fcd4d6-qdm8f" Nov 23 23:06:16.481883 kubelet[2643]: I1123 23:06:16.481655 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cd9w\" (UniqueName: \"kubernetes.io/projected/827591c9-2ca8-4c08-9a39-7dd17d688b03-kube-api-access-4cd9w\") pod \"whisker-84c5fcd4d6-qdm8f\" (UID: \"827591c9-2ca8-4c08-9a39-7dd17d688b03\") " pod="calico-system/whisker-84c5fcd4d6-qdm8f" Nov 23 23:06:16.763033 containerd[1501]: time="2025-11-23T23:06:16.762992178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84c5fcd4d6-qdm8f,Uid:827591c9-2ca8-4c08-9a39-7dd17d688b03,Namespace:calico-system,Attempt:0,}" Nov 23 23:06:16.939665 kubelet[2643]: I1123 23:06:16.939608 2643 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c94371-dfc1-4af4-842d-2b238e768583" path="/var/lib/kubelet/pods/07c94371-dfc1-4af4-842d-2b238e768583/volumes" Nov 23 23:06:16.981983 systemd-networkd[1417]: caliad809713679: Link UP Nov 23 23:06:16.983504 systemd-networkd[1417]: caliad809713679: Gained carrier Nov 23 23:06:16.998522 containerd[1501]: 2025-11-23 23:06:16.788 [INFO][3843] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:06:16.998522 containerd[1501]: 2025-11-23 23:06:16.824 [INFO][3843] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--84c5fcd4d6--qdm8f-eth0 whisker-84c5fcd4d6- calico-system 827591c9-2ca8-4c08-9a39-7dd17d688b03 854 0 2025-11-23 23:06:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:84c5fcd4d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-84c5fcd4d6-qdm8f eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliad809713679 [] [] }} ContainerID="bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f" Namespace="calico-system" Pod="whisker-84c5fcd4d6-qdm8f" WorkloadEndpoint="localhost-k8s-whisker--84c5fcd4d6--qdm8f-" Nov 23 23:06:16.998522 containerd[1501]: 2025-11-23 23:06:16.824 [INFO][3843] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f" Namespace="calico-system" Pod="whisker-84c5fcd4d6-qdm8f" WorkloadEndpoint="localhost-k8s-whisker--84c5fcd4d6--qdm8f-eth0" Nov 23 23:06:16.998522 containerd[1501]: 2025-11-23 23:06:16.905 [INFO][3858] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f" HandleID="k8s-pod-network.bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f" 
Workload="localhost-k8s-whisker--84c5fcd4d6--qdm8f-eth0" Nov 23 23:06:16.998811 containerd[1501]: 2025-11-23 23:06:16.905 [INFO][3858] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f" HandleID="k8s-pod-network.bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f" Workload="localhost-k8s-whisker--84c5fcd4d6--qdm8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e6630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-84c5fcd4d6-qdm8f", "timestamp":"2025-11-23 23:06:16.905401346 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:06:16.998811 containerd[1501]: 2025-11-23 23:06:16.905 [INFO][3858] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:06:16.998811 containerd[1501]: 2025-11-23 23:06:16.906 [INFO][3858] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:06:16.998811 containerd[1501]: 2025-11-23 23:06:16.906 [INFO][3858] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:06:16.998811 containerd[1501]: 2025-11-23 23:06:16.919 [INFO][3858] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f" host="localhost" Nov 23 23:06:16.998811 containerd[1501]: 2025-11-23 23:06:16.927 [INFO][3858] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:06:16.998811 containerd[1501]: 2025-11-23 23:06:16.942 [INFO][3858] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:06:16.998811 containerd[1501]: 2025-11-23 23:06:16.945 [INFO][3858] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:06:16.998811 containerd[1501]: 2025-11-23 23:06:16.950 [INFO][3858] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:06:16.998811 containerd[1501]: 2025-11-23 23:06:16.950 [INFO][3858] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f" host="localhost" Nov 23 23:06:16.999087 containerd[1501]: 2025-11-23 23:06:16.952 [INFO][3858] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f Nov 23 23:06:16.999087 containerd[1501]: 2025-11-23 23:06:16.959 [INFO][3858] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f" host="localhost" Nov 23 23:06:16.999087 containerd[1501]: 2025-11-23 23:06:16.965 [INFO][3858] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f" host="localhost" Nov 23 23:06:16.999087 containerd[1501]: 2025-11-23 23:06:16.965 [INFO][3858] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f" host="localhost" Nov 23 23:06:16.999087 containerd[1501]: 2025-11-23 23:06:16.965 [INFO][3858] ipam/ipam_plugin.go 398: 
Released host-wide IPAM lock. Nov 23 23:06:16.999087 containerd[1501]: 2025-11-23 23:06:16.965 [INFO][3858] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f" HandleID="k8s-pod-network.bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f" Workload="localhost-k8s-whisker--84c5fcd4d6--qdm8f-eth0" Nov 23 23:06:16.999226 containerd[1501]: 2025-11-23 23:06:16.969 [INFO][3843] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f" Namespace="calico-system" Pod="whisker-84c5fcd4d6-qdm8f" WorkloadEndpoint="localhost-k8s-whisker--84c5fcd4d6--qdm8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84c5fcd4d6--qdm8f-eth0", GenerateName:"whisker-84c5fcd4d6-", Namespace:"calico-system", SelfLink:"", UID:"827591c9-2ca8-4c08-9a39-7dd17d688b03", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 6, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84c5fcd4d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-84c5fcd4d6-qdm8f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliad809713679", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:06:16.999226 containerd[1501]: 2025-11-23 23:06:16.969 [INFO][3843] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f" Namespace="calico-system" Pod="whisker-84c5fcd4d6-qdm8f" WorkloadEndpoint="localhost-k8s-whisker--84c5fcd4d6--qdm8f-eth0" Nov 23 23:06:16.999308 containerd[1501]: 2025-11-23 23:06:16.969 [INFO][3843] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad809713679 ContainerID="bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f" Namespace="calico-system" Pod="whisker-84c5fcd4d6-qdm8f" WorkloadEndpoint="localhost-k8s-whisker--84c5fcd4d6--qdm8f-eth0" Nov 23 23:06:16.999308 containerd[1501]: 2025-11-23 23:06:16.982 [INFO][3843] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f" Namespace="calico-system" Pod="whisker-84c5fcd4d6-qdm8f" WorkloadEndpoint="localhost-k8s-whisker--84c5fcd4d6--qdm8f-eth0" Nov 23 23:06:16.999360 containerd[1501]: 2025-11-23 23:06:16.983 [INFO][3843] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f" Namespace="calico-system" Pod="whisker-84c5fcd4d6-qdm8f" WorkloadEndpoint="localhost-k8s-whisker--84c5fcd4d6--qdm8f-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--84c5fcd4d6--qdm8f-eth0", GenerateName:"whisker-84c5fcd4d6-", Namespace:"calico-system", SelfLink:"", UID:"827591c9-2ca8-4c08-9a39-7dd17d688b03", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 6, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84c5fcd4d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f", Pod:"whisker-84c5fcd4d6-qdm8f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliad809713679", MAC:"b2:42:f8:71:31:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:06:16.999411 containerd[1501]: 2025-11-23 23:06:16.995 [INFO][3843] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f" Namespace="calico-system" Pod="whisker-84c5fcd4d6-qdm8f" WorkloadEndpoint="localhost-k8s-whisker--84c5fcd4d6--qdm8f-eth0" Nov 23 23:06:17.061151 containerd[1501]: time="2025-11-23T23:06:17.060943941Z" level=info msg="connecting to shim bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f" address="unix:///run/containerd/s/83c97e10187ba0be3520b83196a061d3a392bdfdf7832ceeee6fa25fbaac2317" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:06:17.094004 systemd[1]: Started cri-containerd-bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f.scope - libcontainer container bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f. 
Nov 23 23:06:17.113108 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:06:17.148259 containerd[1501]: time="2025-11-23T23:06:17.148205566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84c5fcd4d6-qdm8f,Uid:827591c9-2ca8-4c08-9a39-7dd17d688b03,Namespace:calico-system,Attempt:0,} returns sandbox id \"bfab72a2a450a9e15a5df7d078461bde37483ecaf60e938eaf3380475bf0636f\"" Nov 23 23:06:17.150339 containerd[1501]: time="2025-11-23T23:06:17.150274191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:06:17.365619 containerd[1501]: time="2025-11-23T23:06:17.365493449Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:17.366642 containerd[1501]: time="2025-11-23T23:06:17.366575205Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:06:17.366768 containerd[1501]: time="2025-11-23T23:06:17.366638649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:06:17.367046 kubelet[2643]: E1123 23:06:17.366950 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:06:17.367375 kubelet[2643]: E1123 23:06:17.367123 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:06:17.367820 kubelet[2643]: E1123 23:06:17.367761 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:305450c028ea43fa89592e750d7f9694,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4cd9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c5fcd4d6-qdm8f_calico-system(827591c9-2ca8-4c08-9a39-7dd17d688b03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:17.370257 containerd[1501]: time="2025-11-23T23:06:17.370199619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:06:17.588491 containerd[1501]: time="2025-11-23T23:06:17.588415967Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:17.591444 containerd[1501]: time="2025-11-23T23:06:17.591382974Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:06:17.591538 containerd[1501]: time="2025-11-23T23:06:17.591464860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:06:17.591661 kubelet[2643]: E1123 23:06:17.591617 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:06:17.591715 kubelet[2643]: E1123 23:06:17.591673 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:06:17.591879 kubelet[2643]: E1123 23:06:17.591821 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4cd9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c5fcd4d6-qdm8f_calico-system(827591c9-2ca8-4c08-9a39-7dd17d688b03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:17.594120 kubelet[2643]: E1123 23:06:17.594057 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c5fcd4d6-qdm8f" podUID="827591c9-2ca8-4c08-9a39-7dd17d688b03" Nov 23 23:06:18.096546 kubelet[2643]: E1123 23:06:18.096436 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c5fcd4d6-qdm8f" podUID="827591c9-2ca8-4c08-9a39-7dd17d688b03" Nov 23 23:06:18.556897 systemd-networkd[1417]: caliad809713679: Gained IPv6LL Nov 23 23:06:19.095633 kubelet[2643]: E1123 23:06:19.095586 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c5fcd4d6-qdm8f" podUID="827591c9-2ca8-4c08-9a39-7dd17d688b03" Nov 23 23:06:20.901655 systemd[1]: Started sshd@7-10.0.0.48:22-10.0.0.1:48856.service - OpenSSH per-connection server daemon (10.0.0.1:48856). Nov 23 23:06:20.971304 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 48856 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:06:20.972901 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:20.977645 systemd-logind[1486]: New session 8 of user core. Nov 23 23:06:20.989154 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 23 23:06:21.140448 sshd[4149]: Connection closed by 10.0.0.1 port 48856 Nov 23 23:06:21.140996 sshd-session[4146]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:21.145082 systemd[1]: sshd@7-10.0.0.48:22-10.0.0.1:48856.service: Deactivated successfully. Nov 23 23:06:21.146894 systemd[1]: session-8.scope: Deactivated successfully. Nov 23 23:06:21.147655 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit. Nov 23 23:06:21.149132 systemd-logind[1486]: Removed session 8. 
Nov 23 23:06:22.937365 containerd[1501]: time="2025-11-23T23:06:22.937307923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b55bbf5b-64w8f,Uid:119ea192-2966-40b8-aace-9d8b5df61791,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:06:22.937720 containerd[1501]: time="2025-11-23T23:06:22.937310723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n2c6t,Uid:727c5b39-08b3-46f8-a2e4-48219c6016f9,Namespace:calico-system,Attempt:0,}" Nov 23 23:06:23.082137 systemd-networkd[1417]: cali4c14ee3f6f9: Link UP Nov 23 23:06:23.082377 systemd-networkd[1417]: cali4c14ee3f6f9: Gained carrier Nov 23 23:06:23.102644 containerd[1501]: 2025-11-23 23:06:22.970 [INFO][4214] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:06:23.102644 containerd[1501]: 2025-11-23 23:06:22.990 [INFO][4214] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69b55bbf5b--64w8f-eth0 calico-apiserver-69b55bbf5b- calico-apiserver 119ea192-2966-40b8-aace-9d8b5df61791 797 0 2025-11-23 23:05:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69b55bbf5b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-69b55bbf5b-64w8f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4c14ee3f6f9 [] [] }} ContainerID="1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c" Namespace="calico-apiserver" Pod="calico-apiserver-69b55bbf5b-64w8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b55bbf5b--64w8f-" Nov 23 23:06:23.102644 containerd[1501]: 2025-11-23 23:06:22.990 [INFO][4214] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c" Namespace="calico-apiserver" Pod="calico-apiserver-69b55bbf5b-64w8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b55bbf5b--64w8f-eth0" Nov 23 23:06:23.102644 containerd[1501]: 2025-11-23 23:06:23.023 [INFO][4241] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c" HandleID="k8s-pod-network.1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c" Workload="localhost-k8s-calico--apiserver--69b55bbf5b--64w8f-eth0" Nov 23 23:06:23.103072 containerd[1501]: 2025-11-23 23:06:23.023 [INFO][4241] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c" HandleID="k8s-pod-network.1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c" Workload="localhost-k8s-calico--apiserver--69b55bbf5b--64w8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cca0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69b55bbf5b-64w8f", "timestamp":"2025-11-23 23:06:23.023663025 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:06:23.103072 containerd[1501]: 2025-11-23 23:06:23.024 [INFO][4241] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 23 23:06:23.103072 containerd[1501]: 2025-11-23 23:06:23.024 [INFO][4241] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:06:23.103072 containerd[1501]: 2025-11-23 23:06:23.024 [INFO][4241] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:06:23.103072 containerd[1501]: 2025-11-23 23:06:23.034 [INFO][4241] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c" host="localhost" Nov 23 23:06:23.103072 containerd[1501]: 2025-11-23 23:06:23.042 [INFO][4241] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:06:23.103072 containerd[1501]: 2025-11-23 23:06:23.048 [INFO][4241] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:06:23.103072 containerd[1501]: 2025-11-23 23:06:23.051 [INFO][4241] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:06:23.103072 containerd[1501]: 2025-11-23 23:06:23.054 [INFO][4241] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:06:23.103072 containerd[1501]: 2025-11-23 23:06:23.054 [INFO][4241] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c" host="localhost" Nov 23 23:06:23.103313 containerd[1501]: 2025-11-23 23:06:23.056 [INFO][4241] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c Nov 23 23:06:23.103313 containerd[1501]: 2025-11-23 23:06:23.062 [INFO][4241] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c" host="localhost" Nov 23 23:06:23.103313 containerd[1501]: 2025-11-23 23:06:23.072 [INFO][4241] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c" host="localhost" Nov 23 23:06:23.103313 containerd[1501]: 2025-11-23 23:06:23.072 [INFO][4241] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c" host="localhost" Nov 23 23:06:23.103313 containerd[1501]: 2025-11-23 23:06:23.072 [INFO][4241] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:06:23.103313 containerd[1501]: 2025-11-23 23:06:23.072 [INFO][4241] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c" HandleID="k8s-pod-network.1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c" Workload="localhost-k8s-calico--apiserver--69b55bbf5b--64w8f-eth0" Nov 23 23:06:23.103458 containerd[1501]: 2025-11-23 23:06:23.077 [INFO][4214] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c" Namespace="calico-apiserver" Pod="calico-apiserver-69b55bbf5b-64w8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b55bbf5b--64w8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69b55bbf5b--64w8f-eth0", GenerateName:"calico-apiserver-69b55bbf5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"119ea192-2966-40b8-aace-9d8b5df61791", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b55bbf5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69b55bbf5b-64w8f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c14ee3f6f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:06:23.103522 containerd[1501]: 2025-11-23 23:06:23.077 [INFO][4214] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c" Namespace="calico-apiserver" Pod="calico-apiserver-69b55bbf5b-64w8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b55bbf5b--64w8f-eth0" Nov 23 23:06:23.103522 containerd[1501]: 2025-11-23 23:06:23.078 [INFO][4214] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c14ee3f6f9 ContainerID="1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c" Namespace="calico-apiserver" Pod="calico-apiserver-69b55bbf5b-64w8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b55bbf5b--64w8f-eth0" Nov 23 23:06:23.103522 containerd[1501]: 2025-11-23 23:06:23.082 [INFO][4214] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c" Namespace="calico-apiserver" Pod="calico-apiserver-69b55bbf5b-64w8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b55bbf5b--64w8f-eth0" Nov 23 23:06:23.103589 containerd[1501]: 2025-11-23 23:06:23.084 [INFO][4214] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c" Namespace="calico-apiserver" Pod="calico-apiserver-69b55bbf5b-64w8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b55bbf5b--64w8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69b55bbf5b--64w8f-eth0", GenerateName:"calico-apiserver-69b55bbf5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"119ea192-2966-40b8-aace-9d8b5df61791", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b55bbf5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c", Pod:"calico-apiserver-69b55bbf5b-64w8f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c14ee3f6f9", MAC:"fe:ed:b6:c7:12:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:06:23.103641 containerd[1501]: 2025-11-23 23:06:23.097 [INFO][4214] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c" Namespace="calico-apiserver" Pod="calico-apiserver-69b55bbf5b-64w8f" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b55bbf5b--64w8f-eth0" Nov 23 23:06:23.129676 containerd[1501]: time="2025-11-23T23:06:23.129618784Z" level=info msg="connecting to shim 1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c" address="unix:///run/containerd/s/ab75a64f0e399adddb0fd865936a41213e9c3710d6b6b1629fb1be0d7f4ac16e" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:06:23.156020 systemd[1]: Started cri-containerd-1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c.scope - libcontainer container 1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c. 
Nov 23 23:06:23.178067 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:06:23.183678 systemd-networkd[1417]: caliaa052a9d0f1: Link UP Nov 23 23:06:23.184518 systemd-networkd[1417]: caliaa052a9d0f1: Gained carrier Nov 23 23:06:23.204587 containerd[1501]: 2025-11-23 23:06:22.970 [INFO][4219] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:06:23.204587 containerd[1501]: 2025-11-23 23:06:22.991 [INFO][4219] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--n2c6t-eth0 csi-node-driver- calico-system 727c5b39-08b3-46f8-a2e4-48219c6016f9 698 0 2025-11-23 23:06:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-n2c6t eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaa052a9d0f1 [] [] }} ContainerID="de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a" Namespace="calico-system" Pod="csi-node-driver-n2c6t" WorkloadEndpoint="localhost-k8s-csi--node--driver--n2c6t-" Nov 23 23:06:23.204587 containerd[1501]: 2025-11-23 23:06:22.991 [INFO][4219] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a" Namespace="calico-system" Pod="csi-node-driver-n2c6t" WorkloadEndpoint="localhost-k8s-csi--node--driver--n2c6t-eth0" Nov 23 23:06:23.204587 containerd[1501]: 2025-11-23 23:06:23.023 [INFO][4243] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a" HandleID="k8s-pod-network.de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a" Workload="localhost-k8s-csi--node--driver--n2c6t-eth0" Nov 23 23:06:23.204911 containerd[1501]: 2025-11-23 23:06:23.024 [INFO][4243] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a" HandleID="k8s-pod-network.de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a" Workload="localhost-k8s-csi--node--driver--n2c6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d270), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-n2c6t", "timestamp":"2025-11-23 23:06:23.023657905 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:06:23.204911 containerd[1501]: 2025-11-23 23:06:23.024 [INFO][4243] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:06:23.204911 containerd[1501]: 2025-11-23 23:06:23.072 [INFO][4243] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:06:23.204911 containerd[1501]: 2025-11-23 23:06:23.073 [INFO][4243] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:06:23.204911 containerd[1501]: 2025-11-23 23:06:23.135 [INFO][4243] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a" host="localhost" Nov 23 23:06:23.204911 containerd[1501]: 2025-11-23 23:06:23.144 [INFO][4243] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:06:23.204911 containerd[1501]: 2025-11-23 23:06:23.154 [INFO][4243] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:06:23.204911 containerd[1501]: 2025-11-23 23:06:23.158 [INFO][4243] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:06:23.204911 containerd[1501]: 2025-11-23 23:06:23.162 [INFO][4243] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:06:23.204911 containerd[1501]: 2025-11-23 23:06:23.162 [INFO][4243] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a" host="localhost" Nov 23 23:06:23.205275 containerd[1501]: 2025-11-23 23:06:23.164 [INFO][4243] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a Nov 23 23:06:23.205275 containerd[1501]: 2025-11-23 23:06:23.170 [INFO][4243] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a" host="localhost" Nov 23 23:06:23.205275 containerd[1501]: 2025-11-23 23:06:23.178 [INFO][4243] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a" host="localhost" Nov 23 23:06:23.205275 containerd[1501]: 2025-11-23 23:06:23.178 [INFO][4243] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a" host="localhost" Nov 23 23:06:23.205275 containerd[1501]: 2025-11-23 23:06:23.178 [INFO][4243] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:06:23.205275 containerd[1501]: 2025-11-23 23:06:23.178 [INFO][4243] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a" HandleID="k8s-pod-network.de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a" Workload="localhost-k8s-csi--node--driver--n2c6t-eth0" Nov 23 23:06:23.205793 containerd[1501]: 2025-11-23 23:06:23.180 [INFO][4219] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a" Namespace="calico-system" Pod="csi-node-driver-n2c6t" WorkloadEndpoint="localhost-k8s-csi--node--driver--n2c6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--n2c6t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"727c5b39-08b3-46f8-a2e4-48219c6016f9", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 6, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-n2c6t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaa052a9d0f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:06:23.205877 containerd[1501]: 2025-11-23 23:06:23.180 [INFO][4219] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a" Namespace="calico-system" Pod="csi-node-driver-n2c6t" WorkloadEndpoint="localhost-k8s-csi--node--driver--n2c6t-eth0" Nov 23 23:06:23.205877 containerd[1501]: 2025-11-23 23:06:23.181 [INFO][4219] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa052a9d0f1 ContainerID="de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a" Namespace="calico-system" Pod="csi-node-driver-n2c6t" WorkloadEndpoint="localhost-k8s-csi--node--driver--n2c6t-eth0" Nov 23 23:06:23.205877 containerd[1501]: 2025-11-23 23:06:23.184 [INFO][4219] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a" Namespace="calico-system" Pod="csi-node-driver-n2c6t" WorkloadEndpoint="localhost-k8s-csi--node--driver--n2c6t-eth0" Nov 23 23:06:23.206014 containerd[1501]: 2025-11-23 23:06:23.185 [INFO][4219] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a" Namespace="calico-system" Pod="csi-node-driver-n2c6t" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--n2c6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--n2c6t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"727c5b39-08b3-46f8-a2e4-48219c6016f9", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 6, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a", Pod:"csi-node-driver-n2c6t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaa052a9d0f1", MAC:"d2:7d:48:50:d5:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:06:23.206702 containerd[1501]: 2025-11-23 23:06:23.200 [INFO][4219] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a" Namespace="calico-system" Pod="csi-node-driver-n2c6t" WorkloadEndpoint="localhost-k8s-csi--node--driver--n2c6t-eth0" Nov 23 23:06:23.214168 containerd[1501]: time="2025-11-23T23:06:23.213960582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b55bbf5b-64w8f,Uid:119ea192-2966-40b8-aace-9d8b5df61791,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1b60efb753559593f64577493ecd9be7259fe2be994c059d6bf330e17bba893c\"" Nov 23 23:06:23.216928 containerd[1501]: time="2025-11-23T23:06:23.216876755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:06:23.239680 containerd[1501]: time="2025-11-23T23:06:23.239323605Z" level=info msg="connecting to shim de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a" address="unix:///run/containerd/s/ffab79fafe862ae6a8228b00959c516dc895adb5c482bafa1d88bfca8e211025" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:06:23.303989 systemd[1]: Started cri-containerd-de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a.scope - libcontainer container de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a. 
Nov 23 23:06:23.317920 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:06:23.335053 containerd[1501]: time="2025-11-23T23:06:23.335000314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-n2c6t,Uid:727c5b39-08b3-46f8-a2e4-48219c6016f9,Namespace:calico-system,Attempt:0,} returns sandbox id \"de854f34d7f70981ca9ea0889087e86e8a7c97ff7cd54c2ac517e7667228a70a\"" Nov 23 23:06:23.434520 containerd[1501]: time="2025-11-23T23:06:23.434450488Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:23.435601 containerd[1501]: time="2025-11-23T23:06:23.435523591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:06:23.435681 containerd[1501]: time="2025-11-23T23:06:23.435620077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:06:23.435869 kubelet[2643]: E1123 23:06:23.435812 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:06:23.436212 kubelet[2643]: E1123 23:06:23.435871 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:06:23.436212 kubelet[2643]: E1123 23:06:23.436146 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhgc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69b55bbf5b-64w8f_calico-apiserver(119ea192-2966-40b8-aace-9d8b5df61791): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:23.438236 containerd[1501]: time="2025-11-23T23:06:23.436373962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:06:23.438334 kubelet[2643]: E1123 23:06:23.437549 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69b55bbf5b-64w8f" podUID="119ea192-2966-40b8-aace-9d8b5df61791" Nov 23 23:06:23.651284 containerd[1501]: time="2025-11-23T23:06:23.651066444Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:23.652325 containerd[1501]: time="2025-11-23T23:06:23.652121467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:06:23.652325 containerd[1501]: time="2025-11-23T23:06:23.652214832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:06:23.652450 kubelet[2643]: E1123 23:06:23.652414 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:06:23.652495 kubelet[2643]: E1123 23:06:23.652460 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:06:23.652909 kubelet[2643]: E1123 23:06:23.652592 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d6mkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n2c6t_calico-system(727c5b39-08b3-46f8-a2e4-48219c6016f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:23.654903 containerd[1501]: time="2025-11-23T23:06:23.654490007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:06:23.875038 containerd[1501]: time="2025-11-23T23:06:23.873872087Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:23.878046 containerd[1501]: time="2025-11-23T23:06:23.877783199Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:06:23.878046 containerd[1501]: time="2025-11-23T23:06:23.877784119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:06:23.879247 kubelet[2643]: E1123 23:06:23.878948 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:06:23.879247 kubelet[2643]: E1123 23:06:23.879002 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:06:23.880829 kubelet[2643]: E1123 23:06:23.879168 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d6mkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n2c6t_calico-system(727c5b39-08b3-46f8-a2e4-48219c6016f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:23.882045 kubelet[2643]: E1123 23:06:23.881995 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2c6t" podUID="727c5b39-08b3-46f8-a2e4-48219c6016f9" Nov 23 23:06:23.937623 containerd[1501]: time="2025-11-23T23:06:23.937422013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b55bbf5b-6sfqj,Uid:954fb416-d77f-4b69-ac56-30f7ed8932d5,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:06:23.938946 containerd[1501]: time="2025-11-23T23:06:23.937872520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w2tq7,Uid:9387e88e-25d6-44f5-a40f-c5700146aac3,Namespace:kube-system,Attempt:0,}" Nov 23 23:06:24.082801 systemd-networkd[1417]: cali7c9d3a77d4e: Link UP Nov 23 23:06:24.083001 systemd-networkd[1417]: cali7c9d3a77d4e: Gained carrier Nov 23 23:06:24.101662 containerd[1501]: 2025-11-23 23:06:23.984 [INFO][4394] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:06:24.101662 containerd[1501]: 2025-11-23 23:06:24.002 [INFO][4394] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--w2tq7-eth0 coredns-668d6bf9bc- kube-system 9387e88e-25d6-44f5-a40f-c5700146aac3 790 0 2025-11-23 23:05:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-w2tq7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7c9d3a77d4e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506" Namespace="kube-system" Pod="coredns-668d6bf9bc-w2tq7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w2tq7-" Nov 23 23:06:24.101662 containerd[1501]: 2025-11-23 23:06:24.002 [INFO][4394] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506" Namespace="kube-system" Pod="coredns-668d6bf9bc-w2tq7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w2tq7-eth0" Nov 23 23:06:24.101662 containerd[1501]: 2025-11-23 23:06:24.032 [INFO][4423] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506" HandleID="k8s-pod-network.25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506" Workload="localhost-k8s-coredns--668d6bf9bc--w2tq7-eth0" Nov 23 23:06:24.101948 containerd[1501]: 2025-11-23 23:06:24.032 [INFO][4423] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506" HandleID="k8s-pod-network.25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506" Workload="localhost-k8s-coredns--668d6bf9bc--w2tq7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001376c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-w2tq7", "timestamp":"2025-11-23 23:06:24.032261223 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:06:24.101948 containerd[1501]: 2025-11-23 23:06:24.032 [INFO][4423] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Nov 23 23:06:24.101948 containerd[1501]: 2025-11-23 23:06:24.032 [INFO][4423] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:06:24.101948 containerd[1501]: 2025-11-23 23:06:24.032 [INFO][4423] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:06:24.101948 containerd[1501]: 2025-11-23 23:06:24.044 [INFO][4423] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506" host="localhost" Nov 23 23:06:24.101948 containerd[1501]: 2025-11-23 23:06:24.050 [INFO][4423] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:06:24.101948 containerd[1501]: 2025-11-23 23:06:24.057 [INFO][4423] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:06:24.101948 containerd[1501]: 2025-11-23 23:06:24.060 [INFO][4423] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:06:24.101948 containerd[1501]: 2025-11-23 23:06:24.063 [INFO][4423] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:06:24.101948 containerd[1501]: 2025-11-23 23:06:24.063 [INFO][4423] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506" host="localhost" Nov 23 23:06:24.102173 containerd[1501]: 2025-11-23 23:06:24.065 [INFO][4423] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506 Nov 23 23:06:24.102173 containerd[1501]: 2025-11-23 23:06:24.070 [INFO][4423] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506" host="localhost" Nov 23 23:06:24.102173 containerd[1501]: 2025-11-23 23:06:24.078 [INFO][4423] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506" host="localhost" Nov 23 23:06:24.102173 containerd[1501]: 2025-11-23 23:06:24.078 [INFO][4423] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506" host="localhost" Nov 23 23:06:24.102173 containerd[1501]: 2025-11-23 23:06:24.078 [INFO][4423] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:06:24.102173 containerd[1501]: 2025-11-23 23:06:24.078 [INFO][4423] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506" HandleID="k8s-pod-network.25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506" Workload="localhost-k8s-coredns--668d6bf9bc--w2tq7-eth0" Nov 23 23:06:24.102310 containerd[1501]: 2025-11-23 23:06:24.080 [INFO][4394] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506" Namespace="kube-system" Pod="coredns-668d6bf9bc-w2tq7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w2tq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--w2tq7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9387e88e-25d6-44f5-a40f-c5700146aac3", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-w2tq7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c9d3a77d4e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:06:24.102376 containerd[1501]: 2025-11-23 23:06:24.080 [INFO][4394] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506" Namespace="kube-system" Pod="coredns-668d6bf9bc-w2tq7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w2tq7-eth0" Nov 23 23:06:24.102376 containerd[1501]: 2025-11-23 23:06:24.080 [INFO][4394] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c9d3a77d4e ContainerID="25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506" Namespace="kube-system" Pod="coredns-668d6bf9bc-w2tq7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w2tq7-eth0" Nov 23 23:06:24.102376 containerd[1501]: 2025-11-23 23:06:24.083 [INFO][4394] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506" Namespace="kube-system" Pod="coredns-668d6bf9bc-w2tq7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w2tq7-eth0" Nov 23 23:06:24.102449 
containerd[1501]: 2025-11-23 23:06:24.085 [INFO][4394] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506" Namespace="kube-system" Pod="coredns-668d6bf9bc-w2tq7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w2tq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--w2tq7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9387e88e-25d6-44f5-a40f-c5700146aac3", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506", Pod:"coredns-668d6bf9bc-w2tq7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c9d3a77d4e", MAC:"3e:7b:a7:65:68:25", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:06:24.102449 containerd[1501]: 2025-11-23 23:06:24.099 [INFO][4394] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506" Namespace="kube-system" Pod="coredns-668d6bf9bc-w2tq7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--w2tq7-eth0" Nov 23 23:06:24.106830 kubelet[2643]: E1123 23:06:24.106613 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69b55bbf5b-64w8f" podUID="119ea192-2966-40b8-aace-9d8b5df61791" Nov 23 23:06:24.124884 kubelet[2643]: E1123 23:06:24.123263 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed 
to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2c6t" podUID="727c5b39-08b3-46f8-a2e4-48219c6016f9" Nov 23 23:06:24.177622 containerd[1501]: time="2025-11-23T23:06:24.177574043Z" level=info msg="connecting to shim 25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506" address="unix:///run/containerd/s/9d6928fc9e60fc918c63bef549cc009aa9494bec0b7a5fb3afaaa0d37ab50edb" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:06:24.188303 systemd-networkd[1417]: calic8a0f2b4a11: Link UP Nov 23 23:06:24.188734 systemd-networkd[1417]: calic8a0f2b4a11: Gained carrier Nov 23 23:06:24.204324 containerd[1501]: 2025-11-23 23:06:23.990 [INFO][4395] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:06:24.204324 containerd[1501]: 2025-11-23 23:06:24.008 [INFO][4395] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69b55bbf5b--6sfqj-eth0 calico-apiserver-69b55bbf5b- calico-apiserver 954fb416-d77f-4b69-ac56-30f7ed8932d5 793 0 2025-11-23 23:05:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69b55bbf5b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-69b55bbf5b-6sfqj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic8a0f2b4a11 [] [] }} ContainerID="f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f" Namespace="calico-apiserver" Pod="calico-apiserver-69b55bbf5b-6sfqj" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b55bbf5b--6sfqj-" Nov 23 23:06:24.204324 containerd[1501]: 2025-11-23 23:06:24.008 [INFO][4395] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f" Namespace="calico-apiserver" Pod="calico-apiserver-69b55bbf5b-6sfqj" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b55bbf5b--6sfqj-eth0" Nov 23 23:06:24.204324 containerd[1501]: 2025-11-23 23:06:24.042 [INFO][4429] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f" HandleID="k8s-pod-network.f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f" Workload="localhost-k8s-calico--apiserver--69b55bbf5b--6sfqj-eth0" Nov 23 23:06:24.204324 containerd[1501]: 2025-11-23 23:06:24.042 [INFO][4429] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f" HandleID="k8s-pod-network.f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f" Workload="localhost-k8s-calico--apiserver--69b55bbf5b--6sfqj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ca30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", 
"pod":"calico-apiserver-69b55bbf5b-6sfqj", "timestamp":"2025-11-23 23:06:24.042729787 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:06:24.204324 containerd[1501]: 2025-11-23 23:06:24.043 [INFO][4429] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:06:24.204324 containerd[1501]: 2025-11-23 23:06:24.078 [INFO][4429] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:06:24.204324 containerd[1501]: 2025-11-23 23:06:24.078 [INFO][4429] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:06:24.204324 containerd[1501]: 2025-11-23 23:06:24.145 [INFO][4429] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f" host="localhost" Nov 23 23:06:24.204324 containerd[1501]: 2025-11-23 23:06:24.152 [INFO][4429] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:06:24.204324 containerd[1501]: 2025-11-23 23:06:24.158 [INFO][4429] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:06:24.204324 containerd[1501]: 2025-11-23 23:06:24.160 [INFO][4429] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:06:24.204324 containerd[1501]: 2025-11-23 23:06:24.163 [INFO][4429] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:06:24.204324 containerd[1501]: 2025-11-23 23:06:24.163 [INFO][4429] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f" host="localhost" Nov 23 23:06:24.204324 containerd[1501]: 2025-11-23 23:06:24.167 [INFO][4429] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f Nov 23 23:06:24.204324 containerd[1501]: 2025-11-23 23:06:24.171 [INFO][4429] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f" host="localhost" Nov 23 23:06:24.204324 containerd[1501]: 2025-11-23 23:06:24.180 [INFO][4429] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f" host="localhost" Nov 23 23:06:24.204324 containerd[1501]: 2025-11-23 23:06:24.180 [INFO][4429] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f" host="localhost" Nov 23 23:06:24.204324 containerd[1501]: 2025-11-23 23:06:24.180 [INFO][4429] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:06:24.204324 containerd[1501]: 2025-11-23 23:06:24.180 [INFO][4429] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f" HandleID="k8s-pod-network.f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f" Workload="localhost-k8s-calico--apiserver--69b55bbf5b--6sfqj-eth0" Nov 23 23:06:24.204985 containerd[1501]: 2025-11-23 23:06:24.185 [INFO][4395] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f" Namespace="calico-apiserver" Pod="calico-apiserver-69b55bbf5b-6sfqj" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b55bbf5b--6sfqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69b55bbf5b--6sfqj-eth0", GenerateName:"calico-apiserver-69b55bbf5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"954fb416-d77f-4b69-ac56-30f7ed8932d5", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b55bbf5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69b55bbf5b-6sfqj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic8a0f2b4a11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:06:24.204985 containerd[1501]: 2025-11-23 23:06:24.186 [INFO][4395] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f" Namespace="calico-apiserver" Pod="calico-apiserver-69b55bbf5b-6sfqj" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b55bbf5b--6sfqj-eth0" Nov 23 23:06:24.204985 containerd[1501]: 2025-11-23 23:06:24.186 [INFO][4395] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic8a0f2b4a11 ContainerID="f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f" Namespace="calico-apiserver" Pod="calico-apiserver-69b55bbf5b-6sfqj" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b55bbf5b--6sfqj-eth0" Nov 23 23:06:24.204985 containerd[1501]: 2025-11-23 23:06:24.190 [INFO][4395] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f" Namespace="calico-apiserver" Pod="calico-apiserver-69b55bbf5b-6sfqj" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b55bbf5b--6sfqj-eth0" Nov 23 23:06:24.204985 containerd[1501]: 2025-11-23 23:06:24.191 [INFO][4395] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f" Namespace="calico-apiserver" Pod="calico-apiserver-69b55bbf5b-6sfqj" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b55bbf5b--6sfqj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69b55bbf5b--6sfqj-eth0", GenerateName:"calico-apiserver-69b55bbf5b-", Namespace:"calico-apiserver", SelfLink:"", UID:"954fb416-d77f-4b69-ac56-30f7ed8932d5", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b55bbf5b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f", Pod:"calico-apiserver-69b55bbf5b-6sfqj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic8a0f2b4a11", MAC:"42:95:c0:2c:e4:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:06:24.204985 containerd[1501]: 2025-11-23 23:06:24.201 [INFO][4395] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f" Namespace="calico-apiserver" Pod="calico-apiserver-69b55bbf5b-6sfqj" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b55bbf5b--6sfqj-eth0" Nov 23 23:06:24.205986 systemd[1]: Started cri-containerd-25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506.scope - libcontainer container 25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506. 
Nov 23 23:06:24.222032 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:06:24.234121 containerd[1501]: time="2025-11-23T23:06:24.234066460Z" level=info msg="connecting to shim f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f" address="unix:///run/containerd/s/dfa31bbbde4aecfa659b2a847c54ebc2ee9b3f191912221cf1d2e6a93373c8f4" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:06:24.247924 containerd[1501]: time="2025-11-23T23:06:24.247679525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-w2tq7,Uid:9387e88e-25d6-44f5-a40f-c5700146aac3,Namespace:kube-system,Attempt:0,} returns sandbox id \"25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506\"" Nov 23 23:06:24.253677 containerd[1501]: time="2025-11-23T23:06:24.253610667Z" level=info msg="CreateContainer within sandbox \"25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:06:24.257979 systemd[1]: Started cri-containerd-f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f.scope - libcontainer container f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f. Nov 23 23:06:24.265966 containerd[1501]: time="2025-11-23T23:06:24.265912857Z" level=info msg="Container 385af97aa8a9851921e65c83ea1eb702ef7b210ed86c4e9d87509e6c34350ddc: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:06:24.272473 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:06:24.278543 containerd[1501]: time="2025-11-23T23:06:24.278479981Z" level=info msg="CreateContainer within sandbox \"25be2c6fd16077d425512eaa2a333a5f6794d9fd0325d5dddbf2340ae4049506\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"385af97aa8a9851921e65c83ea1eb702ef7b210ed86c4e9d87509e6c34350ddc\"" Nov 23 23:06:24.279164 containerd[1501]: time="2025-11-23T23:06:24.279114338Z" level=info msg="StartContainer for \"385af97aa8a9851921e65c83ea1eb702ef7b210ed86c4e9d87509e6c34350ddc\"" Nov 23 23:06:24.280331 containerd[1501]: time="2025-11-23T23:06:24.280303166Z" level=info msg="connecting to shim 385af97aa8a9851921e65c83ea1eb702ef7b210ed86c4e9d87509e6c34350ddc" address="unix:///run/containerd/s/9d6928fc9e60fc918c63bef549cc009aa9494bec0b7a5fb3afaaa0d37ab50edb" protocol=ttrpc version=3 Nov 23 23:06:24.304989 systemd[1]: Started cri-containerd-385af97aa8a9851921e65c83ea1eb702ef7b210ed86c4e9d87509e6c34350ddc.scope - libcontainer container 385af97aa8a9851921e65c83ea1eb702ef7b210ed86c4e9d87509e6c34350ddc. 
Nov 23 23:06:24.308463 containerd[1501]: time="2025-11-23T23:06:24.308387786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b55bbf5b-6sfqj,Uid:954fb416-d77f-4b69-ac56-30f7ed8932d5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f9f7481a0d5ac9f784fdad2aed9bbcb07b88c04157cba7ddb2e461798db66f1f\"" Nov 23 23:06:24.309793 containerd[1501]: time="2025-11-23T23:06:24.309741424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:06:24.340532 containerd[1501]: time="2025-11-23T23:06:24.340092254Z" level=info msg="StartContainer for \"385af97aa8a9851921e65c83ea1eb702ef7b210ed86c4e9d87509e6c34350ddc\" returns successfully" Nov 23 23:06:24.519184 containerd[1501]: time="2025-11-23T23:06:24.519118097Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:24.520180 containerd[1501]: time="2025-11-23T23:06:24.520112235Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:06:24.520244 containerd[1501]: time="2025-11-23T23:06:24.520163598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:06:24.520383 kubelet[2643]: E1123 23:06:24.520336 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:06:24.520735 kubelet[2643]: E1123 23:06:24.520395 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:06:24.520735 kubelet[2643]: E1123 23:06:24.520562 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jnqf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69b55bbf5b-6sfqj_calico-apiserver(954fb416-d77f-4b69-ac56-30f7ed8932d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:24.522033 kubelet[2643]: E1123 23:06:24.521968 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69b55bbf5b-6sfqj" podUID="954fb416-d77f-4b69-ac56-30f7ed8932d5" Nov 23 23:06:24.893004 systemd-networkd[1417]: caliaa052a9d0f1: Gained IPv6LL Nov 23 23:06:24.948214 containerd[1501]: time="2025-11-23T23:06:24.948179759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77f5b99886-phxmt,Uid:acc2e189-3c10-4cac-9d7e-5131c7f8c476,Namespace:calico-system,Attempt:0,}" Nov 23 23:06:24.948588 containerd[1501]: time="2025-11-23T23:06:24.948230882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jfgkh,Uid:a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16,Namespace:calico-system,Attempt:0,}" Nov 23 23:06:25.022099 systemd-networkd[1417]: cali4c14ee3f6f9: Gained IPv6LL Nov 23 23:06:25.106422 systemd-networkd[1417]: calic120c8278f6: Link UP Nov 23 23:06:25.106577 systemd-networkd[1417]: calic120c8278f6: Gained carrier Nov 23 23:06:25.113232 kubelet[2643]: E1123 23:06:25.112613 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69b55bbf5b-6sfqj" podUID="954fb416-d77f-4b69-ac56-30f7ed8932d5" Nov 23 23:06:25.118775 kubelet[2643]: E1123 23:06:25.118725 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69b55bbf5b-64w8f" podUID="119ea192-2966-40b8-aace-9d8b5df61791" Nov 23 23:06:25.120308 kubelet[2643]: E1123 23:06:25.120244 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2c6t" podUID="727c5b39-08b3-46f8-a2e4-48219c6016f9" Nov 23 23:06:25.129229 containerd[1501]: 2025-11-23 23:06:25.003 [INFO][4601] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:06:25.129229 containerd[1501]: 2025-11-23 23:06:25.022 [INFO][4601] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--jfgkh-eth0 goldmane-666569f655- calico-system a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16 795 0 2025-11-23 23:05:59 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-jfgkh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic120c8278f6 [] [] }} ContainerID="df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd" Namespace="calico-system" Pod="goldmane-666569f655-jfgkh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jfgkh-" Nov 23 23:06:25.129229 containerd[1501]: 2025-11-23 23:06:25.022 [INFO][4601] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd" Namespace="calico-system" Pod="goldmane-666569f655-jfgkh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jfgkh-eth0" Nov 23 23:06:25.129229 containerd[1501]: 2025-11-23 23:06:25.054 [INFO][4632] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd" HandleID="k8s-pod-network.df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd" Workload="localhost-k8s-goldmane--666569f655--jfgkh-eth0" Nov 23 23:06:25.129229 containerd[1501]: 2025-11-23 23:06:25.054 [INFO][4632] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd" HandleID="k8s-pod-network.df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd" Workload="localhost-k8s-goldmane--666569f655--jfgkh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x400051cb10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-jfgkh", "timestamp":"2025-11-23 23:06:25.054278914 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:06:25.129229 containerd[1501]: 2025-11-23 23:06:25.054 [INFO][4632] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:06:25.129229 containerd[1501]: 2025-11-23 23:06:25.055 [INFO][4632] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:06:25.129229 containerd[1501]: 2025-11-23 23:06:25.055 [INFO][4632] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:06:25.129229 containerd[1501]: 2025-11-23 23:06:25.073 [INFO][4632] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd" host="localhost" Nov 23 23:06:25.129229 containerd[1501]: 2025-11-23 23:06:25.077 [INFO][4632] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:06:25.129229 containerd[1501]: 2025-11-23 23:06:25.082 [INFO][4632] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:06:25.129229 containerd[1501]: 2025-11-23 23:06:25.084 [INFO][4632] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:06:25.129229 containerd[1501]: 2025-11-23 23:06:25.086 [INFO][4632] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:06:25.129229 containerd[1501]: 2025-11-23 23:06:25.086 [INFO][4632] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd" host="localhost" Nov 23 23:06:25.129229 containerd[1501]: 2025-11-23 23:06:25.089 [INFO][4632] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd Nov 23 23:06:25.129229 containerd[1501]: 2025-11-23 23:06:25.093 [INFO][4632] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd" host="localhost" Nov 23 23:06:25.129229 containerd[1501]: 2025-11-23 23:06:25.100 [INFO][4632] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd" host="localhost" Nov 23 23:06:25.129229 containerd[1501]: 2025-11-23 23:06:25.101 [INFO][4632] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd" host="localhost" Nov 23 23:06:25.129229 containerd[1501]: 2025-11-23 23:06:25.101 [INFO][4632] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:06:25.129229 containerd[1501]: 2025-11-23 23:06:25.101 [INFO][4632] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd" HandleID="k8s-pod-network.df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd" Workload="localhost-k8s-goldmane--666569f655--jfgkh-eth0" Nov 23 23:06:25.130854 containerd[1501]: 2025-11-23 23:06:25.103 [INFO][4601] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd" Namespace="calico-system" Pod="goldmane-666569f655-jfgkh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jfgkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--jfgkh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-jfgkh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic120c8278f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:06:25.130854 containerd[1501]: 2025-11-23 23:06:25.104 [INFO][4601] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd" Namespace="calico-system" Pod="goldmane-666569f655-jfgkh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jfgkh-eth0" Nov 23 23:06:25.130854 containerd[1501]: 2025-11-23 23:06:25.104 [INFO][4601] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic120c8278f6 ContainerID="df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd" Namespace="calico-system" Pod="goldmane-666569f655-jfgkh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jfgkh-eth0" Nov 23 23:06:25.130854 containerd[1501]: 2025-11-23 23:06:25.106 [INFO][4601] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd" Namespace="calico-system" Pod="goldmane-666569f655-jfgkh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jfgkh-eth0" Nov 23 23:06:25.130854 containerd[1501]: 2025-11-23 23:06:25.106 [INFO][4601] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd" Namespace="calico-system" Pod="goldmane-666569f655-jfgkh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jfgkh-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--jfgkh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd", Pod:"goldmane-666569f655-jfgkh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic120c8278f6", MAC:"92:14:4d:ed:11:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:06:25.130854 containerd[1501]: 2025-11-23 23:06:25.123 [INFO][4601] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd" Namespace="calico-system" Pod="goldmane-666569f655-jfgkh" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--jfgkh-eth0" Nov 23 23:06:25.158359 kubelet[2643]: I1123 23:06:25.158219 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-w2tq7" podStartSLOduration=39.158205147 podStartE2EDuration="39.158205147s" podCreationTimestamp="2025-11-23 23:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:06:25.157794404 +0000 UTC m=+44.328061029" watchObservedRunningTime="2025-11-23 23:06:25.158205147 +0000 UTC m=+44.328471772" Nov 23 23:06:25.174943 containerd[1501]: time="2025-11-23T23:06:25.174805078Z" level=info msg="connecting to shim df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd" address="unix:///run/containerd/s/b0b7b807726330258ee85e9b442fd01d8f611b0bd30dac9ded5477142d3059ea" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:06:25.209958 systemd[1]: Started cri-containerd-df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd.scope - libcontainer container df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd. 
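The pod_startup_latency_tracker entry above reports podStartSLOduration=39.158205147s for coredns-668d6bf9bc-w2tq7; that figure is simply watchObservedRunningTime (23:06:25.158205147) minus podCreationTimestamp (23:05:46). A minimal sketch reproducing the arithmetic with Go's time package (the layout string is an assumption about how to parse the timestamps as printed here, not kubelet code; parse errors are ignored for brevity):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps exactly as they appear in the kubelet entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, _ := time.Parse(layout, "2025-11-23 23:05:46 +0000 UTC")
	running, _ := time.Parse(layout, "2025-11-23 23:06:25.158205147 +0000 UTC")

	// Prints 39.158205147s, matching the reported podStartSLOduration.
	fmt.Println(running.Sub(created))
}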
Nov 23 23:06:25.228669 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:06:25.238005 systemd-networkd[1417]: cali435ebefa4e0: Link UP Nov 23 23:06:25.239055 systemd-networkd[1417]: cali435ebefa4e0: Gained carrier Nov 23 23:06:25.258357 containerd[1501]: 2025-11-23 23:06:24.996 [INFO][4594] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:06:25.258357 containerd[1501]: 2025-11-23 23:06:25.016 [INFO][4594] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--77f5b99886--phxmt-eth0 calico-kube-controllers-77f5b99886- calico-system acc2e189-3c10-4cac-9d7e-5131c7f8c476 796 0 2025-11-23 23:06:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77f5b99886 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-77f5b99886-phxmt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali435ebefa4e0 [] [] }} ContainerID="3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2" Namespace="calico-system" Pod="calico-kube-controllers-77f5b99886-phxmt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f5b99886--phxmt-" Nov 23 23:06:25.258357 containerd[1501]: 2025-11-23 23:06:25.016 [INFO][4594] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2" Namespace="calico-system" Pod="calico-kube-controllers-77f5b99886-phxmt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f5b99886--phxmt-eth0" Nov 23 23:06:25.258357 containerd[1501]: 2025-11-23 23:06:25.070 [INFO][4630] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2" HandleID="k8s-pod-network.3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2" Workload="localhost-k8s-calico--kube--controllers--77f5b99886--phxmt-eth0" Nov 23 23:06:25.258357 containerd[1501]: 2025-11-23 23:06:25.071 [INFO][4630] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2" HandleID="k8s-pod-network.3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2" Workload="localhost-k8s-calico--kube--controllers--77f5b99886--phxmt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-77f5b99886-phxmt", "timestamp":"2025-11-23 23:06:25.070886886 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:06:25.258357 containerd[1501]: 2025-11-23 23:06:25.071 [INFO][4630] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:06:25.258357 containerd[1501]: 2025-11-23 23:06:25.101 [INFO][4630] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
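systemd-networkd reports each Calico host-side veth (calic120c8278f6 above, cali435ebefa4e0 here) coming up and gaining carrier as the corresponding endpoint is plumbed. A hedged sketch, using only Go's standard net package, of how one might list such interfaces on the host and confirm the up flag and MAC recorded in the endpoint; this is an illustration, not part of Calico or systemd:

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		// Calico names its host-side veths with a "cali" prefix, as seen in this log.
		if !strings.HasPrefix(ifc.Name, "cali") {
			continue
		}
		up := ifc.Flags&net.FlagUp != 0
		fmt.Printf("%-16s up=%v mac=%s mtu=%d\n", ifc.Name, up, ifc.HardwareAddr, ifc.MTU)
	}
}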
Nov 23 23:06:25.258357 containerd[1501]: 2025-11-23 23:06:25.101 [INFO][4630] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:06:25.258357 containerd[1501]: 2025-11-23 23:06:25.174 [INFO][4630] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2" host="localhost" Nov 23 23:06:25.258357 containerd[1501]: 2025-11-23 23:06:25.200 [INFO][4630] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:06:25.258357 containerd[1501]: 2025-11-23 23:06:25.208 [INFO][4630] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:06:25.258357 containerd[1501]: 2025-11-23 23:06:25.211 [INFO][4630] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:06:25.258357 containerd[1501]: 2025-11-23 23:06:25.216 [INFO][4630] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:06:25.258357 containerd[1501]: 2025-11-23 23:06:25.216 [INFO][4630] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2" host="localhost" Nov 23 23:06:25.258357 containerd[1501]: 2025-11-23 23:06:25.219 [INFO][4630] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2 Nov 23 23:06:25.258357 containerd[1501]: 2025-11-23 23:06:25.223 [INFO][4630] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2" host="localhost" Nov 23 23:06:25.258357 containerd[1501]: 2025-11-23 23:06:25.232 [INFO][4630] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2" host="localhost" Nov 23 23:06:25.258357 containerd[1501]: 2025-11-23 23:06:25.232 [INFO][4630] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2" host="localhost" Nov 23 23:06:25.258357 containerd[1501]: 2025-11-23 23:06:25.232 [INFO][4630] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:06:25.258357 containerd[1501]: 2025-11-23 23:06:25.232 [INFO][4630] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2" HandleID="k8s-pod-network.3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2" Workload="localhost-k8s-calico--kube--controllers--77f5b99886--phxmt-eth0" Nov 23 23:06:25.258994 containerd[1501]: 2025-11-23 23:06:25.234 [INFO][4594] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2" Namespace="calico-system" Pod="calico-kube-controllers-77f5b99886-phxmt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f5b99886--phxmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77f5b99886--phxmt-eth0", GenerateName:"calico-kube-controllers-77f5b99886-", Namespace:"calico-system", SelfLink:"", UID:"acc2e189-3c10-4cac-9d7e-5131c7f8c476", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 6, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77f5b99886", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-77f5b99886-phxmt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali435ebefa4e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:06:25.258994 containerd[1501]: 2025-11-23 23:06:25.234 [INFO][4594] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2" Namespace="calico-system" Pod="calico-kube-controllers-77f5b99886-phxmt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f5b99886--phxmt-eth0" Nov 23 23:06:25.258994 containerd[1501]: 2025-11-23 23:06:25.234 [INFO][4594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali435ebefa4e0 ContainerID="3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2" Namespace="calico-system" Pod="calico-kube-controllers-77f5b99886-phxmt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f5b99886--phxmt-eth0" Nov 23 23:06:25.258994 containerd[1501]: 2025-11-23 23:06:25.240 [INFO][4594] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2" Namespace="calico-system" Pod="calico-kube-controllers-77f5b99886-phxmt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f5b99886--phxmt-eth0" Nov 23 23:06:25.258994 containerd[1501]: 2025-11-23 23:06:25.241 [INFO][4594] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2" Namespace="calico-system" Pod="calico-kube-controllers-77f5b99886-phxmt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f5b99886--phxmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77f5b99886--phxmt-eth0", GenerateName:"calico-kube-controllers-77f5b99886-", Namespace:"calico-system", SelfLink:"", UID:"acc2e189-3c10-4cac-9d7e-5131c7f8c476", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 6, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77f5b99886", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2", Pod:"calico-kube-controllers-77f5b99886-phxmt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali435ebefa4e0", MAC:"6a:71:d1:d1:2e:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:06:25.258994 containerd[1501]: 2025-11-23 23:06:25.255 [INFO][4594] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2" Namespace="calico-system" Pod="calico-kube-controllers-77f5b99886-phxmt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77f5b99886--phxmt-eth0" Nov 23 23:06:25.272178 containerd[1501]: time="2025-11-23T23:06:25.272128260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jfgkh,Uid:a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16,Namespace:calico-system,Attempt:0,} returns sandbox id \"df58b8ad03ee5b42ca107215ce275425ed7e9e694d31762fae31ab3f19b0f9fd\"" Nov 23 23:06:25.273986 containerd[1501]: time="2025-11-23T23:06:25.273950962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:06:25.289315 containerd[1501]: time="2025-11-23T23:06:25.289252381Z" level=info msg="connecting to shim 3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2" address="unix:///run/containerd/s/9bca868461184e3338df1427381f26e16ef054e7bd4e87eab8b9c5d19b99ea6b" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:06:25.314980 systemd[1]: Started cri-containerd-3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2.scope - libcontainer container 3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2. 
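Several pulls in this log fail because the ghcr.io/flatcar/calico/* tags at v3.30.4 cannot be resolved ("fetch failed after status: 404 Not Found"). One way to reproduce that outside the kubelet is to query the OCI distribution-spec manifest endpoint for the tag directly. The sketch below is an assumption-laden illustration rather than a supported tool: the URL follows the standard /v2/<repo>/manifests/<tag> pattern, and ghcr.io normally answers 401 until a free anonymous bearer token from its token endpoint is attached, a round-trip omitted here to keep the sketch short:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Manifest endpoint for the tag that fails to resolve in this log.
	url := "https://ghcr.io/v2/flatcar/calico/apiserver/manifests/v3.30.4"

	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// With credentials attached, a missing tag shows up as 404 Not Found,
	// which is what containerd reports above; without them expect 401.
	fmt.Println(resp.Status)
}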
Nov 23 23:06:25.327148 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:06:25.350760 containerd[1501]: time="2025-11-23T23:06:25.350713550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77f5b99886-phxmt,Uid:acc2e189-3c10-4cac-9d7e-5131c7f8c476,Namespace:calico-system,Attempt:0,} returns sandbox id \"3a4c39d7503e8f03449e4922483cae685cb6e26e788fea485bd6637bb7865cf2\"" Nov 23 23:06:25.404897 systemd-networkd[1417]: cali7c9d3a77d4e: Gained IPv6LL Nov 23 23:06:25.497532 containerd[1501]: time="2025-11-23T23:06:25.497482987Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:25.498594 containerd[1501]: time="2025-11-23T23:06:25.498555727Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:06:25.498694 containerd[1501]: time="2025-11-23T23:06:25.498593289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:06:25.498872 kubelet[2643]: E1123 23:06:25.498819 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:06:25.498933 kubelet[2643]: E1123 23:06:25.498874 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:06:25.499316 kubelet[2643]: E1123 23:06:25.499068 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bnv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jfgkh_calico-system(a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:25.499553 containerd[1501]: time="2025-11-23T23:06:25.499522181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:06:25.500355 kubelet[2643]: E1123 23:06:25.500322 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: 
not found\"" pod="calico-system/goldmane-666569f655-jfgkh" podUID="a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16" Nov 23 23:06:25.597944 systemd-networkd[1417]: calic8a0f2b4a11: Gained IPv6LL Nov 23 23:06:25.673426 kubelet[2643]: I1123 23:06:25.673383 2643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 23:06:25.706707 containerd[1501]: time="2025-11-23T23:06:25.706391751Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:25.721100 containerd[1501]: time="2025-11-23T23:06:25.720981849Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:06:25.721100 containerd[1501]: time="2025-11-23T23:06:25.721067414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:06:25.721319 kubelet[2643]: E1123 23:06:25.721263 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:06:25.721375 kubelet[2643]: E1123 23:06:25.721332 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:06:25.721522 kubelet[2643]: E1123 23:06:25.721466 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4987,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77f5b99886-phxmt_calico-system(acc2e189-3c10-4cac-9d7e-5131c7f8c476): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:25.723012 kubelet[2643]: E1123 23:06:25.722963 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77f5b99886-phxmt" podUID="acc2e189-3c10-4cac-9d7e-5131c7f8c476" Nov 23 23:06:26.123547 kubelet[2643]: E1123 23:06:26.123501 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77f5b99886-phxmt" podUID="acc2e189-3c10-4cac-9d7e-5131c7f8c476" Nov 23 23:06:26.127606 kubelet[2643]: E1123 23:06:26.127567 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69b55bbf5b-6sfqj" podUID="954fb416-d77f-4b69-ac56-30f7ed8932d5" Nov 23 23:06:26.128028 kubelet[2643]: E1123 23:06:26.127641 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jfgkh" podUID="a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16" Nov 23 23:06:26.156026 systemd[1]: Started sshd@8-10.0.0.48:22-10.0.0.1:48870.service - OpenSSH per-connection server daemon (10.0.0.1:48870). Nov 23 23:06:26.228797 sshd[4788]: Accepted publickey for core from 10.0.0.1 port 48870 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:06:26.232010 sshd-session[4788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:26.237993 systemd-logind[1486]: New session 9 of user core. Nov 23 23:06:26.247126 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 23 23:06:26.399086 sshd[4808]: Connection closed by 10.0.0.1 port 48870 Nov 23 23:06:26.400135 sshd-session[4788]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:26.405363 systemd[1]: sshd@8-10.0.0.48:22-10.0.0.1:48870.service: Deactivated successfully. Nov 23 23:06:26.409006 systemd[1]: session-9.scope: Deactivated successfully. Nov 23 23:06:26.409669 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit. Nov 23 23:06:26.411090 systemd-logind[1486]: Removed session 9. Nov 23 23:06:26.684871 systemd-networkd[1417]: cali435ebefa4e0: Gained IPv6LL Nov 23 23:06:26.909258 systemd-networkd[1417]: vxlan.calico: Link UP Nov 23 23:06:26.909267 systemd-networkd[1417]: vxlan.calico: Gained carrier Nov 23 23:06:26.942945 containerd[1501]: time="2025-11-23T23:06:26.942785966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l2lxn,Uid:cd12f407-ade9-47ae-874f-ddb6030dd593,Namespace:kube-system,Attempt:0,}" Nov 23 23:06:27.117017 systemd-networkd[1417]: cali9ea45eb10e6: Link UP Nov 23 23:06:27.119534 systemd-networkd[1417]: cali9ea45eb10e6: Gained carrier Nov 23 23:06:27.133452 systemd-networkd[1417]: calic120c8278f6: Gained IPv6LL Nov 23 23:06:27.138929 kubelet[2643]: E1123 23:06:27.138878 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jfgkh" podUID="a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16" Nov 23 23:06:27.139479 kubelet[2643]: E1123 23:06:27.139288 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77f5b99886-phxmt" podUID="acc2e189-3c10-4cac-9d7e-5131c7f8c476" Nov 23 
23:06:27.140969 containerd[1501]: 2025-11-23 23:06:26.993 [INFO][4881] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--l2lxn-eth0 coredns-668d6bf9bc- kube-system cd12f407-ade9-47ae-874f-ddb6030dd593 786 0 2025-11-23 23:05:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-l2lxn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9ea45eb10e6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-l2lxn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l2lxn-" Nov 23 23:06:27.140969 containerd[1501]: 2025-11-23 23:06:26.993 [INFO][4881] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-l2lxn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l2lxn-eth0" Nov 23 23:06:27.140969 containerd[1501]: 2025-11-23 23:06:27.034 [INFO][4894] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3" HandleID="k8s-pod-network.ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3" Workload="localhost-k8s-coredns--668d6bf9bc--l2lxn-eth0" Nov 23 23:06:27.140969 containerd[1501]: 2025-11-23 23:06:27.035 [INFO][4894] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3" HandleID="k8s-pod-network.ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3" Workload="localhost-k8s-coredns--668d6bf9bc--l2lxn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3130), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-l2lxn", "timestamp":"2025-11-23 23:06:27.034981193 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:06:27.140969 containerd[1501]: 2025-11-23 23:06:27.035 [INFO][4894] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:06:27.140969 containerd[1501]: 2025-11-23 23:06:27.035 [INFO][4894] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:06:27.140969 containerd[1501]: 2025-11-23 23:06:27.035 [INFO][4894] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 23 23:06:27.140969 containerd[1501]: 2025-11-23 23:06:27.052 [INFO][4894] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3" host="localhost" Nov 23 23:06:27.140969 containerd[1501]: 2025-11-23 23:06:27.061 [INFO][4894] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 23 23:06:27.140969 containerd[1501]: 2025-11-23 23:06:27.083 [INFO][4894] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 23 23:06:27.140969 containerd[1501]: 2025-11-23 23:06:27.085 [INFO][4894] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 23 23:06:27.140969 containerd[1501]: 2025-11-23 23:06:27.093 [INFO][4894] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 23 23:06:27.140969 containerd[1501]: 2025-11-23 23:06:27.093 [INFO][4894] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3" host="localhost" Nov 23 23:06:27.140969 containerd[1501]: 2025-11-23 23:06:27.096 [INFO][4894] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3 Nov 23 23:06:27.140969 containerd[1501]: 2025-11-23 23:06:27.101 [INFO][4894] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3" host="localhost" Nov 23 23:06:27.140969 containerd[1501]: 2025-11-23 23:06:27.110 [INFO][4894] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3" host="localhost" Nov 23 23:06:27.140969 containerd[1501]: 2025-11-23 23:06:27.110 [INFO][4894] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3" host="localhost" Nov 23 23:06:27.140969 containerd[1501]: 2025-11-23 23:06:27.110 [INFO][4894] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:06:27.140969 containerd[1501]: 2025-11-23 23:06:27.110 [INFO][4894] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3" HandleID="k8s-pod-network.ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3" Workload="localhost-k8s-coredns--668d6bf9bc--l2lxn-eth0" Nov 23 23:06:27.141541 containerd[1501]: 2025-11-23 23:06:27.114 [INFO][4881] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-l2lxn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l2lxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--l2lxn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cd12f407-ade9-47ae-874f-ddb6030dd593", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-l2lxn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ea45eb10e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:06:27.141541 containerd[1501]: 2025-11-23 23:06:27.114 [INFO][4881] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-l2lxn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l2lxn-eth0" Nov 23 23:06:27.141541 containerd[1501]: 2025-11-23 23:06:27.114 [INFO][4881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ea45eb10e6 ContainerID="ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-l2lxn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l2lxn-eth0" Nov 23 23:06:27.141541 containerd[1501]: 2025-11-23 23:06:27.118 [INFO][4881] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-l2lxn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l2lxn-eth0" Nov 23 23:06:27.141541 
containerd[1501]: 2025-11-23 23:06:27.118 [INFO][4881] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-l2lxn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l2lxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--l2lxn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cd12f407-ade9-47ae-874f-ddb6030dd593", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 5, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3", Pod:"coredns-668d6bf9bc-l2lxn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ea45eb10e6", MAC:"b6:bb:48:e7:38:3a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:06:27.141541 containerd[1501]: 2025-11-23 23:06:27.130 [INFO][4881] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-l2lxn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--l2lxn-eth0" Nov 23 23:06:27.200107 containerd[1501]: time="2025-11-23T23:06:27.199985886Z" level=info msg="connecting to shim ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3" address="unix:///run/containerd/s/03b17e17bcfe73080ee263c76d04efa24fc9817482c9d9ad82417e7944863472" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:06:27.246016 systemd[1]: Started cri-containerd-ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3.scope - libcontainer container ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3. 
Nov 23 23:06:27.266126 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 23 23:06:27.305068 containerd[1501]: time="2025-11-23T23:06:27.304995710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l2lxn,Uid:cd12f407-ade9-47ae-874f-ddb6030dd593,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3\"" Nov 23 23:06:27.308092 containerd[1501]: time="2025-11-23T23:06:27.308049272Z" level=info msg="CreateContainer within sandbox \"ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:06:27.328442 containerd[1501]: time="2025-11-23T23:06:27.327722318Z" level=info msg="Container cb8c2b84c19fd83091902136f3ef5061252159221377eceb1df46205db67cff2: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:06:27.339065 containerd[1501]: time="2025-11-23T23:06:27.338985077Z" level=info msg="CreateContainer within sandbox \"ae43995821f8577886e7407ec961704cf07b0cf2b501426540ceda7d48fae5c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cb8c2b84c19fd83091902136f3ef5061252159221377eceb1df46205db67cff2\"" Nov 23 23:06:27.339740 containerd[1501]: time="2025-11-23T23:06:27.339710156Z" level=info msg="StartContainer for \"cb8c2b84c19fd83091902136f3ef5061252159221377eceb1df46205db67cff2\"" Nov 23 23:06:27.343294 containerd[1501]: time="2025-11-23T23:06:27.343210102Z" level=info msg="connecting to shim cb8c2b84c19fd83091902136f3ef5061252159221377eceb1df46205db67cff2" address="unix:///run/containerd/s/03b17e17bcfe73080ee263c76d04efa24fc9817482c9d9ad82417e7944863472" protocol=ttrpc version=3 Nov 23 23:06:27.365039 systemd[1]: Started cri-containerd-cb8c2b84c19fd83091902136f3ef5061252159221377eceb1df46205db67cff2.scope - libcontainer container cb8c2b84c19fd83091902136f3ef5061252159221377eceb1df46205db67cff2. Nov 23 23:06:27.395266 containerd[1501]: time="2025-11-23T23:06:27.395154424Z" level=info msg="StartContainer for \"cb8c2b84c19fd83091902136f3ef5061252159221377eceb1df46205db67cff2\" returns successfully" Nov 23 23:06:28.170200 kubelet[2643]: I1123 23:06:28.170126 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-l2lxn" podStartSLOduration=42.17005703 podStartE2EDuration="42.17005703s" podCreationTimestamp="2025-11-23 23:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:06:28.168637157 +0000 UTC m=+47.338903782" watchObservedRunningTime="2025-11-23 23:06:28.17005703 +0000 UTC m=+47.340323655" Nov 23 23:06:28.669209 systemd-networkd[1417]: cali9ea45eb10e6: Gained IPv6LL Nov 23 23:06:28.796957 systemd-networkd[1417]: vxlan.calico: Gained IPv6LL Nov 23 23:06:31.416620 systemd[1]: Started sshd@9-10.0.0.48:22-10.0.0.1:40870.service - OpenSSH per-connection server daemon (10.0.0.1:40870). Nov 23 23:06:31.486589 sshd[5044]: Accepted publickey for core from 10.0.0.1 port 40870 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:06:31.488402 sshd-session[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:31.493068 systemd-logind[1486]: New session 10 of user core. Nov 23 23:06:31.503957 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 23 23:06:31.656742 sshd[5047]: Connection closed by 10.0.0.1 port 40870 Nov 23 23:06:31.657155 sshd-session[5044]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:31.666486 systemd[1]: sshd@9-10.0.0.48:22-10.0.0.1:40870.service: Deactivated successfully. Nov 23 23:06:31.668984 systemd[1]: session-10.scope: Deactivated successfully. Nov 23 23:06:31.670944 systemd-logind[1486]: Session 10 logged out. Waiting for processes to exit. Nov 23 23:06:31.674564 systemd[1]: Started sshd@10-10.0.0.48:22-10.0.0.1:40874.service - OpenSSH per-connection server daemon (10.0.0.1:40874). Nov 23 23:06:31.675329 systemd-logind[1486]: Removed session 10. Nov 23 23:06:31.759912 sshd[5062]: Accepted publickey for core from 10.0.0.1 port 40874 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:06:31.762070 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:31.766840 systemd-logind[1486]: New session 11 of user core. Nov 23 23:06:31.777011 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 23 23:06:31.988529 sshd[5065]: Connection closed by 10.0.0.1 port 40874 Nov 23 23:06:31.989110 sshd-session[5062]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:32.002695 systemd[1]: sshd@10-10.0.0.48:22-10.0.0.1:40874.service: Deactivated successfully. Nov 23 23:06:32.013240 systemd[1]: session-11.scope: Deactivated successfully. Nov 23 23:06:32.016918 systemd-logind[1486]: Session 11 logged out. Waiting for processes to exit. Nov 23 23:06:32.020655 systemd[1]: Started sshd@11-10.0.0.48:22-10.0.0.1:40882.service - OpenSSH per-connection server daemon (10.0.0.1:40882). Nov 23 23:06:32.023712 systemd-logind[1486]: Removed session 11. Nov 23 23:06:32.073158 sshd[5078]: Accepted publickey for core from 10.0.0.1 port 40882 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:06:32.074562 sshd-session[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:32.080042 systemd-logind[1486]: New session 12 of user core. Nov 23 23:06:32.093975 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 23 23:06:32.243153 sshd[5081]: Connection closed by 10.0.0.1 port 40882 Nov 23 23:06:32.243444 sshd-session[5078]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:32.247565 systemd[1]: sshd@11-10.0.0.48:22-10.0.0.1:40882.service: Deactivated successfully. Nov 23 23:06:32.249953 systemd[1]: session-12.scope: Deactivated successfully. Nov 23 23:06:32.250981 systemd-logind[1486]: Session 12 logged out. Waiting for processes to exit. Nov 23 23:06:32.253083 systemd-logind[1486]: Removed session 12. 
Nov 23 23:06:32.938552 containerd[1501]: time="2025-11-23T23:06:32.938486781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:06:33.143739 containerd[1501]: time="2025-11-23T23:06:33.143646367Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:33.144764 containerd[1501]: time="2025-11-23T23:06:33.144665293Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:06:33.144764 containerd[1501]: time="2025-11-23T23:06:33.144704895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:06:33.145104 kubelet[2643]: E1123 23:06:33.144906 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:06:33.145104 kubelet[2643]: E1123 23:06:33.144966 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:06:33.145104 kubelet[2643]: E1123 23:06:33.145073 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:305450c028ea43fa89592e750d7f9694,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4cd9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c5fcd4d6-qdm8f_calico-system(827591c9-2ca8-4c08-9a39-7dd17d688b03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:33.149255 containerd[1501]: time="2025-11-23T23:06:33.148993289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:06:33.335134 containerd[1501]: time="2025-11-23T23:06:33.334994850Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:33.337473 containerd[1501]: time="2025-11-23T23:06:33.337420960Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:06:33.337568 containerd[1501]: time="2025-11-23T23:06:33.337489123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:06:33.338274 kubelet[2643]: E1123 23:06:33.337736 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:06:33.338274 kubelet[2643]: E1123 23:06:33.337809 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:06:33.338274 kubelet[2643]: E1123 23:06:33.337922 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4cd9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c5fcd4d6-qdm8f_calico-system(827591c9-2ca8-4c08-9a39-7dd17d688b03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:33.339459 kubelet[2643]: E1123 23:06:33.339412 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c5fcd4d6-qdm8f" podUID="827591c9-2ca8-4c08-9a39-7dd17d688b03" Nov 23 23:06:37.266650 systemd[1]: Started sshd@12-10.0.0.48:22-10.0.0.1:40898.service - OpenSSH per-connection server daemon (10.0.0.1:40898). Nov 23 23:06:37.329841 sshd[5106]: Accepted publickey for core from 10.0.0.1 port 40898 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:06:37.331230 sshd-session[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:37.337890 systemd-logind[1486]: New session 13 of user core. 
Nov 23 23:06:37.352211 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 23 23:06:37.505630 sshd[5115]: Connection closed by 10.0.0.1 port 40898 Nov 23 23:06:37.506604 sshd-session[5106]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:37.509921 systemd[1]: sshd@12-10.0.0.48:22-10.0.0.1:40898.service: Deactivated successfully. Nov 23 23:06:37.512108 systemd[1]: session-13.scope: Deactivated successfully. Nov 23 23:06:37.514120 systemd-logind[1486]: Session 13 logged out. Waiting for processes to exit. Nov 23 23:06:37.515582 systemd-logind[1486]: Removed session 13. Nov 23 23:06:39.951055 containerd[1501]: time="2025-11-23T23:06:39.950931954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:06:40.151228 containerd[1501]: time="2025-11-23T23:06:40.151169926Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:40.156186 containerd[1501]: time="2025-11-23T23:06:40.156117233Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:06:40.156266 containerd[1501]: time="2025-11-23T23:06:40.156206837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:06:40.156445 kubelet[2643]: E1123 23:06:40.156390 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:06:40.156445 kubelet[2643]: E1123 23:06:40.156442 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:06:40.156810 kubelet[2643]: E1123 23:06:40.156670 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhgc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69b55bbf5b-64w8f_calico-apiserver(119ea192-2966-40b8-aace-9d8b5df61791): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:40.156912 containerd[1501]: time="2025-11-23T23:06:40.156822300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:06:40.158393 kubelet[2643]: E1123 23:06:40.158074 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69b55bbf5b-64w8f" podUID="119ea192-2966-40b8-aace-9d8b5df61791" Nov 23 23:06:40.362480 containerd[1501]: time="2025-11-23T23:06:40.362423588Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:40.373738 containerd[1501]: time="2025-11-23T23:06:40.373677015Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:06:40.373891 containerd[1501]: time="2025-11-23T23:06:40.373816820Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:06:40.374026 kubelet[2643]: E1123 23:06:40.373985 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:06:40.374076 kubelet[2643]: E1123 23:06:40.374040 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:06:40.374651 containerd[1501]: time="2025-11-23T23:06:40.374457405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:06:40.374747 kubelet[2643]: E1123 23:06:40.374515 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d6mkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n2c6t_calico-system(727c5b39-08b3-46f8-a2e4-48219c6016f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:40.597516 containerd[1501]: time="2025-11-23T23:06:40.597437312Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:40.598725 containerd[1501]: time="2025-11-23T23:06:40.598571995Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:06:40.598725 containerd[1501]: time="2025-11-23T23:06:40.598639358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:06:40.598943 kubelet[2643]: E1123 23:06:40.598853 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:06:40.598943 kubelet[2643]: E1123 23:06:40.598922 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:06:40.599543 kubelet[2643]: E1123 23:06:40.599186 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bnv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jfgkh_calico-system(a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:40.600456 containerd[1501]: time="2025-11-23T23:06:40.600179736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:06:40.601343 kubelet[2643]: E1123 23:06:40.601267 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jfgkh" podUID="a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16" Nov 23 23:06:40.836176 containerd[1501]: time="2025-11-23T23:06:40.836115975Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:40.838577 containerd[1501]: time="2025-11-23T23:06:40.838514947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:06:40.838623 containerd[1501]: time="2025-11-23T23:06:40.838565748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:06:40.838811 kubelet[2643]: E1123 23:06:40.838737 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:06:40.838868 kubelet[2643]: E1123 23:06:40.838823 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:06:40.839358 containerd[1501]: time="2025-11-23T23:06:40.839137410Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:06:40.839430 kubelet[2643]: E1123 23:06:40.839385 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jnqf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69b55bbf5b-6sfqj_calico-apiserver(954fb416-d77f-4b69-ac56-30f7ed8932d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:40.840950 kubelet[2643]: E1123 23:06:40.840892 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69b55bbf5b-6sfqj" podUID="954fb416-d77f-4b69-ac56-30f7ed8932d5" Nov 23 23:06:41.055582 containerd[1501]: time="2025-11-23T23:06:41.055536097Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:41.058775 containerd[1501]: time="2025-11-23T23:06:41.058680213Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:06:41.059413 containerd[1501]: time="2025-11-23T23:06:41.058731215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:06:41.059637 kubelet[2643]: E1123 23:06:41.059574 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:06:41.059637 kubelet[2643]: E1123 23:06:41.059630 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:06:41.059852 kubelet[2643]: E1123 23:06:41.059770 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d6mkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n2c6t_calico-system(727c5b39-08b3-46f8-a2e4-48219c6016f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:41.060994 kubelet[2643]: E1123 23:06:41.060929 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2c6t" podUID="727c5b39-08b3-46f8-a2e4-48219c6016f9" Nov 23 23:06:41.938537 containerd[1501]: time="2025-11-23T23:06:41.938404842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:06:42.148908 containerd[1501]: time="2025-11-23T23:06:42.148805502Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:42.150039 containerd[1501]: time="2025-11-23T23:06:42.149942543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:06:42.150039 containerd[1501]: time="2025-11-23T23:06:42.149999866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:06:42.150357 kubelet[2643]: E1123 23:06:42.150303 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:06:42.150716 kubelet[2643]: E1123 23:06:42.150368 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:06:42.150716 kubelet[2643]: E1123 23:06:42.150510 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4987,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77f5b99886-phxmt_calico-system(acc2e189-3c10-4cac-9d7e-5131c7f8c476): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:42.151832 kubelet[2643]: E1123 23:06:42.151712 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77f5b99886-phxmt" podUID="acc2e189-3c10-4cac-9d7e-5131c7f8c476" Nov 23 23:06:42.527223 systemd[1]: Started sshd@13-10.0.0.48:22-10.0.0.1:58654.service - OpenSSH 
per-connection server daemon (10.0.0.1:58654). Nov 23 23:06:42.596699 sshd[5132]: Accepted publickey for core from 10.0.0.1 port 58654 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:06:42.598279 sshd-session[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:42.604806 systemd-logind[1486]: New session 14 of user core. Nov 23 23:06:42.623051 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 23 23:06:42.777420 sshd[5135]: Connection closed by 10.0.0.1 port 58654 Nov 23 23:06:42.778053 sshd-session[5132]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:42.782658 systemd[1]: sshd@13-10.0.0.48:22-10.0.0.1:58654.service: Deactivated successfully. Nov 23 23:06:42.784523 systemd[1]: session-14.scope: Deactivated successfully. Nov 23 23:06:42.786031 systemd-logind[1486]: Session 14 logged out. Waiting for processes to exit. Nov 23 23:06:42.787566 systemd-logind[1486]: Removed session 14. Nov 23 23:06:45.938645 kubelet[2643]: E1123 23:06:45.938565 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c5fcd4d6-qdm8f" podUID="827591c9-2ca8-4c08-9a39-7dd17d688b03" Nov 23 23:06:47.794373 systemd[1]: Started sshd@14-10.0.0.48:22-10.0.0.1:58656.service - OpenSSH per-connection server daemon (10.0.0.1:58656). Nov 23 23:06:47.874864 sshd[5160]: Accepted publickey for core from 10.0.0.1 port 58656 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:06:47.878354 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:47.883911 systemd-logind[1486]: New session 15 of user core. Nov 23 23:06:47.893976 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 23 23:06:48.056708 sshd[5163]: Connection closed by 10.0.0.1 port 58656 Nov 23 23:06:48.057349 sshd-session[5160]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:48.060914 systemd-logind[1486]: Session 15 logged out. Waiting for processes to exit. Nov 23 23:06:48.061227 systemd[1]: sshd@14-10.0.0.48:22-10.0.0.1:58656.service: Deactivated successfully. Nov 23 23:06:48.062946 systemd[1]: session-15.scope: Deactivated successfully. Nov 23 23:06:48.064578 systemd-logind[1486]: Removed session 15. 
Nov 23 23:06:51.936848 kubelet[2643]: E1123 23:06:51.936789 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69b55bbf5b-64w8f" podUID="119ea192-2966-40b8-aace-9d8b5df61791" Nov 23 23:06:51.937881 kubelet[2643]: E1123 23:06:51.937710 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2c6t" podUID="727c5b39-08b3-46f8-a2e4-48219c6016f9" Nov 23 23:06:53.072909 systemd[1]: Started sshd@15-10.0.0.48:22-10.0.0.1:33726.service - OpenSSH per-connection server daemon (10.0.0.1:33726). Nov 23 23:06:53.144049 sshd[5204]: Accepted publickey for core from 10.0.0.1 port 33726 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:06:53.145832 sshd-session[5204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:53.150676 systemd-logind[1486]: New session 16 of user core. Nov 23 23:06:53.166003 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 23 23:06:53.341017 sshd[5207]: Connection closed by 10.0.0.1 port 33726 Nov 23 23:06:53.342791 sshd-session[5204]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:53.349280 systemd[1]: sshd@15-10.0.0.48:22-10.0.0.1:33726.service: Deactivated successfully. Nov 23 23:06:53.353585 systemd[1]: session-16.scope: Deactivated successfully. Nov 23 23:06:53.354644 systemd-logind[1486]: Session 16 logged out. Waiting for processes to exit. Nov 23 23:06:53.359364 systemd[1]: Started sshd@16-10.0.0.48:22-10.0.0.1:33736.service - OpenSSH per-connection server daemon (10.0.0.1:33736). Nov 23 23:06:53.360203 systemd-logind[1486]: Removed session 16. Nov 23 23:06:53.420431 sshd[5222]: Accepted publickey for core from 10.0.0.1 port 33736 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:06:53.423038 sshd-session[5222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:53.429888 systemd-logind[1486]: New session 17 of user core. Nov 23 23:06:53.437944 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 23 23:06:53.681594 sshd[5225]: Connection closed by 10.0.0.1 port 33736 Nov 23 23:06:53.681152 sshd-session[5222]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:53.691036 systemd[1]: sshd@16-10.0.0.48:22-10.0.0.1:33736.service: Deactivated successfully. Nov 23 23:06:53.694399 systemd[1]: session-17.scope: Deactivated successfully. Nov 23 23:06:53.695947 systemd-logind[1486]: Session 17 logged out. Waiting for processes to exit. Nov 23 23:06:53.703310 systemd[1]: Started sshd@17-10.0.0.48:22-10.0.0.1:33744.service - OpenSSH per-connection server daemon (10.0.0.1:33744). Nov 23 23:06:53.703860 systemd-logind[1486]: Removed session 17. Nov 23 23:06:53.762338 sshd[5237]: Accepted publickey for core from 10.0.0.1 port 33744 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:06:53.763669 sshd-session[5237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:53.768649 systemd-logind[1486]: New session 18 of user core. Nov 23 23:06:53.777931 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 23 23:06:54.437238 sshd[5240]: Connection closed by 10.0.0.1 port 33744 Nov 23 23:06:54.437563 sshd-session[5237]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:54.454037 systemd[1]: sshd@17-10.0.0.48:22-10.0.0.1:33744.service: Deactivated successfully. Nov 23 23:06:54.457671 systemd[1]: session-18.scope: Deactivated successfully. Nov 23 23:06:54.460623 systemd-logind[1486]: Session 18 logged out. Waiting for processes to exit. Nov 23 23:06:54.466111 systemd[1]: Started sshd@18-10.0.0.48:22-10.0.0.1:33748.service - OpenSSH per-connection server daemon (10.0.0.1:33748). Nov 23 23:06:54.470883 systemd-logind[1486]: Removed session 18. Nov 23 23:06:54.528345 sshd[5258]: Accepted publickey for core from 10.0.0.1 port 33748 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:06:54.530178 sshd-session[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:54.534949 systemd-logind[1486]: New session 19 of user core. Nov 23 23:06:54.542051 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 23 23:06:54.921078 sshd[5264]: Connection closed by 10.0.0.1 port 33748 Nov 23 23:06:54.921555 sshd-session[5258]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:54.933265 systemd[1]: sshd@18-10.0.0.48:22-10.0.0.1:33748.service: Deactivated successfully. Nov 23 23:06:54.935688 systemd[1]: session-19.scope: Deactivated successfully. 
Nov 23 23:06:54.943176 kubelet[2643]: E1123 23:06:54.940902 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jfgkh" podUID="a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16" Nov 23 23:06:54.943176 kubelet[2643]: E1123 23:06:54.940987 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77f5b99886-phxmt" podUID="acc2e189-3c10-4cac-9d7e-5131c7f8c476" Nov 23 23:06:54.941287 systemd-logind[1486]: Session 19 logged out. Waiting for processes to exit. Nov 23 23:06:54.945924 kubelet[2643]: E1123 23:06:54.945855 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69b55bbf5b-6sfqj" podUID="954fb416-d77f-4b69-ac56-30f7ed8932d5" Nov 23 23:06:54.947572 systemd[1]: Started sshd@19-10.0.0.48:22-10.0.0.1:33760.service - OpenSSH per-connection server daemon (10.0.0.1:33760). Nov 23 23:06:54.949664 systemd-logind[1486]: Removed session 19. Nov 23 23:06:55.014121 sshd[5276]: Accepted publickey for core from 10.0.0.1 port 33760 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:06:55.015987 sshd-session[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:06:55.021803 systemd-logind[1486]: New session 20 of user core. Nov 23 23:06:55.025927 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 23 23:06:55.190894 sshd[5279]: Connection closed by 10.0.0.1 port 33760 Nov 23 23:06:55.190954 sshd-session[5276]: pam_unix(sshd:session): session closed for user core Nov 23 23:06:55.196340 systemd[1]: sshd@19-10.0.0.48:22-10.0.0.1:33760.service: Deactivated successfully. Nov 23 23:06:55.198272 systemd[1]: session-20.scope: Deactivated successfully. Nov 23 23:06:55.199293 systemd-logind[1486]: Session 20 logged out. Waiting for processes to exit. Nov 23 23:06:55.201131 systemd-logind[1486]: Removed session 20. 
Nov 23 23:06:58.937436 containerd[1501]: time="2025-11-23T23:06:58.937156490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:06:59.149852 containerd[1501]: time="2025-11-23T23:06:59.149696210Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:59.151098 containerd[1501]: time="2025-11-23T23:06:59.151055763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:06:59.151243 containerd[1501]: time="2025-11-23T23:06:59.151137805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:06:59.151322 kubelet[2643]: E1123 23:06:59.151277 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:06:59.152047 kubelet[2643]: E1123 23:06:59.151331 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:06:59.152047 kubelet[2643]: E1123 23:06:59.151478 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:305450c028ea43fa89592e750d7f9694,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4cd9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c5fcd4d6-qdm8f_calico-system(827591c9-2ca8-4c08-9a39-7dd17d688b03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:59.153481 containerd[1501]: time="2025-11-23T23:06:59.153380780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:06:59.363513 containerd[1501]: time="2025-11-23T23:06:59.363449607Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:06:59.364717 containerd[1501]: time="2025-11-23T23:06:59.364649556Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:06:59.364812 containerd[1501]: time="2025-11-23T23:06:59.364798040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:06:59.365041 kubelet[2643]: E1123 23:06:59.364997 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:06:59.365095 kubelet[2643]: E1123 23:06:59.365056 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:06:59.365216 kubelet[2643]: E1123 23:06:59.365163 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4cd9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84c5fcd4d6-qdm8f_calico-system(827591c9-2ca8-4c08-9a39-7dd17d688b03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:06:59.366597 kubelet[2643]: E1123 23:06:59.366536 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c5fcd4d6-qdm8f" podUID="827591c9-2ca8-4c08-9a39-7dd17d688b03" Nov 23 23:07:00.202703 systemd[1]: Started sshd@20-10.0.0.48:22-10.0.0.1:38286.service - OpenSSH per-connection server daemon (10.0.0.1:38286). Nov 23 23:07:00.255874 sshd[5295]: Accepted publickey for core from 10.0.0.1 port 38286 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:07:00.258530 sshd-session[5295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:07:00.262633 systemd-logind[1486]: New session 21 of user core. 
Nov 23 23:07:00.273004 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 23 23:07:00.420184 sshd[5298]: Connection closed by 10.0.0.1 port 38286 Nov 23 23:07:00.420740 sshd-session[5295]: pam_unix(sshd:session): session closed for user core Nov 23 23:07:00.425048 systemd[1]: sshd@20-10.0.0.48:22-10.0.0.1:38286.service: Deactivated successfully. Nov 23 23:07:00.428463 systemd[1]: session-21.scope: Deactivated successfully. Nov 23 23:07:00.429193 systemd-logind[1486]: Session 21 logged out. Waiting for processes to exit. Nov 23 23:07:00.430436 systemd-logind[1486]: Removed session 21. Nov 23 23:07:02.958527 containerd[1501]: time="2025-11-23T23:07:02.958218549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:07:03.157534 containerd[1501]: time="2025-11-23T23:07:03.157351941Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:07:03.158500 containerd[1501]: time="2025-11-23T23:07:03.158382285Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:07:03.158500 containerd[1501]: time="2025-11-23T23:07:03.158458526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:07:03.158842 kubelet[2643]: E1123 23:07:03.158793 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:07:03.159250 kubelet[2643]: E1123 23:07:03.158864 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:07:03.159250 kubelet[2643]: E1123 23:07:03.159092 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhgc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69b55bbf5b-64w8f_calico-apiserver(119ea192-2966-40b8-aace-9d8b5df61791): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:07:03.159360 containerd[1501]: time="2025-11-23T23:07:03.159205903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:07:03.160671 kubelet[2643]: E1123 23:07:03.160610 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69b55bbf5b-64w8f" podUID="119ea192-2966-40b8-aace-9d8b5df61791" Nov 23 23:07:03.375945 containerd[1501]: time="2025-11-23T23:07:03.375894951Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:07:03.377470 containerd[1501]: time="2025-11-23T23:07:03.377371545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:07:03.377470 containerd[1501]: time="2025-11-23T23:07:03.377473867Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:07:03.377941 kubelet[2643]: E1123 23:07:03.377887 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:07:03.378028 kubelet[2643]: E1123 23:07:03.377955 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:07:03.378865 kubelet[2643]: E1123 23:07:03.378802 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d6mkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n2c6t_calico-system(727c5b39-08b3-46f8-a2e4-48219c6016f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:07:03.381062 containerd[1501]: time="2025-11-23T23:07:03.381029387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:07:03.575204 containerd[1501]: time="2025-11-23T23:07:03.575104885Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:07:03.576232 containerd[1501]: time="2025-11-23T23:07:03.576192870Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:07:03.576351 containerd[1501]: time="2025-11-23T23:07:03.576257471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:07:03.576448 kubelet[2643]: E1123 23:07:03.576402 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:07:03.576494 kubelet[2643]: E1123 23:07:03.576463 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:07:03.576617 kubelet[2643]: E1123 23:07:03.576581 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d6mkm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-n2c6t_calico-system(727c5b39-08b3-46f8-a2e4-48219c6016f9): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:07:03.578057 kubelet[2643]: E1123 23:07:03.578018 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-n2c6t" podUID="727c5b39-08b3-46f8-a2e4-48219c6016f9" Nov 23 23:07:05.431459 systemd[1]: Started sshd@21-10.0.0.48:22-10.0.0.1:38294.service - OpenSSH per-connection server daemon (10.0.0.1:38294). Nov 23 23:07:05.522583 sshd[5312]: Accepted publickey for core from 10.0.0.1 port 38294 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:07:05.524005 sshd-session[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:07:05.528397 systemd-logind[1486]: New session 22 of user core. Nov 23 23:07:05.535949 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 23 23:07:05.666702 sshd[5315]: Connection closed by 10.0.0.1 port 38294 Nov 23 23:07:05.665668 sshd-session[5312]: pam_unix(sshd:session): session closed for user core Nov 23 23:07:05.668739 systemd[1]: sshd@21-10.0.0.48:22-10.0.0.1:38294.service: Deactivated successfully. Nov 23 23:07:05.671140 systemd[1]: session-22.scope: Deactivated successfully. Nov 23 23:07:05.675221 systemd-logind[1486]: Session 22 logged out. Waiting for processes to exit. Nov 23 23:07:05.676844 systemd-logind[1486]: Removed session 22. 
Nov 23 23:07:05.939599 containerd[1501]: time="2025-11-23T23:07:05.938820211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:07:06.149116 containerd[1501]: time="2025-11-23T23:07:06.149060866Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:07:06.150176 containerd[1501]: time="2025-11-23T23:07:06.150140969Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:07:06.150301 containerd[1501]: time="2025-11-23T23:07:06.150210211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:07:06.150774 kubelet[2643]: E1123 23:07:06.150450 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:07:06.151392 kubelet[2643]: E1123 23:07:06.151076 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:07:06.151603 kubelet[2643]: E1123 23:07:06.151369 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4987,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77f5b99886-phxmt_calico-system(acc2e189-3c10-4cac-9d7e-5131c7f8c476): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:07:06.152926 kubelet[2643]: E1123 23:07:06.152887 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77f5b99886-phxmt" podUID="acc2e189-3c10-4cac-9d7e-5131c7f8c476" Nov 23 23:07:07.937651 containerd[1501]: time="2025-11-23T23:07:07.937572501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:07:08.160608 containerd[1501]: time="2025-11-23T23:07:08.160524527Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:07:08.161819 containerd[1501]: time="2025-11-23T23:07:08.161685431Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:07:08.161819 containerd[1501]: time="2025-11-23T23:07:08.161769832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:07:08.161976 kubelet[2643]: E1123 23:07:08.161930 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:07:08.162268 kubelet[2643]: E1123 23:07:08.161989 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:07:08.162268 kubelet[2643]: E1123 23:07:08.162191 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bnv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jfgkh_calico-system(a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:07:08.162459 containerd[1501]: time="2025-11-23T23:07:08.162389325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:07:08.163656 kubelet[2643]: E1123 23:07:08.163580 2643 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jfgkh" podUID="a72dd4c7-49a5-4da9-ae9d-582c3d4cbc16" Nov 23 23:07:08.367875 containerd[1501]: time="2025-11-23T23:07:08.367828887Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:07:08.368809 containerd[1501]: time="2025-11-23T23:07:08.368768746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:07:08.368870 containerd[1501]: time="2025-11-23T23:07:08.368852868Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:07:08.369015 kubelet[2643]: E1123 23:07:08.368977 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:07:08.369015 kubelet[2643]: E1123 23:07:08.369030 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:07:08.369172 kubelet[2643]: E1123 23:07:08.369140 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jnqf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69b55bbf5b-6sfqj_calico-apiserver(954fb416-d77f-4b69-ac56-30f7ed8932d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:07:08.370776 kubelet[2643]: E1123 23:07:08.370723 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69b55bbf5b-6sfqj" podUID="954fb416-d77f-4b69-ac56-30f7ed8932d5" Nov 23 23:07:10.678644 systemd[1]: Started sshd@22-10.0.0.48:22-10.0.0.1:37362.service - OpenSSH per-connection server daemon (10.0.0.1:37362). Nov 23 23:07:10.740299 sshd[5338]: Accepted publickey for core from 10.0.0.1 port 37362 ssh2: RSA SHA256:yIy4UrzOMNNnnIqKwL8egez+/NjI/EpaMMlf9RYGR+A Nov 23 23:07:10.741675 sshd-session[5338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:07:10.745831 systemd-logind[1486]: New session 23 of user core. Nov 23 23:07:10.752976 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 23 23:07:10.888763 sshd[5341]: Connection closed by 10.0.0.1 port 37362 Nov 23 23:07:10.888247 sshd-session[5338]: pam_unix(sshd:session): session closed for user core Nov 23 23:07:10.891686 systemd[1]: sshd@22-10.0.0.48:22-10.0.0.1:37362.service: Deactivated successfully. Nov 23 23:07:10.893542 systemd[1]: session-23.scope: Deactivated successfully. Nov 23 23:07:10.894972 systemd-logind[1486]: Session 23 logged out. Waiting for processes to exit. Nov 23 23:07:10.896094 systemd-logind[1486]: Removed session 23. 
Nov 23 23:07:11.938383 kubelet[2643]: E1123 23:07:11.938296 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84c5fcd4d6-qdm8f" podUID="827591c9-2ca8-4c08-9a39-7dd17d688b03"