Dec 12 17:37:53.767670 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 12 17:37:53.767697 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025 Dec 12 17:37:53.767708 kernel: KASLR enabled Dec 12 17:37:53.767722 kernel: efi: EFI v2.7 by EDK II Dec 12 17:37:53.767728 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Dec 12 17:37:53.767734 kernel: random: crng init done Dec 12 17:37:53.767741 kernel: secureboot: Secure boot disabled Dec 12 17:37:53.767746 kernel: ACPI: Early table checksum verification disabled Dec 12 17:37:53.767752 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Dec 12 17:37:53.767759 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Dec 12 17:37:53.767766 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:37:53.767771 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:37:53.767777 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:37:53.767783 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:37:53.767790 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:37:53.767797 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:37:53.767803 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:37:53.767810 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:37:53.767816 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:37:53.767822 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Dec 12 17:37:53.767828 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 12 17:37:53.767834 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Dec 12 17:37:53.767840 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Dec 12 17:37:53.767846 kernel: Zone ranges: Dec 12 17:37:53.767852 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Dec 12 17:37:53.767859 kernel: DMA32 empty Dec 12 17:37:53.767865 kernel: Normal empty Dec 12 17:37:53.767871 kernel: Device empty Dec 12 17:37:53.767877 kernel: Movable zone start for each node Dec 12 17:37:53.767883 kernel: Early memory node ranges Dec 12 17:37:53.767889 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Dec 12 17:37:53.767895 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Dec 12 17:37:53.767901 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Dec 12 17:37:53.767907 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Dec 12 17:37:53.767913 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Dec 12 17:37:53.767919 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Dec 12 17:37:53.767925 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Dec 12 17:37:53.767932 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Dec 12 17:37:53.767938 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Dec 12 17:37:53.767944 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Dec 12 17:37:53.767953 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Dec 12 17:37:53.767959 
kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Dec 12 17:37:53.767966 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Dec 12 17:37:53.767973 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Dec 12 17:37:53.767980 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Dec 12 17:37:53.767986 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Dec 12 17:37:53.767992 kernel: psci: probing for conduit method from ACPI. Dec 12 17:37:53.767998 kernel: psci: PSCIv1.1 detected in firmware. Dec 12 17:37:53.768005 kernel: psci: Using standard PSCI v0.2 function IDs Dec 12 17:37:53.768011 kernel: psci: Trusted OS migration not required Dec 12 17:37:53.768017 kernel: psci: SMC Calling Convention v1.1 Dec 12 17:37:53.768024 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Dec 12 17:37:53.768030 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 12 17:37:53.768038 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 12 17:37:53.768044 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Dec 12 17:37:53.768051 kernel: Detected PIPT I-cache on CPU0 Dec 12 17:37:53.768058 kernel: CPU features: detected: GIC system register CPU interface Dec 12 17:37:53.768080 kernel: CPU features: detected: Spectre-v4 Dec 12 17:37:53.768087 kernel: CPU features: detected: Spectre-BHB Dec 12 17:37:53.768093 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 12 17:37:53.768100 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 12 17:37:53.768106 kernel: CPU features: detected: ARM erratum 1418040 Dec 12 17:37:53.768112 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 12 17:37:53.768119 kernel: alternatives: applying boot alternatives Dec 12 17:37:53.768127 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52 Dec 12 17:37:53.768136 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 12 17:37:53.768143 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 12 17:37:53.768150 kernel: Fallback order for Node 0: 0 Dec 12 17:37:53.768156 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Dec 12 17:37:53.768162 kernel: Policy zone: DMA Dec 12 17:37:53.768169 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 12 17:37:53.768175 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Dec 12 17:37:53.768181 kernel: software IO TLB: area num 4. Dec 12 17:37:53.768188 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Dec 12 17:37:53.768194 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Dec 12 17:37:53.768201 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 12 17:37:53.768209 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 12 17:37:53.768231 kernel: rcu: RCU event tracing is enabled. Dec 12 17:37:53.768237 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 12 17:37:53.768244 kernel: Trampoline variant of Tasks RCU enabled. Dec 12 17:37:53.768251 kernel: Tracing variant of Tasks RCU enabled. 
Dec 12 17:37:53.768257 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 12 17:37:53.768264 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 12 17:37:53.768270 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 12 17:37:53.768277 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 12 17:37:53.768284 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 12 17:37:53.768290 kernel: GICv3: 256 SPIs implemented Dec 12 17:37:53.768297 kernel: GICv3: 0 Extended SPIs implemented Dec 12 17:37:53.768304 kernel: Root IRQ handler: gic_handle_irq Dec 12 17:37:53.768310 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 12 17:37:53.768317 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Dec 12 17:37:53.768323 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Dec 12 17:37:53.768329 kernel: ITS [mem 0x08080000-0x0809ffff] Dec 12 17:37:53.768336 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Dec 12 17:37:53.768342 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Dec 12 17:37:53.768349 kernel: GICv3: using LPI property table @0x0000000040130000 Dec 12 17:37:53.768355 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Dec 12 17:37:53.768361 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 12 17:37:53.768368 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:37:53.768375 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 12 17:37:53.768382 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 12 17:37:53.768389 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 12 17:37:53.768395 kernel: arm-pv: using stolen time PV Dec 12 17:37:53.768402 kernel: Console: colour dummy device 80x25 Dec 12 17:37:53.768409 kernel: ACPI: Core revision 20240827 Dec 12 17:37:53.768415 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 12 17:37:53.768422 kernel: pid_max: default: 32768 minimum: 301 Dec 12 17:37:53.768429 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 12 17:37:53.768435 kernel: landlock: Up and running. Dec 12 17:37:53.768443 kernel: SELinux: Initializing. Dec 12 17:37:53.768449 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 12 17:37:53.768456 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 12 17:37:53.768463 kernel: rcu: Hierarchical SRCU implementation. Dec 12 17:37:53.768469 kernel: rcu: Max phase no-delay instances is 400. Dec 12 17:37:53.768476 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 12 17:37:53.768482 kernel: Remapping and enabling EFI services. Dec 12 17:37:53.768489 kernel: smp: Bringing up secondary CPUs ... 
Dec 12 17:37:53.768496 kernel: Detected PIPT I-cache on CPU1 Dec 12 17:37:53.768509 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Dec 12 17:37:53.768516 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Dec 12 17:37:53.768523 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:37:53.768531 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 12 17:37:53.768538 kernel: Detected PIPT I-cache on CPU2 Dec 12 17:37:53.768545 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Dec 12 17:37:53.768552 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Dec 12 17:37:53.768559 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:37:53.768567 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Dec 12 17:37:53.768574 kernel: Detected PIPT I-cache on CPU3 Dec 12 17:37:53.768581 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Dec 12 17:37:53.768588 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Dec 12 17:37:53.768595 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:37:53.768601 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Dec 12 17:37:53.768608 kernel: smp: Brought up 1 node, 4 CPUs Dec 12 17:37:53.768615 kernel: SMP: Total of 4 processors activated. Dec 12 17:37:53.768622 kernel: CPU: All CPU(s) started at EL1 Dec 12 17:37:53.768630 kernel: CPU features: detected: 32-bit EL0 Support Dec 12 17:37:53.768637 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 12 17:37:53.768643 kernel: CPU features: detected: Common not Private translations Dec 12 17:37:53.768650 kernel: CPU features: detected: CRC32 instructions Dec 12 17:37:53.768657 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 12 17:37:53.768664 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 12 17:37:53.768671 kernel: CPU features: detected: LSE atomic instructions Dec 12 17:37:53.768678 kernel: CPU features: detected: Privileged Access Never Dec 12 17:37:53.768685 kernel: CPU features: detected: RAS Extension Support Dec 12 17:37:53.768693 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 12 17:37:53.768700 kernel: alternatives: applying system-wide alternatives Dec 12 17:37:53.768706 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Dec 12 17:37:53.768733 kernel: Memory: 2423776K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 126176K reserved, 16384K cma-reserved) Dec 12 17:37:53.768742 kernel: devtmpfs: initialized Dec 12 17:37:53.768749 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 12 17:37:53.768756 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 12 17:37:53.768763 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 12 17:37:53.768770 kernel: 0 pages in range for non-PLT usage Dec 12 17:37:53.768779 kernel: 508400 pages in range for PLT usage Dec 12 17:37:53.768786 kernel: pinctrl core: initialized pinctrl subsystem Dec 12 17:37:53.768793 kernel: SMBIOS 3.0.0 present. 
Dec 12 17:37:53.768800 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Dec 12 17:37:53.768807 kernel: DMI: Memory slots populated: 1/1 Dec 12 17:37:53.768814 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 12 17:37:53.768821 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 12 17:37:53.768828 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 12 17:37:53.768835 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 12 17:37:53.768843 kernel: audit: initializing netlink subsys (disabled) Dec 12 17:37:53.768850 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1 Dec 12 17:37:53.768857 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 12 17:37:53.768864 kernel: cpuidle: using governor menu Dec 12 17:37:53.768871 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 12 17:37:53.768877 kernel: ASID allocator initialised with 32768 entries Dec 12 17:37:53.768884 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 12 17:37:53.768891 kernel: Serial: AMBA PL011 UART driver Dec 12 17:37:53.768898 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 12 17:37:53.768907 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 12 17:37:53.768914 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 12 17:37:53.768921 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 12 17:37:53.768928 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 12 17:37:53.768935 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 12 17:37:53.768942 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 12 17:37:53.768948 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 12 17:37:53.768955 kernel: ACPI: Added _OSI(Module Device) Dec 12 17:37:53.768962 kernel: ACPI: Added _OSI(Processor Device) Dec 12 17:37:53.768969 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 12 17:37:53.768978 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 12 17:37:53.768985 kernel: ACPI: Interpreter enabled Dec 12 17:37:53.768992 kernel: ACPI: Using GIC for interrupt routing Dec 12 17:37:53.768999 kernel: ACPI: MCFG table detected, 1 entries Dec 12 17:37:53.769006 kernel: ACPI: CPU0 has been hot-added Dec 12 17:37:53.769013 kernel: ACPI: CPU1 has been hot-added Dec 12 17:37:53.769020 kernel: ACPI: CPU2 has been hot-added Dec 12 17:37:53.769027 kernel: ACPI: CPU3 has been hot-added Dec 12 17:37:53.769034 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Dec 12 17:37:53.769042 kernel: printk: legacy console [ttyAMA0] enabled Dec 12 17:37:53.769050 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 12 17:37:53.769210 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 12 17:37:53.769280 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 12 17:37:53.769342 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 12 17:37:53.769402 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Dec 12 17:37:53.769462 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Dec 12 17:37:53.769474 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Dec 12 17:37:53.769481 
kernel: PCI host bridge to bus 0000:00 Dec 12 17:37:53.769550 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 12 17:37:53.769607 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 12 17:37:53.769662 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 12 17:37:53.769727 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 12 17:37:53.769809 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Dec 12 17:37:53.769893 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Dec 12 17:37:53.769964 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Dec 12 17:37:53.770033 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Dec 12 17:37:53.770176 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Dec 12 17:37:53.770249 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Dec 12 17:37:53.770318 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Dec 12 17:37:53.770385 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Dec 12 17:37:53.770444 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 12 17:37:53.770501 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 12 17:37:53.770558 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 12 17:37:53.770568 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 12 17:37:53.770575 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 12 17:37:53.770582 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 12 17:37:53.770590 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 12 17:37:53.770599 kernel: iommu: Default domain type: Translated Dec 12 17:37:53.770607 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 12 17:37:53.770614 kernel: efivars: Registered efivars operations Dec 12 17:37:53.770621 kernel: vgaarb: loaded Dec 12 17:37:53.770629 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 12 17:37:53.770636 kernel: VFS: Disk quotas dquot_6.6.0 Dec 12 17:37:53.770643 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 12 17:37:53.770650 kernel: pnp: PnP ACPI init Dec 12 17:37:53.770734 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 12 17:37:53.770748 kernel: pnp: PnP ACPI: found 1 devices Dec 12 17:37:53.770756 kernel: NET: Registered PF_INET protocol family Dec 12 17:37:53.770763 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 12 17:37:53.770771 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 12 17:37:53.770778 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 12 17:37:53.770785 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 12 17:37:53.770793 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 12 17:37:53.770800 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 12 17:37:53.770807 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 12 17:37:53.770816 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 12 17:37:53.770823 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 12 17:37:53.770831 kernel: PCI: CLS 0 bytes, default 64 Dec 12 17:37:53.770838 
kernel: kvm [1]: HYP mode not available Dec 12 17:37:53.770845 kernel: Initialise system trusted keyrings Dec 12 17:37:53.770852 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 12 17:37:53.770860 kernel: Key type asymmetric registered Dec 12 17:37:53.770867 kernel: Asymmetric key parser 'x509' registered Dec 12 17:37:53.770874 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 12 17:37:53.770883 kernel: io scheduler mq-deadline registered Dec 12 17:37:53.770890 kernel: io scheduler kyber registered Dec 12 17:37:53.770898 kernel: io scheduler bfq registered Dec 12 17:37:53.770905 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 12 17:37:53.770912 kernel: ACPI: button: Power Button [PWRB] Dec 12 17:37:53.770920 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 12 17:37:53.770984 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Dec 12 17:37:53.770994 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 12 17:37:53.771002 kernel: thunder_xcv, ver 1.0 Dec 12 17:37:53.771011 kernel: thunder_bgx, ver 1.0 Dec 12 17:37:53.771018 kernel: nicpf, ver 1.0 Dec 12 17:37:53.771025 kernel: nicvf, ver 1.0 Dec 12 17:37:53.771109 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 12 17:37:53.771174 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:37:53 UTC (1765561073) Dec 12 17:37:53.771184 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 12 17:37:53.771191 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 12 17:37:53.771198 kernel: watchdog: NMI not fully supported Dec 12 17:37:53.771208 kernel: watchdog: Hard watchdog permanently disabled Dec 12 17:37:53.771215 kernel: NET: Registered PF_INET6 protocol family Dec 12 17:37:53.771223 kernel: Segment Routing with IPv6 Dec 12 17:37:53.771230 kernel: In-situ OAM (IOAM) with IPv6 Dec 12 17:37:53.771237 kernel: NET: Registered PF_PACKET protocol family Dec 12 17:37:53.771244 kernel: Key type dns_resolver registered Dec 12 17:37:53.771251 kernel: registered taskstats version 1 Dec 12 17:37:53.771258 kernel: Loading compiled-in X.509 certificates Dec 12 17:37:53.771266 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a' Dec 12 17:37:53.771274 kernel: Demotion targets for Node 0: null Dec 12 17:37:53.771281 kernel: Key type .fscrypt registered Dec 12 17:37:53.771288 kernel: Key type fscrypt-provisioning registered Dec 12 17:37:53.771295 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 12 17:37:53.771303 kernel: ima: Allocated hash algorithm: sha1 Dec 12 17:37:53.771310 kernel: ima: No architecture policies found Dec 12 17:37:53.771317 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 12 17:37:53.771324 kernel: clk: Disabling unused clocks Dec 12 17:37:53.771331 kernel: PM: genpd: Disabling unused power domains Dec 12 17:37:53.771339 kernel: Warning: unable to open an initial console. Dec 12 17:37:53.771347 kernel: Freeing unused kernel memory: 39552K Dec 12 17:37:53.771354 kernel: Run /init as init process Dec 12 17:37:53.771361 kernel: with arguments: Dec 12 17:37:53.771368 kernel: /init Dec 12 17:37:53.771375 kernel: with environment: Dec 12 17:37:53.771381 kernel: HOME=/ Dec 12 17:37:53.771389 kernel: TERM=linux Dec 12 17:37:53.771397 systemd[1]: Successfully made /usr/ read-only. 
Dec 12 17:37:53.771408 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:37:53.771417 systemd[1]: Detected virtualization kvm. Dec 12 17:37:53.771424 systemd[1]: Detected architecture arm64. Dec 12 17:37:53.771432 systemd[1]: Running in initrd. Dec 12 17:37:53.771439 systemd[1]: No hostname configured, using default hostname. Dec 12 17:37:53.771447 systemd[1]: Hostname set to . Dec 12 17:37:53.771454 systemd[1]: Initializing machine ID from VM UUID. Dec 12 17:37:53.771463 systemd[1]: Queued start job for default target initrd.target. Dec 12 17:37:53.771471 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:37:53.771478 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:37:53.771486 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 12 17:37:53.771494 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:37:53.771502 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 12 17:37:53.771510 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 12 17:37:53.771520 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 12 17:37:53.771528 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 12 17:37:53.771536 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:37:53.771543 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:37:53.771551 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:37:53.771559 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:37:53.771566 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:37:53.771574 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:37:53.771583 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:37:53.771591 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:37:53.771599 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 17:37:53.771606 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 17:37:53.771614 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:37:53.771621 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:37:53.771629 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:37:53.771637 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:37:53.771644 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 17:37:53.771654 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:37:53.771662 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Dec 12 17:37:53.771670 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 17:37:53.771678 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 17:37:53.771685 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:37:53.771693 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:37:53.771701 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:37:53.771709 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 17:37:53.771728 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:37:53.771736 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 17:37:53.771744 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 12 17:37:53.771768 systemd-journald[244]: Collecting audit messages is disabled. Dec 12 17:37:53.771789 systemd-journald[244]: Journal started Dec 12 17:37:53.771807 systemd-journald[244]: Runtime Journal (/run/log/journal/ce131d43bc50401cb4710fed693bcfb3) is 6M, max 48.5M, 42.4M free. Dec 12 17:37:53.765014 systemd-modules-load[246]: Inserted module 'overlay' Dec 12 17:37:53.777380 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:37:53.780082 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 12 17:37:53.782233 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:37:53.782874 systemd-modules-load[246]: Inserted module 'br_netfilter' Dec 12 17:37:53.783815 kernel: Bridge firewalling registered Dec 12 17:37:53.783139 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:37:53.786306 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:37:53.789743 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 17:37:53.791512 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:37:53.793605 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:37:53.802742 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:37:53.809130 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:37:53.810989 systemd-tmpfiles[270]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 17:37:53.811573 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:37:53.813959 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:37:53.817536 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:37:53.831289 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:37:53.833558 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Dec 12 17:37:53.857026 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52 Dec 12 17:37:53.859651 systemd-resolved[281]: Positive Trust Anchors: Dec 12 17:37:53.859661 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:37:53.859691 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:37:53.864847 systemd-resolved[281]: Defaulting to hostname 'linux'. Dec 12 17:37:53.865894 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:37:53.870444 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:37:53.932098 kernel: SCSI subsystem initialized Dec 12 17:37:53.936103 kernel: Loading iSCSI transport class v2.0-870. Dec 12 17:37:53.944101 kernel: iscsi: registered transport (tcp) Dec 12 17:37:53.957087 kernel: iscsi: registered transport (qla4xxx) Dec 12 17:37:53.957106 kernel: QLogic iSCSI HBA Driver Dec 12 17:37:53.974547 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:37:53.992119 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:37:53.993547 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:37:54.040791 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 17:37:54.043127 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 12 17:37:54.108105 kernel: raid6: neonx8 gen() 15562 MB/s Dec 12 17:37:54.125098 kernel: raid6: neonx4 gen() 15612 MB/s Dec 12 17:37:54.142097 kernel: raid6: neonx2 gen() 13198 MB/s Dec 12 17:37:54.159093 kernel: raid6: neonx1 gen() 10334 MB/s Dec 12 17:37:54.176094 kernel: raid6: int64x8 gen() 6818 MB/s Dec 12 17:37:54.193102 kernel: raid6: int64x4 gen() 7123 MB/s Dec 12 17:37:54.210100 kernel: raid6: int64x2 gen() 6042 MB/s Dec 12 17:37:54.227114 kernel: raid6: int64x1 gen() 5008 MB/s Dec 12 17:37:54.227172 kernel: raid6: using algorithm neonx4 gen() 15612 MB/s Dec 12 17:37:54.244105 kernel: raid6: .... xor() 12172 MB/s, rmw enabled Dec 12 17:37:54.244144 kernel: raid6: using neon recovery algorithm Dec 12 17:37:54.250336 kernel: xor: measuring software checksum speed Dec 12 17:37:54.250370 kernel: 8regs : 19660 MB/sec Dec 12 17:37:54.251093 kernel: 32regs : 21681 MB/sec Dec 12 17:37:54.251116 kernel: arm64_neon : 24455 MB/sec Dec 12 17:37:54.252132 kernel: xor: using function: arm64_neon (24455 MB/sec) Dec 12 17:37:54.305110 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 17:37:54.311569 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Dec 12 17:37:54.314749 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:37:54.341384 systemd-udevd[500]: Using default interface naming scheme 'v255'. Dec 12 17:37:54.345588 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:37:54.347452 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 17:37:54.376517 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation Dec 12 17:37:54.401537 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:37:54.403878 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:37:54.456671 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:37:54.460683 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 17:37:54.525664 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Dec 12 17:37:54.526463 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:37:54.529337 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 12 17:37:54.526574 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:37:54.531454 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:37:54.535644 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 17:37:54.535680 kernel: GPT:9289727 != 19775487 Dec 12 17:37:54.535690 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 17:37:54.535699 kernel: GPT:9289727 != 19775487 Dec 12 17:37:54.536463 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:37:54.539006 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 17:37:54.539248 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 17:37:54.570914 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 12 17:37:54.572267 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 17:37:54.574767 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:37:54.583936 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 12 17:37:54.591880 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 12 17:37:54.593008 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 12 17:37:54.601675 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:37:54.607443 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:37:54.608586 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:37:54.610573 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:37:54.613112 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 17:37:54.614837 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 17:37:54.630574 disk-uuid[590]: Primary Header is updated. Dec 12 17:37:54.630574 disk-uuid[590]: Secondary Entries is updated. Dec 12 17:37:54.630574 disk-uuid[590]: Secondary Header is updated. 
Dec 12 17:37:54.635125 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 17:37:54.638953 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:37:55.648143 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 17:37:55.648831 disk-uuid[596]: The operation has completed successfully. Dec 12 17:37:55.675311 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 17:37:55.676789 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 17:37:55.709465 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 12 17:37:55.738439 sh[611]: Success Dec 12 17:37:55.751597 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 12 17:37:55.751652 kernel: device-mapper: uevent: version 1.0.3 Dec 12 17:37:55.752750 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 17:37:55.761124 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 12 17:37:55.788338 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:37:55.791565 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 12 17:37:55.808393 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 12 17:37:55.814086 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (623) Dec 12 17:37:55.814137 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248 Dec 12 17:37:55.815094 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:37:55.820507 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 17:37:55.820555 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 17:37:55.821643 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 12 17:37:55.823089 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:37:55.824288 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 17:37:55.825213 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 17:37:55.828855 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 17:37:55.852794 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (652) Dec 12 17:37:55.852846 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:37:55.852862 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:37:55.857097 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:37:55.857143 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:37:55.862080 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:37:55.862655 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 17:37:55.865185 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 17:37:55.944680 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:37:55.948769 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Dec 12 17:37:55.972794 ignition[695]: Ignition 2.22.0 Dec 12 17:37:55.972806 ignition[695]: Stage: fetch-offline Dec 12 17:37:55.972836 ignition[695]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:37:55.972844 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:37:55.972931 ignition[695]: parsed url from cmdline: "" Dec 12 17:37:55.972934 ignition[695]: no config URL provided Dec 12 17:37:55.972938 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 17:37:55.972945 ignition[695]: no config at "/usr/lib/ignition/user.ign" Dec 12 17:37:55.972969 ignition[695]: op(1): [started] loading QEMU firmware config module Dec 12 17:37:55.972975 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 12 17:37:55.983142 ignition[695]: op(1): [finished] loading QEMU firmware config module Dec 12 17:37:55.997169 systemd-networkd[803]: lo: Link UP Dec 12 17:37:55.997181 systemd-networkd[803]: lo: Gained carrier Dec 12 17:37:55.997931 systemd-networkd[803]: Enumeration completed Dec 12 17:37:55.998290 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:37:55.998378 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:37:55.998382 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:37:55.999436 systemd-networkd[803]: eth0: Link UP Dec 12 17:37:55.999519 systemd-networkd[803]: eth0: Gained carrier Dec 12 17:37:55.999527 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:37:56.001256 systemd[1]: Reached target network.target - Network. Dec 12 17:37:56.010139 systemd-networkd[803]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 17:37:56.037819 ignition[695]: parsing config with SHA512: 255cffc604e382fa7e86a50878d43c2b4052f2af6cf44a73d5967335b15a587eb6a9a03ab193977a63edd21f5cf0bcc0cb7b0dbfa0c6322b414c6b7873ac990b Dec 12 17:37:56.044848 unknown[695]: fetched base config from "system" Dec 12 17:37:56.044869 unknown[695]: fetched user config from "qemu" Dec 12 17:37:56.045396 ignition[695]: fetch-offline: fetch-offline passed Dec 12 17:37:56.045587 ignition[695]: Ignition finished successfully Dec 12 17:37:56.047520 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:37:56.049175 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 12 17:37:56.050094 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 17:37:56.088265 ignition[810]: Ignition 2.22.0 Dec 12 17:37:56.088280 ignition[810]: Stage: kargs Dec 12 17:37:56.088421 ignition[810]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:37:56.088431 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:37:56.089243 ignition[810]: kargs: kargs passed Dec 12 17:37:56.092781 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 17:37:56.089293 ignition[810]: Ignition finished successfully Dec 12 17:37:56.095138 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 12 17:37:56.138333 ignition[818]: Ignition 2.22.0 Dec 12 17:37:56.138347 ignition[818]: Stage: disks Dec 12 17:37:56.138503 ignition[818]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:37:56.138512 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:37:56.139295 ignition[818]: disks: disks passed Dec 12 17:37:56.141472 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 17:37:56.139341 ignition[818]: Ignition finished successfully Dec 12 17:37:56.143301 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 12 17:37:56.144454 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 17:37:56.146097 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:37:56.147475 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:37:56.149103 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:37:56.151904 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 17:37:56.189270 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks Dec 12 17:37:56.194900 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 17:37:56.198547 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 17:37:56.270109 kernel: EXT4-fs (vda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none. Dec 12 17:37:56.270247 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 17:37:56.271370 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 17:37:56.274157 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:37:56.276410 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 17:37:56.278096 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 12 17:37:56.279165 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 17:37:56.279191 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:37:56.288857 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 17:37:56.291220 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 17:37:56.295089 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (836) Dec 12 17:37:56.295135 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:37:56.296812 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:37:56.300328 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:37:56.300386 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:37:56.301540 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 17:37:56.332363 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 17:37:56.336394 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory Dec 12 17:37:56.340040 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 17:37:56.344392 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 17:37:56.416907 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Dec 12 17:37:56.418850 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 17:37:56.420338 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 17:37:56.443104 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:37:56.456197 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 17:37:56.472318 ignition[949]: INFO : Ignition 2.22.0 Dec 12 17:37:56.472318 ignition[949]: INFO : Stage: mount Dec 12 17:37:56.473734 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:37:56.473734 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:37:56.473734 ignition[949]: INFO : mount: mount passed Dec 12 17:37:56.473734 ignition[949]: INFO : Ignition finished successfully Dec 12 17:37:56.475606 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 17:37:56.477757 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 17:37:56.813047 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 17:37:56.814530 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:37:56.847082 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (963) Dec 12 17:37:56.849509 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f Dec 12 17:37:56.849530 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:37:56.853591 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:37:56.853633 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:37:56.855177 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 12 17:37:56.889500 ignition[980]: INFO : Ignition 2.22.0 Dec 12 17:37:56.889500 ignition[980]: INFO : Stage: files Dec 12 17:37:56.891096 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:37:56.891096 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:37:56.891096 ignition[980]: DEBUG : files: compiled without relabeling support, skipping Dec 12 17:37:56.894156 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 17:37:56.894156 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 17:37:56.894156 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 17:37:56.894156 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 17:37:56.894156 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 17:37:56.893378 unknown[980]: wrote ssh authorized keys file for user: core Dec 12 17:37:56.900716 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 12 17:37:56.900716 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 12 17:37:56.941751 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 17:37:57.107475 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 12 17:37:57.107475 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/home/core/install.sh" Dec 12 17:37:57.110846 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 17:37:57.110846 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:37:57.110846 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:37:57.110846 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:37:57.110846 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:37:57.110846 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:37:57.110846 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:37:57.121569 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:37:57.121569 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:37:57.121569 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:37:57.121569 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:37:57.121569 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:37:57.121569 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Dec 12 17:37:57.370239 systemd-networkd[803]: eth0: Gained IPv6LL Dec 12 17:37:57.465803 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 17:37:57.736529 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Dec 12 17:37:57.736529 ignition[980]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 17:37:57.741931 ignition[980]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:37:57.745614 ignition[980]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:37:57.745614 ignition[980]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 17:37:57.745614 ignition[980]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 12 17:37:57.745614 ignition[980]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 17:37:57.745614 ignition[980]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 17:37:57.745614 ignition[980]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 12 17:37:57.745614 ignition[980]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 12 17:37:57.761643 ignition[980]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 17:37:57.763657 ignition[980]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 17:37:57.763657 ignition[980]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 12 17:37:57.763657 ignition[980]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 12 17:37:57.763657 ignition[980]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 17:37:57.763657 ignition[980]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:37:57.763657 ignition[980]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:37:57.763657 ignition[980]: INFO : files: files passed Dec 12 17:37:57.763657 ignition[980]: INFO : Ignition finished successfully Dec 12 17:37:57.764997 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 17:37:57.767414 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 17:37:57.769767 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 17:37:57.782105 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 17:37:57.783105 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 17:37:57.787649 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory Dec 12 17:37:57.790418 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:37:57.790418 initrd-setup-root-after-ignition[1012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:37:57.794215 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:37:57.794315 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:37:57.795753 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 17:37:57.797511 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 17:37:57.832957 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 17:37:57.833096 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 17:37:57.834824 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 17:37:57.836987 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 17:37:57.839440 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 17:37:57.840503 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 17:37:57.863205 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Dec 12 17:37:57.866921 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 17:37:57.902281 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:37:57.903348 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:37:57.905332 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 17:37:57.906900 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 17:37:57.907041 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:37:57.909755 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 17:37:57.911745 systemd[1]: Stopped target basic.target - Basic System. Dec 12 17:37:57.913512 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 17:37:57.915284 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:37:57.917122 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 17:37:57.918967 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:37:57.920950 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 17:37:57.922671 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:37:57.924668 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 17:37:57.926528 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 17:37:57.928127 systemd[1]: Stopped target swap.target - Swaps. Dec 12 17:37:57.929826 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 17:37:57.929962 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:37:57.932591 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:37:57.934475 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:37:57.936394 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 17:37:57.936506 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:37:57.938270 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 17:37:57.938399 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 17:37:57.941143 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 17:37:57.941270 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:37:57.944941 systemd[1]: Stopped target paths.target - Path Units. Dec 12 17:37:57.946395 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 17:37:57.947184 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:37:57.948295 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 17:37:57.950952 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 17:37:57.953672 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 17:37:57.953767 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:37:57.955340 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 17:37:57.955415 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:37:57.957371 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Dec 12 17:37:57.957490 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:37:57.959111 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 17:37:57.959221 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 17:37:57.961752 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 17:37:57.963236 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 17:37:57.963360 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:37:57.965786 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 17:37:57.967267 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 17:37:57.967392 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:37:57.969325 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 17:37:57.969423 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:37:57.974528 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 17:37:57.984289 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 17:37:57.993437 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 17:37:58.009285 ignition[1036]: INFO : Ignition 2.22.0 Dec 12 17:37:58.009285 ignition[1036]: INFO : Stage: umount Dec 12 17:37:58.011044 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:37:58.011044 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:37:58.013978 ignition[1036]: INFO : umount: umount passed Dec 12 17:37:58.013978 ignition[1036]: INFO : Ignition finished successfully Dec 12 17:37:58.013812 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 17:37:58.013919 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 17:37:58.015093 systemd[1]: Stopped target network.target - Network. Dec 12 17:37:58.016323 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 17:37:58.016383 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 17:37:58.017893 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 17:37:58.017935 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 17:37:58.019502 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 17:37:58.019549 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 17:37:58.020976 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 17:37:58.021014 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 17:37:58.022683 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 17:37:58.024293 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 17:37:58.032718 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 17:37:58.032879 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 17:37:58.042013 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 12 17:37:58.042414 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 17:37:58.042515 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 17:37:58.049670 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. 
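
The umount stage above ends with "Ignition finished successfully", and the files stage earlier recorded its outcome in /sysroot/etc/.ignition-result.json, which becomes /etc/.ignition-result.json after switch-root. Assuming jq is installed, the per-stage result can be inspected with:

    # Pretty-print the Ignition result file written during boot.
    jq . /etc/.ignition-result.json
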
Dec 12 17:37:58.049965 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 17:37:58.051107 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 17:37:58.053670 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 17:37:58.054835 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 17:37:58.054876 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:37:58.056410 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 17:37:58.056467 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 17:37:58.058784 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 17:37:58.059566 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 17:37:58.059623 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:37:58.061388 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:37:58.061431 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:37:58.063885 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 17:37:58.063929 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 17:37:58.065790 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 17:37:58.065834 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:37:58.068391 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:37:58.072522 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 12 17:37:58.072590 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 12 17:37:58.087901 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 17:37:58.088026 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 17:37:58.091773 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 17:37:58.092646 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:37:58.093948 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 17:37:58.093994 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 17:37:58.095668 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 17:37:58.095709 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:37:58.097383 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 17:37:58.097440 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:37:58.099712 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 17:37:58.099761 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 17:37:58.102112 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 17:37:58.102176 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:37:58.105521 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 17:37:58.106445 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 17:37:58.106515 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Dec 12 17:37:58.109319 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 17:37:58.109365 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:37:58.111987 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:37:58.112029 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:37:58.116040 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 12 17:37:58.116113 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 12 17:37:58.116147 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 17:37:58.122543 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 17:37:58.122641 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 17:37:58.124739 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 17:37:58.127199 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 17:37:58.160911 systemd[1]: Switching root. Dec 12 17:37:58.206437 systemd-journald[244]: Journal stopped Dec 12 17:37:59.015115 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Dec 12 17:37:59.015181 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 17:37:59.015197 kernel: SELinux: policy capability open_perms=1 Dec 12 17:37:59.015211 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 17:37:59.015222 kernel: SELinux: policy capability always_check_network=0 Dec 12 17:37:59.015233 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 17:37:59.015243 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 17:37:59.015252 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 17:37:59.015261 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 17:37:59.015271 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 17:37:59.015284 kernel: audit: type=1403 audit(1765561078.426:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 12 17:37:59.015300 systemd[1]: Successfully loaded SELinux policy in 67.132ms. Dec 12 17:37:59.015317 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.649ms. Dec 12 17:37:59.015329 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:37:59.015341 systemd[1]: Detected virtualization kvm. Dec 12 17:37:59.015351 systemd[1]: Detected architecture arm64. Dec 12 17:37:59.015361 systemd[1]: Detected first boot. Dec 12 17:37:59.015372 systemd[1]: Initializing machine ID from VM UUID. Dec 12 17:37:59.015383 zram_generator::config[1086]: No configuration found. Dec 12 17:37:59.015396 kernel: NET: Registered PF_VSOCK protocol family Dec 12 17:37:59.015406 systemd[1]: Populated /etc with preset unit settings. Dec 12 17:37:59.015421 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 12 17:37:59.015431 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 17:37:59.015441 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
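
Here the initramfs hands off: journald receives SIGTERM from PID 1, the root is switched, the SELinux policy loads in 67.132ms, and systemd 256.8 detects a first boot on kvm/arm64, seeding the machine ID from the VM UUID. A few standard checks corresponding to these lines (generic systemd/SELinux tooling, nothing Flatcar-specific assumed):

    # Current SELinux enforcement mode for the policy loaded above.
    getenforce
    # Machine ID initialized from the VM UUID on this first boot.
    cat /etc/machine-id
    # Virtualization type, matching "Detected virtualization kvm".
    systemd-detect-virt
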
Dec 12 17:37:59.015451 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 17:37:59.015462 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 17:37:59.015473 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 17:37:59.015484 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 17:37:59.015496 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 17:37:59.015507 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 17:37:59.015517 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 17:37:59.015528 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 17:37:59.015539 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 17:37:59.015549 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:37:59.015563 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:37:59.015574 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 17:37:59.015584 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 17:37:59.015597 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 17:37:59.015608 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:37:59.015619 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 12 17:37:59.015630 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:37:59.015641 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:37:59.015652 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 17:37:59.015664 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 17:37:59.015675 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 17:37:59.015685 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 17:37:59.015707 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:37:59.015720 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:37:59.015731 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:37:59.015741 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:37:59.015751 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 17:37:59.015762 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 17:37:59.015772 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 17:37:59.015784 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:37:59.015794 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:37:59.015804 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:37:59.015817 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 17:37:59.015828 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
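
The run of "Created slice" and "Set up automount" messages above is systemd laying out its standard unit topology from the presets just populated in /etc. On the booted system, the same slices and automount points (boot.automount, proc-sys-fs-binfmt_misc.automount) are visible with:

    # List the slice and automount units created in the log above.
    systemctl list-units --type=slice
    systemctl list-units --type=automount
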
Dec 12 17:37:59.015839 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 17:37:59.015849 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 17:37:59.015860 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 17:37:59.015870 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 17:37:59.015880 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 17:37:59.015892 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 17:37:59.015903 systemd[1]: Reached target machines.target - Containers. Dec 12 17:37:59.015913 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 17:37:59.015923 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:37:59.015934 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:37:59.015944 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 17:37:59.015954 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:37:59.015965 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:37:59.015977 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:37:59.015987 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:37:59.015997 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:37:59.016008 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 17:37:59.016018 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 17:37:59.016029 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 17:37:59.016039 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 17:37:59.016050 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 17:37:59.016716 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:37:59.016768 kernel: loop: module loaded Dec 12 17:37:59.016783 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:37:59.016794 kernel: fuse: init (API version 7.41) Dec 12 17:37:59.016804 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:37:59.016815 kernel: ACPI: bus type drm_connector registered Dec 12 17:37:59.016825 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:37:59.016836 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 17:37:59.016847 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 17:37:59.016863 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:37:59.016874 systemd[1]: verity-setup.service: Deactivated successfully. Dec 12 17:37:59.016885 systemd[1]: Stopped verity-setup.service. 
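
The modprobe@*.service entries above are instances of one template unit, each loading a single kernel module (configfs, dm_mod, drm, efi_pstore, fuse, loop); the interleaved kernel lines ("loop: module loaded", "fuse: init (API version 7.41)") show those modules initializing. The template and the loaded modules can be inspected with:

    # Show the template unit behind modprobe@configfs.service and friends.
    systemctl cat modprobe@.service
    # Confirm a module from the log; output is empty if it is built into the kernel.
    lsmod | grep -w loop
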
Dec 12 17:37:59.016896 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 17:37:59.016942 systemd-journald[1143]: Collecting audit messages is disabled. Dec 12 17:37:59.016969 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 17:37:59.016980 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 17:37:59.016992 systemd-journald[1143]: Journal started Dec 12 17:37:59.017026 systemd-journald[1143]: Runtime Journal (/run/log/journal/ce131d43bc50401cb4710fed693bcfb3) is 6M, max 48.5M, 42.4M free. Dec 12 17:37:58.809191 systemd[1]: Queued start job for default target multi-user.target. Dec 12 17:37:58.834179 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 17:37:58.834580 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 17:37:59.020452 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:37:59.021323 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 17:37:59.022405 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 17:37:59.023682 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 17:37:59.025883 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 17:37:59.027276 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:37:59.028531 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:37:59.028708 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 17:37:59.029978 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:37:59.030261 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:37:59.032520 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:37:59.032748 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:37:59.033904 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:37:59.034059 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:37:59.035310 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:37:59.035470 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:37:59.036783 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:37:59.036947 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:37:59.038334 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:37:59.039616 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:37:59.041020 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 17:37:59.042526 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 17:37:59.054411 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:37:59.056560 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 17:37:59.058510 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 17:37:59.059508 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 17:37:59.059543 systemd[1]: Reached target local-fs.target - Local File Systems. 
Dec 12 17:37:59.061296 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 17:37:59.067970 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 17:37:59.070316 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:37:59.071249 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 17:37:59.073994 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 17:37:59.075217 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:37:59.078236 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 17:37:59.079308 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:37:59.082225 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:37:59.082748 systemd-journald[1143]: Time spent on flushing to /var/log/journal/ce131d43bc50401cb4710fed693bcfb3 is 13.769ms for 884 entries. Dec 12 17:37:59.082748 systemd-journald[1143]: System Journal (/var/log/journal/ce131d43bc50401cb4710fed693bcfb3) is 8M, max 195.6M, 187.6M free. Dec 12 17:37:59.102271 systemd-journald[1143]: Received client request to flush runtime journal. Dec 12 17:37:59.097058 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 17:37:59.099155 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 17:37:59.103762 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:37:59.108128 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 17:37:59.109621 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 17:37:59.113263 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 17:37:59.114863 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 17:37:59.117521 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:37:59.120370 kernel: loop0: detected capacity change from 0 to 211168 Dec 12 17:37:59.122760 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 17:37:59.126395 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 17:37:59.133086 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 17:37:59.145258 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 17:37:59.147557 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:37:59.156097 kernel: loop1: detected capacity change from 0 to 100632 Dec 12 17:37:59.161220 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 17:37:59.177128 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Dec 12 17:37:59.177148 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Dec 12 17:37:59.180508 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
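
Per the journald lines above, the runtime journal in /run/log/journal sits at 6M of a 48.5M cap, the system journal in /var/log/journal at 8M of 195.6M, and systemd-journal-flush.service has asked journald to move runtime entries to persistent storage. The same information and action are available from the command line:

    # Disk usage of all journal files, matching the sizes logged above.
    journalctl --disk-usage
    # Flush the runtime journal to /var/log/journal, as the flush service did.
    journalctl --flush
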
Dec 12 17:37:59.199088 kernel: loop2: detected capacity change from 0 to 119840 Dec 12 17:37:59.236102 kernel: loop3: detected capacity change from 0 to 211168 Dec 12 17:37:59.242090 kernel: loop4: detected capacity change from 0 to 100632 Dec 12 17:37:59.247082 kernel: loop5: detected capacity change from 0 to 119840 Dec 12 17:37:59.253882 (sd-merge)[1221]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 12 17:37:59.254262 (sd-merge)[1221]: Merged extensions into '/usr'. Dec 12 17:37:59.259889 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 17:37:59.259906 systemd[1]: Reloading... Dec 12 17:37:59.322122 zram_generator::config[1244]: No configuration found. Dec 12 17:37:59.377628 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 17:37:59.458841 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 17:37:59.459282 systemd[1]: Reloading finished in 199 ms. Dec 12 17:37:59.478757 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 17:37:59.480809 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 17:37:59.492274 systemd[1]: Starting ensure-sysext.service... Dec 12 17:37:59.493945 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:37:59.503122 systemd[1]: Reload requested from client PID 1281 ('systemctl') (unit ensure-sysext.service)... Dec 12 17:37:59.503141 systemd[1]: Reloading... Dec 12 17:37:59.507240 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 17:37:59.507270 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 17:37:59.507508 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 17:37:59.507710 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 12 17:37:59.508364 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 12 17:37:59.508565 systemd-tmpfiles[1282]: ACLs are not supported, ignoring. Dec 12 17:37:59.508612 systemd-tmpfiles[1282]: ACLs are not supported, ignoring. Dec 12 17:37:59.511513 systemd-tmpfiles[1282]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:37:59.511526 systemd-tmpfiles[1282]: Skipping /boot Dec 12 17:37:59.517920 systemd-tmpfiles[1282]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:37:59.517935 systemd-tmpfiles[1282]: Skipping /boot Dec 12 17:37:59.539091 zram_generator::config[1312]: No configuration found. Dec 12 17:37:59.669675 systemd[1]: Reloading finished in 166 ms. Dec 12 17:37:59.678528 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 17:37:59.684051 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:37:59.696895 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:37:59.699198 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 17:37:59.701205 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
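
Here sd-merge attaches the extension images (the loop0-loop5 capacity changes above) and merges the containerd-flatcar, docker-flatcar and kubernetes sysexts into /usr, after which systemd reloads and ldconfig rebuilds its cache. The merged state can be examined with:

    # Hierarchy and extension merge status for /usr and /opt.
    systemd-sysext status
    # Raw extension images systemd-sysext discovered.
    systemd-sysext list
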
Dec 12 17:37:59.704016 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:37:59.706655 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:37:59.710814 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 17:37:59.715909 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 17:37:59.720096 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:37:59.722347 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:37:59.726336 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:37:59.729297 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:37:59.731281 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:37:59.731398 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:37:59.732391 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 17:37:59.735509 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:37:59.735702 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:37:59.742145 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:37:59.745402 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:37:59.747020 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:37:59.747194 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:37:59.749385 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 17:37:59.752542 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:37:59.752747 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:37:59.755864 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:37:59.756073 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:37:59.756707 systemd-udevd[1349]: Using default interface naming scheme 'v255'. Dec 12 17:37:59.759105 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 17:37:59.760926 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:37:59.761076 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:37:59.766099 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 17:37:59.771348 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 17:37:59.773113 augenrules[1382]: No rules Dec 12 17:37:59.775310 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:37:59.775516 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
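
audit-rules.service loads whatever augenrules assembles from /etc/audit/rules.d; the "No rules" line above means that set was empty. This is directly verifiable on the running system:

    # With no audit rules configured, auditctl prints "No rules".
    auditctl -l
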
Dec 12 17:37:59.776973 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:37:59.781444 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 17:37:59.785644 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:37:59.788393 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:37:59.795711 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:37:59.808252 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:37:59.817242 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:37:59.818579 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:37:59.818628 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:37:59.821818 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:37:59.823578 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 17:37:59.825986 systemd[1]: Finished ensure-sysext.service. Dec 12 17:37:59.827621 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:37:59.827790 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:37:59.841430 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:37:59.843221 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:37:59.844569 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:37:59.847128 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:37:59.867285 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:37:59.871626 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:37:59.871828 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:37:59.879177 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 17:37:59.880562 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:37:59.880631 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:37:59.883135 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 12 17:37:59.897419 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 12 17:37:59.903981 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 17:37:59.934731 systemd-networkd[1426]: lo: Link UP Dec 12 17:37:59.934748 systemd-networkd[1426]: lo: Gained carrier Dec 12 17:37:59.935853 systemd-networkd[1426]: Enumeration completed Dec 12 17:37:59.935994 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Dec 12 17:37:59.936338 systemd-networkd[1426]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:37:59.936349 systemd-networkd[1426]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:37:59.936900 systemd-networkd[1426]: eth0: Link UP Dec 12 17:37:59.937011 systemd-networkd[1426]: eth0: Gained carrier Dec 12 17:37:59.937030 systemd-networkd[1426]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:37:59.938667 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 17:37:59.942211 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 17:37:59.945203 systemd-resolved[1348]: Positive Trust Anchors: Dec 12 17:37:59.945220 systemd-resolved[1348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:37:59.945252 systemd-resolved[1348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:37:59.946754 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 17:37:59.947831 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 17:37:59.951147 systemd-networkd[1426]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 17:37:59.951894 systemd-timesyncd[1442]: Network configuration changed, trying to establish connection. Dec 12 17:37:59.954511 systemd-timesyncd[1442]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 12 17:37:59.954583 systemd-timesyncd[1442]: Initial clock synchronization to Fri 2025-12-12 17:38:00.053073 UTC. Dec 12 17:37:59.954865 systemd-resolved[1348]: Defaulting to hostname 'linux'. Dec 12 17:37:59.956236 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:37:59.957460 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 17:37:59.959039 systemd[1]: Reached target network.target - Network. Dec 12 17:37:59.961819 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:37:59.962860 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:37:59.963875 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 17:37:59.964953 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 17:37:59.966197 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 17:37:59.967548 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 17:37:59.968747 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
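
This block is the network coming up end to end: systemd-networkd gives eth0 a DHCPv4 lease (10.0.0.95/16, gateway 10.0.0.1), systemd-resolved installs its DNSSEC trust anchors and falls back to the hostname 'linux', and systemd-timesyncd synchronizes against 10.0.0.1:123. The corresponding status commands are:

    # Link state and DHCP lease details for eth0.
    networkctl status eth0
    # Resolver configuration, including trust anchors and fallback hostname.
    resolvectl status
    # NTP server and synchronization state for systemd-timesyncd.
    timedatectl timesync-status
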
Dec 12 17:37:59.970448 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 17:37:59.970487 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:37:59.971469 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:37:59.973990 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 17:37:59.977167 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 17:37:59.979817 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 17:37:59.981205 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 17:37:59.982354 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 17:37:59.992743 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 17:37:59.995434 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 17:37:59.997074 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 17:37:59.998229 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:38:00.000126 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:38:00.000938 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:38:00.000967 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:38:00.003202 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 17:38:00.006115 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 17:38:00.010000 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 17:38:00.012237 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 17:38:00.014168 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 17:38:00.014994 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 17:38:00.021277 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 17:38:00.025275 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 17:38:00.025773 jq[1469]: false Dec 12 17:38:00.028280 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 17:38:00.033270 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 17:38:00.036378 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 17:38:00.038233 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 17:38:00.038611 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 17:38:00.039052 extend-filesystems[1470]: Found /dev/vda6 Dec 12 17:38:00.042272 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 17:38:00.044658 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 17:38:00.051109 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
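
Most units above are socket-activated (docker.socket, sshd.socket, the systemd-ssh-generator AF_UNIX and AF_VSOCK listeners, systemd-hostnamed.socket): the backing service only starts on the first connection, which is why sockets.target is reached before the daemons themselves. The active listeners and the services they trigger are shown by:

    # Active socket units and the service each one activates.
    systemctl list-sockets
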
Dec 12 17:38:00.052346 extend-filesystems[1470]: Found /dev/vda9 Dec 12 17:38:00.052801 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 17:38:00.053865 extend-filesystems[1470]: Checking size of /dev/vda9 Dec 12 17:38:00.054335 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 17:38:00.057129 jq[1488]: true Dec 12 17:38:00.054632 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 17:38:00.054784 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 17:38:00.056837 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 17:38:00.057064 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 17:38:00.068510 jq[1494]: true Dec 12 17:38:00.074187 (ntainerd)[1496]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 12 17:38:00.082683 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:38:00.088126 tar[1493]: linux-arm64/LICENSE Dec 12 17:38:00.088126 tar[1493]: linux-arm64/helm Dec 12 17:38:00.089837 extend-filesystems[1470]: Resized partition /dev/vda9 Dec 12 17:38:00.095188 extend-filesystems[1512]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 17:38:00.102200 dbus-daemon[1467]: [system] SELinux support is enabled Dec 12 17:38:00.102355 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 17:38:00.108142 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 12 17:38:00.107241 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 17:38:00.107265 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 17:38:00.109312 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 17:38:00.109337 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 17:38:00.140117 systemd-logind[1479]: Watching system buttons on /dev/input/event0 (Power Button) Dec 12 17:38:00.144234 update_engine[1481]: I20251212 17:38:00.143255 1481 main.cc:92] Flatcar Update Engine starting Dec 12 17:38:00.145192 systemd-logind[1479]: New seat seat0. Dec 12 17:38:00.147630 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 17:38:00.149299 systemd[1]: Started update-engine.service - Update Engine. Dec 12 17:38:00.151383 update_engine[1481]: I20251212 17:38:00.151235 1481 update_check_scheduler.cc:74] Next update check in 2m11s Dec 12 17:38:00.154090 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 12 17:38:00.154398 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 17:38:00.168077 extend-filesystems[1512]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 17:38:00.168077 extend-filesystems[1512]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 12 17:38:00.168077 extend-filesystems[1512]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 12 17:38:00.174413 extend-filesystems[1470]: Resized filesystem in /dev/vda9 Dec 12 17:38:00.173298 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
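
extend-filesystems.service above grows the root filesystem in place: /dev/vda9 is mounted on /, so resize2fs 1.47.3 performs an online resize from 553472 to 1864699 4k blocks. The manual equivalent, using the device name from this log (it will differ on other machines), is simply:

    # Online-grow the mounted ext4 root to fill its partition.
    resize2fs /dev/vda9
    # Confirm the new size.
    df -h /
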
Dec 12 17:38:00.182160 bash[1529]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:38:00.180508 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 17:38:00.182131 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 17:38:00.183693 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 17:38:00.188802 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 12 17:38:00.206873 locksmithd[1531]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 17:38:00.268281 containerd[1496]: time="2025-12-12T17:38:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 17:38:00.270329 containerd[1496]: time="2025-12-12T17:38:00.270290098Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 17:38:00.288119 containerd[1496]: time="2025-12-12T17:38:00.287235339Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.609µs" Dec 12 17:38:00.288119 containerd[1496]: time="2025-12-12T17:38:00.287276844Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 17:38:00.288119 containerd[1496]: time="2025-12-12T17:38:00.287294378Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 17:38:00.288119 containerd[1496]: time="2025-12-12T17:38:00.287453111Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 17:38:00.288119 containerd[1496]: time="2025-12-12T17:38:00.287468701Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 17:38:00.288119 containerd[1496]: time="2025-12-12T17:38:00.287491782Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:38:00.288119 containerd[1496]: time="2025-12-12T17:38:00.287540211Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:38:00.288119 containerd[1496]: time="2025-12-12T17:38:00.287552319Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:38:00.288119 containerd[1496]: time="2025-12-12T17:38:00.287779242Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:38:00.288119 containerd[1496]: time="2025-12-12T17:38:00.287801757Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:38:00.288119 containerd[1496]: time="2025-12-12T17:38:00.287812771Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:38:00.288119 containerd[1496]: time="2025-12-12T17:38:00.287820788Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 
17:38:00.288397 containerd[1496]: time="2025-12-12T17:38:00.287893109Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 17:38:00.288470 containerd[1496]: time="2025-12-12T17:38:00.288445881Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:38:00.288552 containerd[1496]: time="2025-12-12T17:38:00.288537152Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:38:00.288601 containerd[1496]: time="2025-12-12T17:38:00.288588133Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 17:38:00.288685 containerd[1496]: time="2025-12-12T17:38:00.288670334Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 17:38:00.289011 containerd[1496]: time="2025-12-12T17:38:00.288990149Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 17:38:00.289161 containerd[1496]: time="2025-12-12T17:38:00.289142322Z" level=info msg="metadata content store policy set" policy=shared Dec 12 17:38:00.294386 containerd[1496]: time="2025-12-12T17:38:00.294354922Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 17:38:00.294513 containerd[1496]: time="2025-12-12T17:38:00.294499604Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 17:38:00.294659 containerd[1496]: time="2025-12-12T17:38:00.294642504Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 17:38:00.294722 containerd[1496]: time="2025-12-12T17:38:00.294709115Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 17:38:00.294776 containerd[1496]: time="2025-12-12T17:38:00.294762283Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 17:38:00.294837 containerd[1496]: time="2025-12-12T17:38:00.294823265Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 17:38:00.294902 containerd[1496]: time="2025-12-12T17:38:00.294888783Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 17:38:00.294956 containerd[1496]: time="2025-12-12T17:38:00.294942760Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 17:38:00.295016 containerd[1496]: time="2025-12-12T17:38:00.295002042Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 17:38:00.295093 containerd[1496]: time="2025-12-12T17:38:00.295055817Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 17:38:00.295168 containerd[1496]: time="2025-12-12T17:38:00.295152596Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 17:38:00.295224 containerd[1496]: time="2025-12-12T17:38:00.295212404Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 17:38:00.295431 containerd[1496]: 
time="2025-12-12T17:38:00.295405475Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 17:38:00.295504 containerd[1496]: time="2025-12-12T17:38:00.295489539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 17:38:00.295561 containerd[1496]: time="2025-12-12T17:38:00.295546715Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 17:38:00.295629 containerd[1496]: time="2025-12-12T17:38:00.295615027Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 17:38:00.295684 containerd[1496]: time="2025-12-12T17:38:00.295671070Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 17:38:00.295738 containerd[1496]: time="2025-12-12T17:38:00.295724318Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 17:38:00.295805 containerd[1496]: time="2025-12-12T17:38:00.295790646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 17:38:00.295868 containerd[1496]: time="2025-12-12T17:38:00.295853329Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 17:38:00.295920 containerd[1496]: time="2025-12-12T17:38:00.295907955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 17:38:00.295972 containerd[1496]: time="2025-12-12T17:38:00.295959462Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 17:38:00.296033 containerd[1496]: time="2025-12-12T17:38:00.296018946Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 17:38:00.296286 containerd[1496]: time="2025-12-12T17:38:00.296267331Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 17:38:00.296358 containerd[1496]: time="2025-12-12T17:38:00.296344956Z" level=info msg="Start snapshots syncer" Dec 12 17:38:00.296438 containerd[1496]: time="2025-12-12T17:38:00.296424161Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 17:38:00.296910 containerd[1496]: time="2025-12-12T17:38:00.296867520Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 17:38:00.297094 containerd[1496]: time="2025-12-12T17:38:00.297056866Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 17:38:00.297279 containerd[1496]: time="2025-12-12T17:38:00.297219325Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 17:38:00.297611 containerd[1496]: time="2025-12-12T17:38:00.297589189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 17:38:00.297691 containerd[1496]: time="2025-12-12T17:38:00.297674953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 17:38:00.297745 containerd[1496]: time="2025-12-12T17:38:00.297730793Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 17:38:00.297816 containerd[1496]: time="2025-12-12T17:38:00.297800725Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 17:38:00.297874 containerd[1496]: time="2025-12-12T17:38:00.297860979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 17:38:00.297943 containerd[1496]: time="2025-12-12T17:38:00.297929655Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 17:38:00.298011 containerd[1496]: time="2025-12-12T17:38:00.297998493Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 17:38:00.298109 containerd[1496]: time="2025-12-12T17:38:00.298064781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 17:38:00.298181 containerd[1496]: 
time="2025-12-12T17:38:00.298166459Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 17:38:00.298238 containerd[1496]: time="2025-12-12T17:38:00.298223352Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:38:00.298377 containerd[1496]: time="2025-12-12T17:38:00.298334627Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:38:00.298486 containerd[1496]: time="2025-12-12T17:38:00.298467809Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:38:00.298559 containerd[1496]: time="2025-12-12T17:38:00.298544665Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:38:00.298611 containerd[1496]: time="2025-12-12T17:38:00.298598278Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:38:00.298656 containerd[1496]: time="2025-12-12T17:38:00.298644035Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:38:00.298705 containerd[1496]: time="2025-12-12T17:38:00.298692708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:38:00.298754 containerd[1496]: time="2025-12-12T17:38:00.298742190Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:38:00.298872 containerd[1496]: time="2025-12-12T17:38:00.298860430Z" level=info msg="runtime interface created" Dec 12 17:38:00.298923 containerd[1496]: time="2025-12-12T17:38:00.298911735Z" level=info msg="created NRI interface" Dec 12 17:38:00.298971 containerd[1496]: time="2025-12-12T17:38:00.298958545Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 17:38:00.299036 containerd[1496]: time="2025-12-12T17:38:00.299022889Z" level=info msg="Connect containerd service" Dec 12 17:38:00.299124 containerd[1496]: time="2025-12-12T17:38:00.299110030Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:38:00.300603 containerd[1496]: time="2025-12-12T17:38:00.300096280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:38:00.373433 containerd[1496]: time="2025-12-12T17:38:00.373309762Z" level=info msg="Start subscribing containerd event" Dec 12 17:38:00.373433 containerd[1496]: time="2025-12-12T17:38:00.373409861Z" level=info msg="Start recovering state" Dec 12 17:38:00.373576 containerd[1496]: time="2025-12-12T17:38:00.373498986Z" level=info msg="Start event monitor" Dec 12 17:38:00.373576 containerd[1496]: time="2025-12-12T17:38:00.373516844Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:38:00.373576 containerd[1496]: time="2025-12-12T17:38:00.373525388Z" level=info msg="Start streaming server" Dec 12 17:38:00.373576 containerd[1496]: time="2025-12-12T17:38:00.373534418Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:38:00.373576 containerd[1496]: 
time="2025-12-12T17:38:00.373541585Z" level=info msg="runtime interface starting up..." Dec 12 17:38:00.373576 containerd[1496]: time="2025-12-12T17:38:00.373546566Z" level=info msg="starting plugins..." Dec 12 17:38:00.373576 containerd[1496]: time="2025-12-12T17:38:00.373560171Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:38:00.373880 containerd[1496]: time="2025-12-12T17:38:00.373855812Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 17:38:00.374013 containerd[1496]: time="2025-12-12T17:38:00.373996647Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 17:38:00.379877 containerd[1496]: time="2025-12-12T17:38:00.379845069Z" level=info msg="containerd successfully booted in 0.112058s" Dec 12 17:38:00.379940 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 17:38:00.457291 tar[1493]: linux-arm64/README.md Dec 12 17:38:00.477315 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 17:38:00.715738 sshd_keygen[1489]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:38:00.736099 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:38:00.738762 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:38:00.765864 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 17:38:00.766122 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:38:00.768594 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 17:38:00.785456 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:38:00.788163 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 17:38:00.790160 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 12 17:38:00.791298 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 17:38:01.466373 systemd-networkd[1426]: eth0: Gained IPv6LL Dec 12 17:38:01.468686 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 17:38:01.471308 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:38:01.474311 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 12 17:38:01.476891 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:38:01.489378 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:38:01.504024 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 12 17:38:01.505244 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 12 17:38:01.506680 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 17:38:01.511198 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 17:38:02.089247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:38:02.090579 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 17:38:02.093987 (kubelet)[1604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:38:02.096166 systemd[1]: Startup finished in 2.034s (kernel) + 4.808s (initrd) + 3.737s (userspace) = 10.581s. 
Dec 12 17:38:02.453031 kubelet[1604]: E1212 17:38:02.452891 1604 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:38:02.455621 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:38:02.455773 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:38:02.456155 systemd[1]: kubelet.service: Consumed 757ms CPU time, 258.8M memory peak. Dec 12 17:38:06.947731 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:38:06.951246 systemd[1]: Started sshd@0-10.0.0.95:22-10.0.0.1:38622.service - OpenSSH per-connection server daemon (10.0.0.1:38622). Dec 12 17:38:07.035295 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 38622 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:38:07.037282 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:38:07.045675 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 17:38:07.048191 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 17:38:07.059790 systemd-logind[1479]: New session 1 of user core. Dec 12 17:38:07.071136 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:38:07.073983 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 17:38:07.093386 (systemd)[1624]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:38:07.095857 systemd-logind[1479]: New session c1 of user core. Dec 12 17:38:07.211492 systemd[1624]: Queued start job for default target default.target. Dec 12 17:38:07.233979 systemd[1624]: Created slice app.slice - User Application Slice. Dec 12 17:38:07.234007 systemd[1624]: Reached target paths.target - Paths. Dec 12 17:38:07.234047 systemd[1624]: Reached target timers.target - Timers. Dec 12 17:38:07.235229 systemd[1624]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:38:07.245443 systemd[1624]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 17:38:07.245553 systemd[1624]: Reached target sockets.target - Sockets. Dec 12 17:38:07.245589 systemd[1624]: Reached target basic.target - Basic System. Dec 12 17:38:07.245624 systemd[1624]: Reached target default.target - Main User Target. Dec 12 17:38:07.245651 systemd[1624]: Startup finished in 143ms. Dec 12 17:38:07.245992 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 17:38:07.249210 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 17:38:07.327323 systemd[1]: Started sshd@1-10.0.0.95:22-10.0.0.1:38626.service - OpenSSH per-connection server daemon (10.0.0.1:38626). Dec 12 17:38:07.388793 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 38626 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:38:07.390156 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:38:07.395439 systemd-logind[1479]: New session 2 of user core. Dec 12 17:38:07.407258 systemd[1]: Started session-2.scope - Session 2 of User core. 
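
The kubelet failure above is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml is written by kubeadm during init/join, so the unit crash-loops until that happens. For orientation, a minimal sketch of such a file; the values are assumptions, except that cgroupDriver: systemd matches SystemdCgroup=true in the containerd config dump earlier, and the CA and static-pod paths match lines that appear later in this log.

# /var/lib/kubelet/config.yaml - minimal sketch; normally generated by
# kubeadm, and every value here is illustrative.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                     # matches SystemdCgroup=true above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests  # where the static pods below come from
authentication:
  anonymous:
    enabled: false
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
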
Dec 12 17:38:07.464611 sshd[1639]: Connection closed by 10.0.0.1 port 38626 Dec 12 17:38:07.465097 sshd-session[1636]: pam_unix(sshd:session): session closed for user core Dec 12 17:38:07.476991 systemd[1]: sshd@1-10.0.0.95:22-10.0.0.1:38626.service: Deactivated successfully. Dec 12 17:38:07.479856 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 17:38:07.483875 systemd-logind[1479]: Session 2 logged out. Waiting for processes to exit. Dec 12 17:38:07.486283 systemd[1]: Started sshd@2-10.0.0.95:22-10.0.0.1:38640.service - OpenSSH per-connection server daemon (10.0.0.1:38640). Dec 12 17:38:07.488788 systemd-logind[1479]: Removed session 2. Dec 12 17:38:07.540107 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 38640 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:38:07.541489 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:38:07.546234 systemd-logind[1479]: New session 3 of user core. Dec 12 17:38:07.554250 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 17:38:07.603442 sshd[1648]: Connection closed by 10.0.0.1 port 38640 Dec 12 17:38:07.603824 sshd-session[1645]: pam_unix(sshd:session): session closed for user core Dec 12 17:38:07.622969 systemd[1]: sshd@2-10.0.0.95:22-10.0.0.1:38640.service: Deactivated successfully. Dec 12 17:38:07.625854 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 17:38:07.627043 systemd-logind[1479]: Session 3 logged out. Waiting for processes to exit. Dec 12 17:38:07.630381 systemd[1]: Started sshd@3-10.0.0.95:22-10.0.0.1:38650.service - OpenSSH per-connection server daemon (10.0.0.1:38650). Dec 12 17:38:07.631403 systemd-logind[1479]: Removed session 3. Dec 12 17:38:07.697951 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 38650 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:38:07.699991 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:38:07.704864 systemd-logind[1479]: New session 4 of user core. Dec 12 17:38:07.714269 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 17:38:07.774555 sshd[1658]: Connection closed by 10.0.0.1 port 38650 Dec 12 17:38:07.775098 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Dec 12 17:38:07.785093 systemd[1]: sshd@3-10.0.0.95:22-10.0.0.1:38650.service: Deactivated successfully. Dec 12 17:38:07.786547 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 17:38:07.788506 systemd-logind[1479]: Session 4 logged out. Waiting for processes to exit. Dec 12 17:38:07.789682 systemd[1]: Started sshd@4-10.0.0.95:22-10.0.0.1:38660.service - OpenSSH per-connection server daemon (10.0.0.1:38660). Dec 12 17:38:07.790782 systemd-logind[1479]: Removed session 4. Dec 12 17:38:07.855324 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 38660 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:38:07.856708 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:38:07.860827 systemd-logind[1479]: New session 5 of user core. Dec 12 17:38:07.871257 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 12 17:38:07.929866 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 17:38:07.930169 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:38:07.951158 sudo[1668]: pam_unix(sudo:session): session closed for user root Dec 12 17:38:07.952978 sshd[1667]: Connection closed by 10.0.0.1 port 38660 Dec 12 17:38:07.953668 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Dec 12 17:38:07.967448 systemd[1]: sshd@4-10.0.0.95:22-10.0.0.1:38660.service: Deactivated successfully. Dec 12 17:38:07.972538 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 17:38:07.973659 systemd-logind[1479]: Session 5 logged out. Waiting for processes to exit. Dec 12 17:38:07.978093 systemd[1]: Started sshd@5-10.0.0.95:22-10.0.0.1:38662.service - OpenSSH per-connection server daemon (10.0.0.1:38662). Dec 12 17:38:07.979292 systemd-logind[1479]: Removed session 5. Dec 12 17:38:08.051348 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 38662 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:38:08.052748 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:38:08.058783 systemd-logind[1479]: New session 6 of user core. Dec 12 17:38:08.073295 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 17:38:08.124233 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 17:38:08.124515 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:38:08.201466 sudo[1679]: pam_unix(sudo:session): session closed for user root Dec 12 17:38:08.206413 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 17:38:08.206665 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:38:08.216044 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:38:08.261206 augenrules[1701]: No rules Dec 12 17:38:08.262836 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:38:08.263084 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:38:08.263995 sudo[1678]: pam_unix(sudo:session): session closed for user root Dec 12 17:38:08.266217 sshd[1677]: Connection closed by 10.0.0.1 port 38662 Dec 12 17:38:08.266278 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Dec 12 17:38:08.281122 systemd[1]: sshd@5-10.0.0.95:22-10.0.0.1:38662.service: Deactivated successfully. Dec 12 17:38:08.283295 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 17:38:08.284252 systemd-logind[1479]: Session 6 logged out. Waiting for processes to exit. Dec 12 17:38:08.286262 systemd[1]: Started sshd@6-10.0.0.95:22-10.0.0.1:38664.service - OpenSSH per-connection server daemon (10.0.0.1:38664). Dec 12 17:38:08.287229 systemd-logind[1479]: Removed session 6. Dec 12 17:38:08.347303 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 38664 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:38:08.349905 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:38:08.355474 systemd-logind[1479]: New session 7 of user core. Dec 12 17:38:08.375226 systemd[1]: Started session-7.scope - Session 7 of User core. 
Dec 12 17:38:08.426182 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 17:38:08.426717 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:38:08.709685 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 17:38:08.725430 (dockerd)[1735]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 17:38:08.931584 dockerd[1735]: time="2025-12-12T17:38:08.931516459Z" level=info msg="Starting up" Dec 12 17:38:08.932393 dockerd[1735]: time="2025-12-12T17:38:08.932367891Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 17:38:08.942579 dockerd[1735]: time="2025-12-12T17:38:08.942540923Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 17:38:08.994837 dockerd[1735]: time="2025-12-12T17:38:08.994735331Z" level=info msg="Loading containers: start." Dec 12 17:38:09.003105 kernel: Initializing XFRM netlink socket Dec 12 17:38:09.235229 systemd-networkd[1426]: docker0: Link UP Dec 12 17:38:09.243714 dockerd[1735]: time="2025-12-12T17:38:09.243652483Z" level=info msg="Loading containers: done." Dec 12 17:38:09.262386 dockerd[1735]: time="2025-12-12T17:38:09.262128071Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 17:38:09.262386 dockerd[1735]: time="2025-12-12T17:38:09.262239122Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 17:38:09.262386 dockerd[1735]: time="2025-12-12T17:38:09.262337364Z" level=info msg="Initializing buildkit" Dec 12 17:38:09.290748 dockerd[1735]: time="2025-12-12T17:38:09.290694886Z" level=info msg="Completed buildkit initialization" Dec 12 17:38:09.298415 dockerd[1735]: time="2025-12-12T17:38:09.298355054Z" level=info msg="Daemon has completed initialization" Dec 12 17:38:09.298674 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 17:38:09.300649 dockerd[1735]: time="2025-12-12T17:38:09.298471363Z" level=info msg="API listen on /run/docker.sock" Dec 12 17:38:09.905600 containerd[1496]: time="2025-12-12T17:38:09.905539441Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 12 17:38:10.553049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount224820462.mount: Deactivated successfully. 
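
The PullImage request above is handled by the system containerd's CRI service (pid 1496), not by dockerd, which spawned its own containerd on a separate socket. For comparison, a minimal sketch of the same kind of pull through the containerd Go client; the image reference is an example, error handling is abbreviated, and containerd 2.x hosts this client under the github.com/containerd/containerd/v2/client module (the older import path is used here for brevity).

// pull_image.go - sketch of an image pull via the containerd client API,
// against the socket shown in the log above.
package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin registered the "k8s.io" namespace with NRI earlier in
	// the log; images pulled here land in that same namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
}
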
Dec 12 17:38:11.575220 containerd[1496]: time="2025-12-12T17:38:11.575141841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:11.576437 containerd[1496]: time="2025-12-12T17:38:11.576193256Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387283" Dec 12 17:38:11.577222 containerd[1496]: time="2025-12-12T17:38:11.577188191Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:11.580443 containerd[1496]: time="2025-12-12T17:38:11.580408264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:11.581854 containerd[1496]: time="2025-12-12T17:38:11.581815726Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.676228798s" Dec 12 17:38:11.581917 containerd[1496]: time="2025-12-12T17:38:11.581861135Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Dec 12 17:38:11.583177 containerd[1496]: time="2025-12-12T17:38:11.583153071Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 12 17:38:12.706163 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 17:38:12.708078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:38:12.862939 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
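
As a rough cross-check on the pull above: about 27,387,283 bytes were read in roughly 1.676 s, i.e. 27.4 MB / 1.676 s ≈ 16 MB/s effective throughput from registry.k8s.io. (The "size 27383880" in the Pulled message is a separate counter from "bytes read", so small deltas between the two are normal.)
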
Dec 12 17:38:12.867011 (kubelet)[2023]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:38:12.875282 containerd[1496]: time="2025-12-12T17:38:12.874368282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:12.876588 containerd[1496]: time="2025-12-12T17:38:12.876553295Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553083" Dec 12 17:38:12.879571 containerd[1496]: time="2025-12-12T17:38:12.878360331Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:12.884223 containerd[1496]: time="2025-12-12T17:38:12.882753332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:12.884223 containerd[1496]: time="2025-12-12T17:38:12.883759223Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.300574185s" Dec 12 17:38:12.884223 containerd[1496]: time="2025-12-12T17:38:12.883787534Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Dec 12 17:38:12.884775 containerd[1496]: time="2025-12-12T17:38:12.884742900Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 12 17:38:12.908913 kubelet[2023]: E1212 17:38:12.908864 2023 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:38:12.913553 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:38:12.913696 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:38:12.915172 systemd[1]: kubelet.service: Consumed 156ms CPU time, 107.6M memory peak. 
Dec 12 17:38:14.043286 containerd[1496]: time="2025-12-12T17:38:14.043217748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:14.044092 containerd[1496]: time="2025-12-12T17:38:14.044025681Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298069" Dec 12 17:38:14.044794 containerd[1496]: time="2025-12-12T17:38:14.044754945Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:14.048164 containerd[1496]: time="2025-12-12T17:38:14.048114841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:14.048424 containerd[1496]: time="2025-12-12T17:38:14.048394612Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.163534666s" Dec 12 17:38:14.048474 containerd[1496]: time="2025-12-12T17:38:14.048430119Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Dec 12 17:38:14.048914 containerd[1496]: time="2025-12-12T17:38:14.048889551Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 12 17:38:15.047202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3658545945.mount: Deactivated successfully. 
Dec 12 17:38:15.300298 containerd[1496]: time="2025-12-12T17:38:15.300127557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:15.301342 containerd[1496]: time="2025-12-12T17:38:15.301148092Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258675" Dec 12 17:38:15.302651 containerd[1496]: time="2025-12-12T17:38:15.302616050Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:15.304500 containerd[1496]: time="2025-12-12T17:38:15.304469808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:15.304989 containerd[1496]: time="2025-12-12T17:38:15.304954693Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.256032764s" Dec 12 17:38:15.305044 containerd[1496]: time="2025-12-12T17:38:15.304990994Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Dec 12 17:38:15.305615 containerd[1496]: time="2025-12-12T17:38:15.305580332Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 12 17:38:15.830855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3335557523.mount: Deactivated successfully. 
Dec 12 17:38:16.616400 containerd[1496]: time="2025-12-12T17:38:16.615710500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:16.617876 containerd[1496]: time="2025-12-12T17:38:16.617836069Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Dec 12 17:38:16.619058 containerd[1496]: time="2025-12-12T17:38:16.619028602Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:16.622774 containerd[1496]: time="2025-12-12T17:38:16.622730223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:16.624148 containerd[1496]: time="2025-12-12T17:38:16.624114995Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.318500247s" Dec 12 17:38:16.624208 containerd[1496]: time="2025-12-12T17:38:16.624153371Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Dec 12 17:38:16.624610 containerd[1496]: time="2025-12-12T17:38:16.624587202Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 17:38:17.044635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2643351284.mount: Deactivated successfully. 
Dec 12 17:38:17.064239 containerd[1496]: time="2025-12-12T17:38:17.064170873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:38:17.065180 containerd[1496]: time="2025-12-12T17:38:17.065144151Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Dec 12 17:38:17.066308 containerd[1496]: time="2025-12-12T17:38:17.066256446Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:38:17.068196 containerd[1496]: time="2025-12-12T17:38:17.068140443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:38:17.069122 containerd[1496]: time="2025-12-12T17:38:17.068753462Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 444.132694ms" Dec 12 17:38:17.069122 containerd[1496]: time="2025-12-12T17:38:17.068783901Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 12 17:38:17.069429 containerd[1496]: time="2025-12-12T17:38:17.069386468Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 12 17:38:17.732203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2413726488.mount: Deactivated successfully. 
Dec 12 17:38:19.180615 containerd[1496]: time="2025-12-12T17:38:19.180564279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:19.181685 containerd[1496]: time="2025-12-12T17:38:19.181355730Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013653" Dec 12 17:38:19.182466 containerd[1496]: time="2025-12-12T17:38:19.182430897Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:19.185586 containerd[1496]: time="2025-12-12T17:38:19.185535401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:19.186720 containerd[1496]: time="2025-12-12T17:38:19.186670067Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.11724968s" Dec 12 17:38:19.186720 containerd[1496]: time="2025-12-12T17:38:19.186710066Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Dec 12 17:38:22.885627 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:38:22.885762 systemd[1]: kubelet.service: Consumed 156ms CPU time, 107.6M memory peak. Dec 12 17:38:22.887675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:38:22.911682 systemd[1]: Reload requested from client PID 2185 ('systemctl') (unit session-7.scope)... Dec 12 17:38:22.911698 systemd[1]: Reloading... Dec 12 17:38:22.988553 zram_generator::config[2231]: No configuration found. Dec 12 17:38:23.175979 systemd[1]: Reloading finished in 263 ms. Dec 12 17:38:23.236021 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:38:23.238930 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:38:23.240408 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:38:23.240780 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:38:23.240892 systemd[1]: kubelet.service: Consumed 101ms CPU time, 95.2M memory peak. Dec 12 17:38:23.242543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:38:23.401178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:38:23.406210 (kubelet)[2276]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:38:23.449749 kubelet[2276]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:38:23.449749 kubelet[2276]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Dec 12 17:38:23.449749 kubelet[2276]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:38:23.450084 kubelet[2276]: I1212 17:38:23.449740 2276 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:38:24.333225 kubelet[2276]: I1212 17:38:24.333173 2276 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 17:38:24.333225 kubelet[2276]: I1212 17:38:24.333204 2276 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:38:24.333451 kubelet[2276]: I1212 17:38:24.333422 2276 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:38:24.356933 kubelet[2276]: E1212 17:38:24.355833 2276 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 17:38:24.357671 kubelet[2276]: I1212 17:38:24.357645 2276 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:38:24.368428 kubelet[2276]: I1212 17:38:24.368394 2276 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:38:24.371166 kubelet[2276]: I1212 17:38:24.371140 2276 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 17:38:24.372180 kubelet[2276]: I1212 17:38:24.372136 2276 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:38:24.372343 kubelet[2276]: I1212 17:38:24.372180 2276 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:38:24.372428 kubelet[2276]: I1212 17:38:24.372404 2276 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:38:24.372428 kubelet[2276]: I1212 17:38:24.372413 2276 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 17:38:24.373048 kubelet[2276]: I1212 17:38:24.372592 2276 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:38:24.375173 kubelet[2276]: I1212 17:38:24.375137 2276 kubelet.go:480] "Attempting to sync node with API server" Dec 12 17:38:24.375173 kubelet[2276]: I1212 17:38:24.375166 2276 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:38:24.375281 kubelet[2276]: I1212 17:38:24.375269 2276 kubelet.go:386] "Adding apiserver pod source" Dec 12 17:38:24.376732 kubelet[2276]: I1212 17:38:24.376636 2276 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:38:24.377729 kubelet[2276]: I1212 17:38:24.377699 2276 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 17:38:24.378700 kubelet[2276]: I1212 17:38:24.378668 2276 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:38:24.378810 kubelet[2276]: W1212 17:38:24.378797 2276 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 12 17:38:24.379872 kubelet[2276]: E1212 17:38:24.379842 2276 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 17:38:24.380636 kubelet[2276]: E1212 17:38:24.380597 2276 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 17:38:24.381440 kubelet[2276]: I1212 17:38:24.381421 2276 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:38:24.381486 kubelet[2276]: I1212 17:38:24.381467 2276 server.go:1289] "Started kubelet" Dec 12 17:38:24.381585 kubelet[2276]: I1212 17:38:24.381555 2276 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:38:24.387109 kubelet[2276]: I1212 17:38:24.387014 2276 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:38:24.387632 kubelet[2276]: I1212 17:38:24.387363 2276 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:38:24.387883 kubelet[2276]: I1212 17:38:24.387822 2276 server.go:317] "Adding debug handlers to kubelet server" Dec 12 17:38:24.388042 kubelet[2276]: I1212 17:38:24.388012 2276 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:38:24.389051 kubelet[2276]: E1212 17:38:24.387989 2276 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.95:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.95:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1880887857e6e37d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 17:38:24.381436797 +0000 UTC m=+0.971381849,LastTimestamp:2025-12-12 17:38:24.381436797 +0000 UTC m=+0.971381849,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 12 17:38:24.389492 kubelet[2276]: I1212 17:38:24.389449 2276 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:38:24.389698 kubelet[2276]: E1212 17:38:24.389681 2276 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:38:24.389773 kubelet[2276]: I1212 17:38:24.389764 2276 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:38:24.390047 kubelet[2276]: I1212 17:38:24.390027 2276 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:38:24.390210 kubelet[2276]: I1212 17:38:24.390197 2276 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:38:24.390710 kubelet[2276]: E1212 17:38:24.390684 2276 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:38:24.394112 kubelet[2276]: I1212 17:38:24.391648 2276 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:38:24.394112 kubelet[2276]: E1212 17:38:24.393333 2276 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="200ms" Dec 12 17:38:24.394112 kubelet[2276]: I1212 17:38:24.393501 2276 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:38:24.394112 kubelet[2276]: I1212 17:38:24.393512 2276 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:38:24.398183 kubelet[2276]: E1212 17:38:24.397718 2276 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:38:24.404723 kubelet[2276]: I1212 17:38:24.404668 2276 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:38:24.404723 kubelet[2276]: I1212 17:38:24.404695 2276 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:38:24.404723 kubelet[2276]: I1212 17:38:24.404730 2276 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:38:24.408502 kubelet[2276]: I1212 17:38:24.408460 2276 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 17:38:24.409768 kubelet[2276]: I1212 17:38:24.409747 2276 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 17:38:24.409868 kubelet[2276]: I1212 17:38:24.409859 2276 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 17:38:24.410153 kubelet[2276]: I1212 17:38:24.410134 2276 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 17:38:24.410233 kubelet[2276]: I1212 17:38:24.410222 2276 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 17:38:24.410325 kubelet[2276]: E1212 17:38:24.410301 2276 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:38:24.411528 kubelet[2276]: E1212 17:38:24.411505 2276 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 17:38:24.489879 kubelet[2276]: E1212 17:38:24.489831 2276 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:38:24.511292 kubelet[2276]: E1212 17:38:24.511222 2276 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 17:38:24.544155 kubelet[2276]: I1212 17:38:24.544099 2276 policy_none.go:49] "None policy: Start" Dec 12 17:38:24.544155 kubelet[2276]: I1212 17:38:24.544136 2276 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:38:24.544155 kubelet[2276]: I1212 17:38:24.544150 2276 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:38:24.549892 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 17:38:24.560381 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 17:38:24.563186 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 17:38:24.577816 kubelet[2276]: E1212 17:38:24.577788 2276 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:38:24.578048 kubelet[2276]: I1212 17:38:24.578019 2276 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:38:24.578114 kubelet[2276]: I1212 17:38:24.578042 2276 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:38:24.578558 kubelet[2276]: I1212 17:38:24.578503 2276 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:38:24.579467 kubelet[2276]: E1212 17:38:24.578976 2276 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:38:24.579467 kubelet[2276]: E1212 17:38:24.579037 2276 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 12 17:38:24.594835 kubelet[2276]: E1212 17:38:24.593813 2276 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="400ms" Dec 12 17:38:24.679718 kubelet[2276]: I1212 17:38:24.679583 2276 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:38:24.680031 kubelet[2276]: E1212 17:38:24.679984 2276 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Dec 12 17:38:24.722412 systemd[1]: Created slice kubepods-burstable-podf36d543f213d7afaec043ff8dd2098bc.slice - libcontainer container kubepods-burstable-podf36d543f213d7afaec043ff8dd2098bc.slice. Dec 12 17:38:24.732822 kubelet[2276]: E1212 17:38:24.732777 2276 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:38:24.734782 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. Dec 12 17:38:24.737397 kubelet[2276]: E1212 17:38:24.737275 2276 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:38:24.754241 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. 
Dec 12 17:38:24.756058 kubelet[2276]: E1212 17:38:24.756013 2276 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:38:24.791338 kubelet[2276]: I1212 17:38:24.791289 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f36d543f213d7afaec043ff8dd2098bc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f36d543f213d7afaec043ff8dd2098bc\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:38:24.791338 kubelet[2276]: I1212 17:38:24.791334 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:38:24.791501 kubelet[2276]: I1212 17:38:24.791353 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:38:24.791501 kubelet[2276]: I1212 17:38:24.791369 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:38:24.791501 kubelet[2276]: I1212 17:38:24.791386 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:38:24.791501 kubelet[2276]: I1212 17:38:24.791398 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f36d543f213d7afaec043ff8dd2098bc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f36d543f213d7afaec043ff8dd2098bc\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:38:24.791501 kubelet[2276]: I1212 17:38:24.791411 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f36d543f213d7afaec043ff8dd2098bc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f36d543f213d7afaec043ff8dd2098bc\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:38:24.791607 kubelet[2276]: I1212 17:38:24.791424 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:38:24.791607 kubelet[2276]: I1212 17:38:24.791439 2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:38:24.881929 kubelet[2276]: I1212 17:38:24.881822 2276 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:38:24.882400 kubelet[2276]: E1212 17:38:24.882367 2276 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Dec 12 17:38:24.994585 kubelet[2276]: E1212 17:38:24.994531 2276 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="800ms" Dec 12 17:38:25.034765 containerd[1496]: time="2025-12-12T17:38:25.034705941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f36d543f213d7afaec043ff8dd2098bc,Namespace:kube-system,Attempt:0,}" Dec 12 17:38:25.038394 containerd[1496]: time="2025-12-12T17:38:25.038343092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Dec 12 17:38:25.057141 containerd[1496]: time="2025-12-12T17:38:25.057095375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Dec 12 17:38:25.065121 containerd[1496]: time="2025-12-12T17:38:25.065057138Z" level=info msg="connecting to shim 39daac0c81ae8f6f31991c9f2cf786ea5022ed108651a5d073892b14f4738c4b" address="unix:///run/containerd/s/71a3c9ee840f4747d78a39e721565e6230c7f81b6d5b2090fae5f4763e30b3ea" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:38:25.070372 containerd[1496]: time="2025-12-12T17:38:25.067318807Z" level=info msg="connecting to shim bb8334cbb97599378e376c7f8f63c24d703e6ac3f4f11dc89ab47a89ffa63fb6" address="unix:///run/containerd/s/bf8a1ca1d5603b2246e53eab8e1d7ca0443a23efe0e08ad5982744cefac27e22" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:38:25.092360 containerd[1496]: time="2025-12-12T17:38:25.092307538Z" level=info msg="connecting to shim 1032b3087a5013269c2af26a689d8dc4233bcbc7cf54de03031ecd97fc704afc" address="unix:///run/containerd/s/aa365da9dcd41d85d0f0498d877440e01b42b9ae5a0ee3989bba5a41a60fcb3e" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:38:25.094291 systemd[1]: Started cri-containerd-39daac0c81ae8f6f31991c9f2cf786ea5022ed108651a5d073892b14f4738c4b.scope - libcontainer container 39daac0c81ae8f6f31991c9f2cf786ea5022ed108651a5d073892b14f4738c4b. Dec 12 17:38:25.097958 systemd[1]: Started cri-containerd-bb8334cbb97599378e376c7f8f63c24d703e6ac3f4f11dc89ab47a89ffa63fb6.scope - libcontainer container bb8334cbb97599378e376c7f8f63c24d703e6ac3f4f11dc89ab47a89ffa63fb6. Dec 12 17:38:25.120222 systemd[1]: Started cri-containerd-1032b3087a5013269c2af26a689d8dc4233bcbc7cf54de03031ecd97fc704afc.scope - libcontainer container 1032b3087a5013269c2af26a689d8dc4233bcbc7cf54de03031ecd97fc704afc. 
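
Each "RunPodSandbox" entry above is a CRI RuntimeService call that containerd answers by connecting a shim over ttrpc and returning a sandbox ID. Note also the lease controller's retry interval doubling from 400ms to 800ms between attempts. A sketch of issuing the sandbox call directly against the CRI socket; the endpoint path is containerd's default (assumed, not read from this host) and the request is trimmed to the metadata fields the log shows.

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Containerd's standard CRI endpoint; assumed rather than read from this host.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	resp, err := rt.RunPodSandbox(context.TODO(), &runtimeapi.RunPodSandboxRequest{
    		Config: &runtimeapi.PodSandboxConfig{
    			Metadata: &runtimeapi.PodSandboxMetadata{
    				Name:      "kube-apiserver-localhost",
    				Uid:       "f36d543f213d7afaec043ff8dd2098bc",
    				Namespace: "kube-system",
    				Attempt:   0,
    			},
    		},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("sandbox id:", resp.PodSandboxId)
    }
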
Dec 12 17:38:25.142352 containerd[1496]: time="2025-12-12T17:38:25.141950173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"39daac0c81ae8f6f31991c9f2cf786ea5022ed108651a5d073892b14f4738c4b\"" Dec 12 17:38:25.144785 containerd[1496]: time="2025-12-12T17:38:25.144734871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f36d543f213d7afaec043ff8dd2098bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb8334cbb97599378e376c7f8f63c24d703e6ac3f4f11dc89ab47a89ffa63fb6\"" Dec 12 17:38:25.148804 containerd[1496]: time="2025-12-12T17:38:25.148764874Z" level=info msg="CreateContainer within sandbox \"39daac0c81ae8f6f31991c9f2cf786ea5022ed108651a5d073892b14f4738c4b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:38:25.150072 containerd[1496]: time="2025-12-12T17:38:25.150020103Z" level=info msg="CreateContainer within sandbox \"bb8334cbb97599378e376c7f8f63c24d703e6ac3f4f11dc89ab47a89ffa63fb6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:38:25.159509 containerd[1496]: time="2025-12-12T17:38:25.159454110Z" level=info msg="Container 82cb8c776e15d420c751e1c932da3d39fa09cf67e34a360095244e08637a6bda: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:38:25.162941 containerd[1496]: time="2025-12-12T17:38:25.162244491Z" level=info msg="Container f600b578aaba751ecbf1f78262a6d19013ffca142d9d1c74fb6a68cb3fa7f1d0: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:38:25.168959 containerd[1496]: time="2025-12-12T17:38:25.168923772Z" level=info msg="CreateContainer within sandbox \"39daac0c81ae8f6f31991c9f2cf786ea5022ed108651a5d073892b14f4738c4b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"82cb8c776e15d420c751e1c932da3d39fa09cf67e34a360095244e08637a6bda\"" Dec 12 17:38:25.169710 containerd[1496]: time="2025-12-12T17:38:25.169609032Z" level=info msg="StartContainer for \"82cb8c776e15d420c751e1c932da3d39fa09cf67e34a360095244e08637a6bda\"" Dec 12 17:38:25.170438 containerd[1496]: time="2025-12-12T17:38:25.170407822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1032b3087a5013269c2af26a689d8dc4233bcbc7cf54de03031ecd97fc704afc\"" Dec 12 17:38:25.170773 containerd[1496]: time="2025-12-12T17:38:25.170749171Z" level=info msg="connecting to shim 82cb8c776e15d420c751e1c932da3d39fa09cf67e34a360095244e08637a6bda" address="unix:///run/containerd/s/71a3c9ee840f4747d78a39e721565e6230c7f81b6d5b2090fae5f4763e30b3ea" protocol=ttrpc version=3 Dec 12 17:38:25.174474 containerd[1496]: time="2025-12-12T17:38:25.174442106Z" level=info msg="CreateContainer within sandbox \"1032b3087a5013269c2af26a689d8dc4233bcbc7cf54de03031ecd97fc704afc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 17:38:25.175215 containerd[1496]: time="2025-12-12T17:38:25.174813829Z" level=info msg="CreateContainer within sandbox \"bb8334cbb97599378e376c7f8f63c24d703e6ac3f4f11dc89ab47a89ffa63fb6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f600b578aaba751ecbf1f78262a6d19013ffca142d9d1c74fb6a68cb3fa7f1d0\"" Dec 12 17:38:25.175492 containerd[1496]: time="2025-12-12T17:38:25.175464794Z" level=info msg="StartContainer for 
\"f600b578aaba751ecbf1f78262a6d19013ffca142d9d1c74fb6a68cb3fa7f1d0\"" Dec 12 17:38:25.176569 containerd[1496]: time="2025-12-12T17:38:25.176535622Z" level=info msg="connecting to shim f600b578aaba751ecbf1f78262a6d19013ffca142d9d1c74fb6a68cb3fa7f1d0" address="unix:///run/containerd/s/bf8a1ca1d5603b2246e53eab8e1d7ca0443a23efe0e08ad5982744cefac27e22" protocol=ttrpc version=3 Dec 12 17:38:25.190436 containerd[1496]: time="2025-12-12T17:38:25.190378397Z" level=info msg="Container 17f0ecf5bfe06276a2ead9a5348b45b01f17b0e416e9bef9c025758d6cac3596: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:38:25.192415 systemd[1]: Started cri-containerd-82cb8c776e15d420c751e1c932da3d39fa09cf67e34a360095244e08637a6bda.scope - libcontainer container 82cb8c776e15d420c751e1c932da3d39fa09cf67e34a360095244e08637a6bda. Dec 12 17:38:25.201478 containerd[1496]: time="2025-12-12T17:38:25.201431673Z" level=info msg="CreateContainer within sandbox \"1032b3087a5013269c2af26a689d8dc4233bcbc7cf54de03031ecd97fc704afc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"17f0ecf5bfe06276a2ead9a5348b45b01f17b0e416e9bef9c025758d6cac3596\"" Dec 12 17:38:25.201978 containerd[1496]: time="2025-12-12T17:38:25.201948499Z" level=info msg="StartContainer for \"17f0ecf5bfe06276a2ead9a5348b45b01f17b0e416e9bef9c025758d6cac3596\"" Dec 12 17:38:25.202931 containerd[1496]: time="2025-12-12T17:38:25.202892872Z" level=info msg="connecting to shim 17f0ecf5bfe06276a2ead9a5348b45b01f17b0e416e9bef9c025758d6cac3596" address="unix:///run/containerd/s/aa365da9dcd41d85d0f0498d877440e01b42b9ae5a0ee3989bba5a41a60fcb3e" protocol=ttrpc version=3 Dec 12 17:38:25.209376 systemd[1]: Started cri-containerd-f600b578aaba751ecbf1f78262a6d19013ffca142d9d1c74fb6a68cb3fa7f1d0.scope - libcontainer container f600b578aaba751ecbf1f78262a6d19013ffca142d9d1c74fb6a68cb3fa7f1d0. Dec 12 17:38:25.227258 systemd[1]: Started cri-containerd-17f0ecf5bfe06276a2ead9a5348b45b01f17b0e416e9bef9c025758d6cac3596.scope - libcontainer container 17f0ecf5bfe06276a2ead9a5348b45b01f17b0e416e9bef9c025758d6cac3596. 
Dec 12 17:38:25.241636 containerd[1496]: time="2025-12-12T17:38:25.241589519Z" level=info msg="StartContainer for \"82cb8c776e15d420c751e1c932da3d39fa09cf67e34a360095244e08637a6bda\" returns successfully" Dec 12 17:38:25.263489 containerd[1496]: time="2025-12-12T17:38:25.263454804Z" level=info msg="StartContainer for \"f600b578aaba751ecbf1f78262a6d19013ffca142d9d1c74fb6a68cb3fa7f1d0\" returns successfully" Dec 12 17:38:25.277824 containerd[1496]: time="2025-12-12T17:38:25.277773147Z" level=info msg="StartContainer for \"17f0ecf5bfe06276a2ead9a5348b45b01f17b0e416e9bef9c025758d6cac3596\" returns successfully" Dec 12 17:38:25.284380 kubelet[2276]: I1212 17:38:25.284292 2276 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:38:25.284637 kubelet[2276]: E1212 17:38:25.284602 2276 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Dec 12 17:38:25.419078 kubelet[2276]: E1212 17:38:25.418939 2276 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:38:25.422272 kubelet[2276]: E1212 17:38:25.422242 2276 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:38:25.424721 kubelet[2276]: E1212 17:38:25.424699 2276 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:38:26.086290 kubelet[2276]: I1212 17:38:26.086256 2276 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:38:26.429119 kubelet[2276]: E1212 17:38:26.427330 2276 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:38:26.429382 kubelet[2276]: E1212 17:38:26.429367 2276 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:38:26.547593 kubelet[2276]: E1212 17:38:26.547551 2276 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 12 17:38:26.619097 kubelet[2276]: I1212 17:38:26.617750 2276 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:38:26.619097 kubelet[2276]: E1212 17:38:26.617792 2276 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Dec 12 17:38:26.694268 kubelet[2276]: I1212 17:38:26.693263 2276 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:38:26.701197 kubelet[2276]: E1212 17:38:26.701046 2276 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 12 17:38:26.701197 kubelet[2276]: I1212 17:38:26.701196 2276 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:38:26.704004 kubelet[2276]: E1212 17:38:26.703808 2276 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:38:26.704004 kubelet[2276]: I1212 17:38:26.703834 2276 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:38:26.705797 kubelet[2276]: E1212 17:38:26.705771 2276 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 12 17:38:27.381196 kubelet[2276]: I1212 17:38:27.379830 2276 apiserver.go:52] "Watching apiserver" Dec 12 17:38:27.390848 kubelet[2276]: I1212 17:38:27.390788 2276 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:38:27.427871 kubelet[2276]: I1212 17:38:27.427845 2276 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:38:28.845183 systemd[1]: Reload requested from client PID 2563 ('systemctl') (unit session-7.scope)... Dec 12 17:38:28.845197 systemd[1]: Reloading... Dec 12 17:38:28.912105 zram_generator::config[2609]: No configuration found. Dec 12 17:38:29.072923 systemd[1]: Reloading finished in 227 ms. Dec 12 17:38:29.105288 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:38:29.121138 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:38:29.122160 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:38:29.122221 systemd[1]: kubelet.service: Consumed 1.273s CPU time, 128M memory peak. Dec 12 17:38:29.123869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:38:29.281465 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:38:29.289338 (kubelet)[2648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:38:29.329231 kubelet[2648]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:38:29.329231 kubelet[2648]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:38:29.329231 kubelet[2648]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
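
All three deprecation warnings above point at the same remedy: move the flags into the KubeletConfiguration file passed via --config. A hedged sketch that prints the equivalent config stanza; the runtime endpoint is containerd's default (an assumption), while the volume plugin directory matches the /opt/libexec exec path visible later in this log. --pod-infra-container-image has no config-file equivalent; per the warning, the sandbox image comes from the CRI runtime as of 1.35.

    package main

    import (
    	"fmt"
    	"log"

    	kubeletconfig "k8s.io/kubelet/config/v1beta1"
    	"sigs.k8s.io/yaml"
    )

    func main() {
    	var cfg kubeletconfig.KubeletConfiguration
    	cfg.APIVersion = "kubelet.config.k8s.io/v1beta1"
    	cfg.Kind = "KubeletConfiguration"
    	// Containerd's default CRI endpoint; assumed rather than read from this host.
    	cfg.ContainerRuntimeEndpoint = "unix:///run/containerd/containerd.sock"
    	// FlexVolume plugin directory, matching the exec path seen later in this log.
    	cfg.VolumePluginDir = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"

    	out, err := yaml.Marshal(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Print(string(out)) // the stanza that replaces the deprecated flags
    }
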
Dec 12 17:38:29.329564 kubelet[2648]: I1212 17:38:29.329260 2648 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:38:29.334569 kubelet[2648]: I1212 17:38:29.334525 2648 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 17:38:29.334569 kubelet[2648]: I1212 17:38:29.334557 2648 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:38:29.334791 kubelet[2648]: I1212 17:38:29.334761 2648 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:38:29.335999 kubelet[2648]: I1212 17:38:29.335979 2648 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 17:38:29.339511 kubelet[2648]: I1212 17:38:29.339485 2648 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:38:29.344889 kubelet[2648]: I1212 17:38:29.344864 2648 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:38:29.347538 kubelet[2648]: I1212 17:38:29.347493 2648 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 17:38:29.347859 kubelet[2648]: I1212 17:38:29.347815 2648 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:38:29.348031 kubelet[2648]: I1212 17:38:29.347863 2648 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:38:29.348127 kubelet[2648]: I1212 17:38:29.348047 2648 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:38:29.348127 kubelet[2648]: I1212 17:38:29.348058 2648 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 17:38:29.348175 kubelet[2648]: I1212 17:38:29.348136 2648 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:38:29.348356 kubelet[2648]: I1212 
17:38:29.348342 2648 kubelet.go:480] "Attempting to sync node with API server" Dec 12 17:38:29.348386 kubelet[2648]: I1212 17:38:29.348357 2648 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:38:29.348406 kubelet[2648]: I1212 17:38:29.348388 2648 kubelet.go:386] "Adding apiserver pod source" Dec 12 17:38:29.348406 kubelet[2648]: I1212 17:38:29.348400 2648 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:38:29.349403 kubelet[2648]: I1212 17:38:29.349375 2648 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 17:38:29.350006 kubelet[2648]: I1212 17:38:29.349978 2648 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:38:29.352995 kubelet[2648]: I1212 17:38:29.352960 2648 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:38:29.352995 kubelet[2648]: I1212 17:38:29.353000 2648 server.go:1289] "Started kubelet" Dec 12 17:38:29.353344 kubelet[2648]: I1212 17:38:29.353132 2648 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:38:29.353344 kubelet[2648]: I1212 17:38:29.353181 2648 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:38:29.353455 kubelet[2648]: I1212 17:38:29.353431 2648 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:38:29.354260 kubelet[2648]: I1212 17:38:29.354242 2648 server.go:317] "Adding debug handlers to kubelet server" Dec 12 17:38:29.358211 kubelet[2648]: I1212 17:38:29.356776 2648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:38:29.363753 kubelet[2648]: I1212 17:38:29.362470 2648 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:38:29.366136 kubelet[2648]: I1212 17:38:29.358681 2648 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:38:29.366896 kubelet[2648]: I1212 17:38:29.366224 2648 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:38:29.366896 kubelet[2648]: I1212 17:38:29.366736 2648 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:38:29.368432 kubelet[2648]: E1212 17:38:29.368386 2648 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:38:29.371798 kubelet[2648]: E1212 17:38:29.371773 2648 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:38:29.373652 kubelet[2648]: I1212 17:38:29.373631 2648 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:38:29.373915 kubelet[2648]: I1212 17:38:29.373825 2648 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:38:29.374702 kubelet[2648]: I1212 17:38:29.374673 2648 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Dec 12 17:38:29.376956 kubelet[2648]: I1212 17:38:29.376609 2648 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:38:29.381835 kubelet[2648]: I1212 17:38:29.381802 2648 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 17:38:29.381914 kubelet[2648]: I1212 17:38:29.381906 2648 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 17:38:29.381951 kubelet[2648]: I1212 17:38:29.381935 2648 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 17:38:29.381951 kubelet[2648]: I1212 17:38:29.381948 2648 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 17:38:29.382009 kubelet[2648]: E1212 17:38:29.381993 2648 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:38:29.409479 kubelet[2648]: I1212 17:38:29.409455 2648 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:38:29.409601 kubelet[2648]: I1212 17:38:29.409588 2648 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:38:29.409676 kubelet[2648]: I1212 17:38:29.409668 2648 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:38:29.409889 kubelet[2648]: I1212 17:38:29.409862 2648 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 17:38:29.409977 kubelet[2648]: I1212 17:38:29.409950 2648 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 17:38:29.410028 kubelet[2648]: I1212 17:38:29.410020 2648 policy_none.go:49] "None policy: Start" Dec 12 17:38:29.410098 kubelet[2648]: I1212 17:38:29.410089 2648 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:38:29.410164 kubelet[2648]: I1212 17:38:29.410156 2648 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:38:29.410314 kubelet[2648]: I1212 17:38:29.410293 2648 state_mem.go:75] "Updated machine memory state" Dec 12 17:38:29.413900 kubelet[2648]: E1212 17:38:29.413879 2648 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:38:29.414557 kubelet[2648]: I1212 17:38:29.414541 2648 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:38:29.414823 kubelet[2648]: I1212 17:38:29.414790 2648 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:38:29.415146 kubelet[2648]: I1212 17:38:29.415128 2648 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:38:29.416865 kubelet[2648]: E1212 17:38:29.416707 2648 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:38:29.483692 kubelet[2648]: I1212 17:38:29.483660 2648 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:38:29.483925 kubelet[2648]: I1212 17:38:29.483723 2648 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:38:29.484000 kubelet[2648]: I1212 17:38:29.483726 2648 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:38:29.490263 kubelet[2648]: E1212 17:38:29.490234 2648 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 12 17:38:29.516632 kubelet[2648]: I1212 17:38:29.516597 2648 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:38:29.526044 kubelet[2648]: I1212 17:38:29.526001 2648 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 12 17:38:29.526138 kubelet[2648]: I1212 17:38:29.526107 2648 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:38:29.568508 kubelet[2648]: I1212 17:38:29.568460 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f36d543f213d7afaec043ff8dd2098bc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f36d543f213d7afaec043ff8dd2098bc\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:38:29.568617 kubelet[2648]: I1212 17:38:29.568528 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:38:29.568617 kubelet[2648]: I1212 17:38:29.568560 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:38:29.568617 kubelet[2648]: I1212 17:38:29.568603 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:38:29.568708 kubelet[2648]: I1212 17:38:29.568624 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:38:29.568708 kubelet[2648]: I1212 17:38:29.568642 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f36d543f213d7afaec043ff8dd2098bc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f36d543f213d7afaec043ff8dd2098bc\") " 
pod="kube-system/kube-apiserver-localhost" Dec 12 17:38:29.568708 kubelet[2648]: I1212 17:38:29.568686 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f36d543f213d7afaec043ff8dd2098bc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f36d543f213d7afaec043ff8dd2098bc\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:38:29.568766 kubelet[2648]: I1212 17:38:29.568725 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:38:29.568766 kubelet[2648]: I1212 17:38:29.568751 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:38:30.349366 kubelet[2648]: I1212 17:38:30.349309 2648 apiserver.go:52] "Watching apiserver" Dec 12 17:38:30.366544 kubelet[2648]: I1212 17:38:30.366476 2648 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:38:30.394191 kubelet[2648]: I1212 17:38:30.394152 2648 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:38:30.394790 kubelet[2648]: I1212 17:38:30.394727 2648 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:38:30.403759 kubelet[2648]: E1212 17:38:30.403704 2648 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 12 17:38:30.406326 kubelet[2648]: E1212 17:38:30.406296 2648 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 12 17:38:30.443385 kubelet[2648]: I1212 17:38:30.443269 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.44325343 podStartE2EDuration="1.44325343s" podCreationTimestamp="2025-12-12 17:38:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:38:30.440403911 +0000 UTC m=+1.145861110" watchObservedRunningTime="2025-12-12 17:38:30.44325343 +0000 UTC m=+1.148710629" Dec 12 17:38:30.454264 kubelet[2648]: I1212 17:38:30.454209 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.454191525 podStartE2EDuration="3.454191525s" podCreationTimestamp="2025-12-12 17:38:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:38:30.453761428 +0000 UTC m=+1.159218627" watchObservedRunningTime="2025-12-12 17:38:30.454191525 +0000 UTC m=+1.159648724" Dec 12 17:38:30.486211 kubelet[2648]: I1212 17:38:30.486128 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" 
podStartSLOduration=1.486111448 podStartE2EDuration="1.486111448s" podCreationTimestamp="2025-12-12 17:38:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:38:30.472255659 +0000 UTC m=+1.177712858" watchObservedRunningTime="2025-12-12 17:38:30.486111448 +0000 UTC m=+1.191568687" Dec 12 17:38:34.031975 kubelet[2648]: I1212 17:38:34.031936 2648 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 17:38:34.032752 kubelet[2648]: I1212 17:38:34.032491 2648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 17:38:34.032804 containerd[1496]: time="2025-12-12T17:38:34.032302860Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 17:38:35.025136 systemd[1]: Created slice kubepods-besteffort-pod936b9a5c_157c_4d85_9277_66afd1e59564.slice - libcontainer container kubepods-besteffort-pod936b9a5c_157c_4d85_9277_66afd1e59564.slice. Dec 12 17:38:35.102212 kubelet[2648]: I1212 17:38:35.102168 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/936b9a5c-157c-4d85-9277-66afd1e59564-xtables-lock\") pod \"kube-proxy-rx784\" (UID: \"936b9a5c-157c-4d85-9277-66afd1e59564\") " pod="kube-system/kube-proxy-rx784" Dec 12 17:38:35.102212 kubelet[2648]: I1212 17:38:35.102217 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/936b9a5c-157c-4d85-9277-66afd1e59564-kube-proxy\") pod \"kube-proxy-rx784\" (UID: \"936b9a5c-157c-4d85-9277-66afd1e59564\") " pod="kube-system/kube-proxy-rx784" Dec 12 17:38:35.102666 kubelet[2648]: I1212 17:38:35.102232 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/936b9a5c-157c-4d85-9277-66afd1e59564-lib-modules\") pod \"kube-proxy-rx784\" (UID: \"936b9a5c-157c-4d85-9277-66afd1e59564\") " pod="kube-system/kube-proxy-rx784" Dec 12 17:38:35.102666 kubelet[2648]: I1212 17:38:35.102293 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g87wh\" (UniqueName: \"kubernetes.io/projected/936b9a5c-157c-4d85-9277-66afd1e59564-kube-api-access-g87wh\") pod \"kube-proxy-rx784\" (UID: \"936b9a5c-157c-4d85-9277-66afd1e59564\") " pod="kube-system/kube-proxy-rx784" Dec 12 17:38:35.138897 systemd[1]: Created slice kubepods-besteffort-pod0a2a49d4_7324_4db8_91b5_3828c174df65.slice - libcontainer container kubepods-besteffort-pod0a2a49d4_7324_4db8_91b5_3828c174df65.slice. 
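
The podStartSLOduration values in the tracker entries above are plain timestamp subtraction: the watch-observed running time minus podCreationTimestamp. The pull timestamps stay at the zero value 0001-01-01, so image pulling contributed nothing. Checking against the log: kube-scheduler was created at 17:38:29 and observed running via the watch at 17:38:30.486111448, giving exactly 1.486111448s; kube-apiserver's 17:38:27 to 17:38:30.454191525 likewise gives 3.454191525s. Reproducing the scheduler figure:

    package main

    import (
    	"fmt"
    	"log"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, err := time.Parse(layout, "2025-12-12 17:38:29 +0000 UTC")
    	if err != nil {
    		log.Fatal(err)
    	}
    	running, err := time.Parse(layout, "2025-12-12 17:38:30.486111448 +0000 UTC")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(running.Sub(created)) // 1.486111448s, matching podStartSLOduration
    }
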
Dec 12 17:38:35.203562 kubelet[2648]: I1212 17:38:35.203480 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzv9q\" (UniqueName: \"kubernetes.io/projected/0a2a49d4-7324-4db8-91b5-3828c174df65-kube-api-access-nzv9q\") pod \"tigera-operator-7dcd859c48-xh7wk\" (UID: \"0a2a49d4-7324-4db8-91b5-3828c174df65\") " pod="tigera-operator/tigera-operator-7dcd859c48-xh7wk" Dec 12 17:38:35.203700 kubelet[2648]: I1212 17:38:35.203581 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0a2a49d4-7324-4db8-91b5-3828c174df65-var-lib-calico\") pod \"tigera-operator-7dcd859c48-xh7wk\" (UID: \"0a2a49d4-7324-4db8-91b5-3828c174df65\") " pod="tigera-operator/tigera-operator-7dcd859c48-xh7wk" Dec 12 17:38:35.336711 containerd[1496]: time="2025-12-12T17:38:35.336646443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rx784,Uid:936b9a5c-157c-4d85-9277-66afd1e59564,Namespace:kube-system,Attempt:0,}" Dec 12 17:38:35.353657 containerd[1496]: time="2025-12-12T17:38:35.353301466Z" level=info msg="connecting to shim f72ba6c6ccc2c85931fc86372fb5470e9ffea014f0cb6d495e99a7265eb641bd" address="unix:///run/containerd/s/d1f87b087261099c7dfe2f0bdcc7be36bffc7408e7c518d1548578276bc21cbf" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:38:35.389315 systemd[1]: Started cri-containerd-f72ba6c6ccc2c85931fc86372fb5470e9ffea014f0cb6d495e99a7265eb641bd.scope - libcontainer container f72ba6c6ccc2c85931fc86372fb5470e9ffea014f0cb6d495e99a7265eb641bd. Dec 12 17:38:35.412268 containerd[1496]: time="2025-12-12T17:38:35.412230166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rx784,Uid:936b9a5c-157c-4d85-9277-66afd1e59564,Namespace:kube-system,Attempt:0,} returns sandbox id \"f72ba6c6ccc2c85931fc86372fb5470e9ffea014f0cb6d495e99a7265eb641bd\"" Dec 12 17:38:35.417989 containerd[1496]: time="2025-12-12T17:38:35.417953115Z" level=info msg="CreateContainer within sandbox \"f72ba6c6ccc2c85931fc86372fb5470e9ffea014f0cb6d495e99a7265eb641bd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 17:38:35.429767 containerd[1496]: time="2025-12-12T17:38:35.429723973Z" level=info msg="Container 8a4013c41ed6b94f5117c30d8f39c69aaf4b54aaa90341d2c1f4004b936c3081: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:38:35.439849 containerd[1496]: time="2025-12-12T17:38:35.439719011Z" level=info msg="CreateContainer within sandbox \"f72ba6c6ccc2c85931fc86372fb5470e9ffea014f0cb6d495e99a7265eb641bd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8a4013c41ed6b94f5117c30d8f39c69aaf4b54aaa90341d2c1f4004b936c3081\"" Dec 12 17:38:35.440564 containerd[1496]: time="2025-12-12T17:38:35.440536633Z" level=info msg="StartContainer for \"8a4013c41ed6b94f5117c30d8f39c69aaf4b54aaa90341d2c1f4004b936c3081\"" Dec 12 17:38:35.442190 containerd[1496]: time="2025-12-12T17:38:35.442166154Z" level=info msg="connecting to shim 8a4013c41ed6b94f5117c30d8f39c69aaf4b54aaa90341d2c1f4004b936c3081" address="unix:///run/containerd/s/d1f87b087261099c7dfe2f0bdcc7be36bffc7408e7c518d1548578276bc21cbf" protocol=ttrpc version=3 Dec 12 17:38:35.444428 containerd[1496]: time="2025-12-12T17:38:35.444396031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xh7wk,Uid:0a2a49d4-7324-4db8-91b5-3828c174df65,Namespace:tigera-operator,Attempt:0,}" Dec 12 17:38:35.464406 containerd[1496]: 
time="2025-12-12T17:38:35.464328860Z" level=info msg="connecting to shim 8c4ee2d7e494c97d2ea49696b59bd102d431e716494ef99ad00a109ed2fac651" address="unix:///run/containerd/s/5be5cf9ef0e6195d13746cbae72545486f05642f58a11af961e669a24dedba3f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:38:35.469319 systemd[1]: Started cri-containerd-8a4013c41ed6b94f5117c30d8f39c69aaf4b54aaa90341d2c1f4004b936c3081.scope - libcontainer container 8a4013c41ed6b94f5117c30d8f39c69aaf4b54aaa90341d2c1f4004b936c3081. Dec 12 17:38:35.490257 systemd[1]: Started cri-containerd-8c4ee2d7e494c97d2ea49696b59bd102d431e716494ef99ad00a109ed2fac651.scope - libcontainer container 8c4ee2d7e494c97d2ea49696b59bd102d431e716494ef99ad00a109ed2fac651. Dec 12 17:38:35.528767 containerd[1496]: time="2025-12-12T17:38:35.528644107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xh7wk,Uid:0a2a49d4-7324-4db8-91b5-3828c174df65,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8c4ee2d7e494c97d2ea49696b59bd102d431e716494ef99ad00a109ed2fac651\"" Dec 12 17:38:35.532441 containerd[1496]: time="2025-12-12T17:38:35.532399332Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 17:38:35.545839 containerd[1496]: time="2025-12-12T17:38:35.545788071Z" level=info msg="StartContainer for \"8a4013c41ed6b94f5117c30d8f39c69aaf4b54aaa90341d2c1f4004b936c3081\" returns successfully" Dec 12 17:38:36.217200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3633485105.mount: Deactivated successfully. Dec 12 17:38:36.425894 kubelet[2648]: I1212 17:38:36.425830 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rx784" podStartSLOduration=1.4258102080000001 podStartE2EDuration="1.425810208s" podCreationTimestamp="2025-12-12 17:38:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:38:36.425639228 +0000 UTC m=+7.131096427" watchObservedRunningTime="2025-12-12 17:38:36.425810208 +0000 UTC m=+7.131267407" Dec 12 17:38:37.071147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2686532946.mount: Deactivated successfully. 
Dec 12 17:38:37.822303 containerd[1496]: time="2025-12-12T17:38:37.822249914Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:37.823438 containerd[1496]: time="2025-12-12T17:38:37.823389440Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Dec 12 17:38:37.824241 containerd[1496]: time="2025-12-12T17:38:37.824215772Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:37.826601 containerd[1496]: time="2025-12-12T17:38:37.826566152Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:37.827217 containerd[1496]: time="2025-12-12T17:38:37.827187021Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.294743843s" Dec 12 17:38:37.827246 containerd[1496]: time="2025-12-12T17:38:37.827216984Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 12 17:38:37.831984 containerd[1496]: time="2025-12-12T17:38:37.831953949Z" level=info msg="CreateContainer within sandbox \"8c4ee2d7e494c97d2ea49696b59bd102d431e716494ef99ad00a109ed2fac651\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 17:38:37.869373 containerd[1496]: time="2025-12-12T17:38:37.869320327Z" level=info msg="Container 542ff8ef2e2584b866b9f25eff0b3de207e5b007e6e033b960eef8ff2f8ca0fc: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:38:37.871212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1600895149.mount: Deactivated successfully. Dec 12 17:38:37.875379 containerd[1496]: time="2025-12-12T17:38:37.875338674Z" level=info msg="CreateContainer within sandbox \"8c4ee2d7e494c97d2ea49696b59bd102d431e716494ef99ad00a109ed2fac651\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"542ff8ef2e2584b866b9f25eff0b3de207e5b007e6e033b960eef8ff2f8ca0fc\"" Dec 12 17:38:37.875951 containerd[1496]: time="2025-12-12T17:38:37.875928699Z" level=info msg="StartContainer for \"542ff8ef2e2584b866b9f25eff0b3de207e5b007e6e033b960eef8ff2f8ca0fc\"" Dec 12 17:38:37.876988 containerd[1496]: time="2025-12-12T17:38:37.876960493Z" level=info msg="connecting to shim 542ff8ef2e2584b866b9f25eff0b3de207e5b007e6e033b960eef8ff2f8ca0fc" address="unix:///run/containerd/s/5be5cf9ef0e6195d13746cbae72545486f05642f58a11af961e669a24dedba3f" protocol=ttrpc version=3 Dec 12 17:38:37.915279 systemd[1]: Started cri-containerd-542ff8ef2e2584b866b9f25eff0b3de207e5b007e6e033b960eef8ff2f8ca0fc.scope - libcontainer container 542ff8ef2e2584b866b9f25eff0b3de207e5b007e6e033b960eef8ff2f8ca0fc. 
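
The operator pull above moved roughly 22 MB (bytes read=22152004) in about 2.29s, i.e. on the order of 9-10 MB/s. Outside the kubelet, the same pull can be reproduced with the containerd Go client against the k8s.io namespace these logs use; a sketch, with the socket path assumed (this uses the github.com/containerd/containerd v1 client module, which also talks to 2.x daemons; containerd 2.x additionally ships a v2 client package).

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	containerd "github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// The CRI plugin stores Kubernetes images under the k8s.io namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.7", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("pulled", img.Name())
    }
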
Dec 12 17:38:37.945214 containerd[1496]: time="2025-12-12T17:38:37.945174008Z" level=info msg="StartContainer for \"542ff8ef2e2584b866b9f25eff0b3de207e5b007e6e033b960eef8ff2f8ca0fc\" returns successfully" Dec 12 17:38:38.422580 kubelet[2648]: I1212 17:38:38.422517 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-xh7wk" podStartSLOduration=1.124150335 podStartE2EDuration="3.422501239s" podCreationTimestamp="2025-12-12 17:38:35 +0000 UTC" firstStartedPulling="2025-12-12 17:38:35.530837619 +0000 UTC m=+6.236294818" lastFinishedPulling="2025-12-12 17:38:37.829188523 +0000 UTC m=+8.534645722" observedRunningTime="2025-12-12 17:38:38.422404909 +0000 UTC m=+9.127862108" watchObservedRunningTime="2025-12-12 17:38:38.422501239 +0000 UTC m=+9.127958398" Dec 12 17:38:43.419190 sudo[1715]: pam_unix(sudo:session): session closed for user root Dec 12 17:38:43.422533 sshd[1714]: Connection closed by 10.0.0.1 port 38664 Dec 12 17:38:43.423131 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Dec 12 17:38:43.427942 systemd[1]: sshd@6-10.0.0.95:22-10.0.0.1:38664.service: Deactivated successfully. Dec 12 17:38:43.430918 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 17:38:43.431127 systemd[1]: session-7.scope: Consumed 5.672s CPU time, 215.2M memory peak. Dec 12 17:38:43.432980 systemd-logind[1479]: Session 7 logged out. Waiting for processes to exit. Dec 12 17:38:43.434052 systemd-logind[1479]: Removed session 7. Dec 12 17:38:45.517237 update_engine[1481]: I20251212 17:38:45.517089 1481 update_attempter.cc:509] Updating boot flags... Dec 12 17:38:50.730245 systemd[1]: Created slice kubepods-besteffort-pod2275f7fd_86e2_450d_9819_1040f59f846b.slice - libcontainer container kubepods-besteffort-pod2275f7fd_86e2_450d_9819_1040f59f846b.slice. Dec 12 17:38:50.902642 systemd[1]: Created slice kubepods-besteffort-pod7ad6bf3f_a18e_4244_b385_dec36ac486e9.slice - libcontainer container kubepods-besteffort-pod7ad6bf3f_a18e_4244_b385_dec36ac486e9.slice. 
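
The "Created slice" names here and earlier all follow one convention: kubepods-<qos>-pod<uid>.slice, with the dashes in the pod UID escaped to underscores so systemd does not treat them as slice-hierarchy separators. A small helper that reproduces the names seen in this log:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSlice builds the systemd slice name the kubelet uses for a pod's
    // cgroup: dashes in the UID become underscores.
    func podSlice(qosClass, podUID string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
    	fmt.Println(podSlice("besteffort", "2275f7fd-86e2-450d-9819-1040f59f846b"))
    	// kubepods-besteffort-pod2275f7fd_86e2_450d_9819_1040f59f846b.slice,
    	// matching the calico-typha slice created above
    }
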
Dec 12 17:38:50.903201 kubelet[2648]: I1212 17:38:50.902894 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2275f7fd-86e2-450d-9819-1040f59f846b-tigera-ca-bundle\") pod \"calico-typha-579f47f68f-fgzs2\" (UID: \"2275f7fd-86e2-450d-9819-1040f59f846b\") " pod="calico-system/calico-typha-579f47f68f-fgzs2" Dec 12 17:38:50.903201 kubelet[2648]: I1212 17:38:50.902933 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkj4g\" (UniqueName: \"kubernetes.io/projected/2275f7fd-86e2-450d-9819-1040f59f846b-kube-api-access-dkj4g\") pod \"calico-typha-579f47f68f-fgzs2\" (UID: \"2275f7fd-86e2-450d-9819-1040f59f846b\") " pod="calico-system/calico-typha-579f47f68f-fgzs2" Dec 12 17:38:50.903201 kubelet[2648]: I1212 17:38:50.902953 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2275f7fd-86e2-450d-9819-1040f59f846b-typha-certs\") pod \"calico-typha-579f47f68f-fgzs2\" (UID: \"2275f7fd-86e2-450d-9819-1040f59f846b\") " pod="calico-system/calico-typha-579f47f68f-fgzs2" Dec 12 17:38:51.003573 kubelet[2648]: I1212 17:38:51.003417 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7ad6bf3f-a18e-4244-b385-dec36ac486e9-flexvol-driver-host\") pod \"calico-node-h5d6s\" (UID: \"7ad6bf3f-a18e-4244-b385-dec36ac486e9\") " pod="calico-system/calico-node-h5d6s" Dec 12 17:38:51.003573 kubelet[2648]: I1212 17:38:51.003466 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7ad6bf3f-a18e-4244-b385-dec36ac486e9-policysync\") pod \"calico-node-h5d6s\" (UID: \"7ad6bf3f-a18e-4244-b385-dec36ac486e9\") " pod="calico-system/calico-node-h5d6s" Dec 12 17:38:51.003573 kubelet[2648]: I1212 17:38:51.003482 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7ad6bf3f-a18e-4244-b385-dec36ac486e9-var-run-calico\") pod \"calico-node-h5d6s\" (UID: \"7ad6bf3f-a18e-4244-b385-dec36ac486e9\") " pod="calico-system/calico-node-h5d6s" Dec 12 17:38:51.003573 kubelet[2648]: I1212 17:38:51.003509 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7ad6bf3f-a18e-4244-b385-dec36ac486e9-cni-bin-dir\") pod \"calico-node-h5d6s\" (UID: \"7ad6bf3f-a18e-4244-b385-dec36ac486e9\") " pod="calico-system/calico-node-h5d6s" Dec 12 17:38:51.003573 kubelet[2648]: I1212 17:38:51.003524 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7ad6bf3f-a18e-4244-b385-dec36ac486e9-node-certs\") pod \"calico-node-h5d6s\" (UID: \"7ad6bf3f-a18e-4244-b385-dec36ac486e9\") " pod="calico-system/calico-node-h5d6s" Dec 12 17:38:51.003809 kubelet[2648]: I1212 17:38:51.003541 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7ad6bf3f-a18e-4244-b385-dec36ac486e9-var-lib-calico\") pod \"calico-node-h5d6s\" (UID: \"7ad6bf3f-a18e-4244-b385-dec36ac486e9\") " pod="calico-system/calico-node-h5d6s" Dec 
12 17:38:51.003809 kubelet[2648]: I1212 17:38:51.003557 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7ad6bf3f-a18e-4244-b385-dec36ac486e9-cni-net-dir\") pod \"calico-node-h5d6s\" (UID: \"7ad6bf3f-a18e-4244-b385-dec36ac486e9\") " pod="calico-system/calico-node-h5d6s" Dec 12 17:38:51.003809 kubelet[2648]: I1212 17:38:51.003571 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ad6bf3f-a18e-4244-b385-dec36ac486e9-lib-modules\") pod \"calico-node-h5d6s\" (UID: \"7ad6bf3f-a18e-4244-b385-dec36ac486e9\") " pod="calico-system/calico-node-h5d6s" Dec 12 17:38:51.003809 kubelet[2648]: I1212 17:38:51.003585 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw9xg\" (UniqueName: \"kubernetes.io/projected/7ad6bf3f-a18e-4244-b385-dec36ac486e9-kube-api-access-tw9xg\") pod \"calico-node-h5d6s\" (UID: \"7ad6bf3f-a18e-4244-b385-dec36ac486e9\") " pod="calico-system/calico-node-h5d6s" Dec 12 17:38:51.003809 kubelet[2648]: I1212 17:38:51.003609 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ad6bf3f-a18e-4244-b385-dec36ac486e9-xtables-lock\") pod \"calico-node-h5d6s\" (UID: \"7ad6bf3f-a18e-4244-b385-dec36ac486e9\") " pod="calico-system/calico-node-h5d6s" Dec 12 17:38:51.003910 kubelet[2648]: I1212 17:38:51.003626 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7ad6bf3f-a18e-4244-b385-dec36ac486e9-cni-log-dir\") pod \"calico-node-h5d6s\" (UID: \"7ad6bf3f-a18e-4244-b385-dec36ac486e9\") " pod="calico-system/calico-node-h5d6s" Dec 12 17:38:51.003910 kubelet[2648]: I1212 17:38:51.003640 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ad6bf3f-a18e-4244-b385-dec36ac486e9-tigera-ca-bundle\") pod \"calico-node-h5d6s\" (UID: \"7ad6bf3f-a18e-4244-b385-dec36ac486e9\") " pod="calico-system/calico-node-h5d6s" Dec 12 17:38:51.035283 containerd[1496]: time="2025-12-12T17:38:51.035229940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-579f47f68f-fgzs2,Uid:2275f7fd-86e2-450d-9819-1040f59f846b,Namespace:calico-system,Attempt:0,}" Dec 12 17:38:51.089289 containerd[1496]: time="2025-12-12T17:38:51.088951844Z" level=info msg="connecting to shim 7ca7ce96be23d686827d0e5615686e2e2745472ca919a1b6b5ef65cc541d97ac" address="unix:///run/containerd/s/a52b58a4a95ae4eb41bfd209aa7b76df1ff942d27fbbb919472f8a7d82a59695" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:38:51.095363 kubelet[2648]: E1212 17:38:51.095313 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-94p94" podUID="b5dca236-19dc-432c-971f-14a30f71196b" Dec 12 17:38:51.116312 systemd[1]: Started cri-containerd-7ca7ce96be23d686827d0e5615686e2e2745472ca919a1b6b5ef65cc541d97ac.scope - libcontainer container 7ca7ce96be23d686827d0e5615686e2e2745472ca919a1b6b5ef65cc541d97ac. 
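
The NetworkPluginNotReady error for csi-node-driver-94p94 above is expected at this stage: calico-node has not yet written a CNI config, so the runtime reports the network as not ready and the kubelet skips syncing pods that need pod networking, while host-network pods such as calico-typha proceed. A trivial probe of the conventional CNI config directory; the path is the usual default, assumed rather than confirmed for this image.

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// /etc/cni/net.d is the conventional CNI config directory; assumed here.
    	entries, err := os.ReadDir("/etc/cni/net.d")
    	if err != nil || len(entries) == 0 {
    		fmt.Println("no CNI config yet: the runtime will keep reporting NetworkReady=false")
    		return
    	}
    	for _, e := range entries {
    		fmt.Println("found CNI config:", e.Name())
    	}
    }
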
Dec 12 17:38:51.134292 kubelet[2648]: E1212 17:38:51.134258 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 17:38:51.134292 kubelet[2648]: W1212 17:38:51.134281 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 17:38:51.134900 kubelet[2648]: E1212 17:38:51.134879 2648 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same three FlexVolume probe messages recur, with only the timestamps changing, through 17:38:51.334943; the repeats are collapsed here and the interleaved non-FlexVolume records are kept below]
Dec 12 17:38:51.163416 containerd[1496]: time="2025-12-12T17:38:51.163150694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-579f47f68f-fgzs2,Uid:2275f7fd-86e2-450d-9819-1040f59f846b,Namespace:calico-system,Attempt:0,} returns sandbox id \"7ca7ce96be23d686827d0e5615686e2e2745472ca919a1b6b5ef65cc541d97ac\""
Dec 12 17:38:51.175543 containerd[1496]: time="2025-12-12T17:38:51.175295350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Dec 12 17:38:51.207074 kubelet[2648]: I1212 17:38:51.206792 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b5dca236-19dc-432c-971f-14a30f71196b-registration-dir\") pod \"csi-node-driver-94p94\" (UID: \"b5dca236-19dc-432c-971f-14a30f71196b\") " pod="calico-system/csi-node-driver-94p94"
Dec 12 17:38:51.207637 containerd[1496]: time="2025-12-12T17:38:51.207588576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h5d6s,Uid:7ad6bf3f-a18e-4244-b385-dec36ac486e9,Namespace:calico-system,Attempt:0,}"
Dec 12 17:38:51.207878 kubelet[2648]: I1212 17:38:51.207851 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5dca236-19dc-432c-971f-14a30f71196b-kubelet-dir\") pod \"csi-node-driver-94p94\" (UID: \"b5dca236-19dc-432c-971f-14a30f71196b\") " pod="calico-system/csi-node-driver-94p94"
Dec 12 17:38:51.209589 kubelet[2648]: I1212 17:38:51.209539 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b5dca236-19dc-432c-971f-14a30f71196b-varrun\") pod \"csi-node-driver-94p94\" (UID: \"b5dca236-19dc-432c-971f-14a30f71196b\") " pod="calico-system/csi-node-driver-94p94"
Dec 12 17:38:51.211249 kubelet[2648]: I1212 17:38:51.211191 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwx5j\" (UniqueName: \"kubernetes.io/projected/b5dca236-19dc-432c-971f-14a30f71196b-kube-api-access-nwx5j\") pod \"csi-node-driver-94p94\" (UID: \"b5dca236-19dc-432c-971f-14a30f71196b\") " pod="calico-system/csi-node-driver-94p94"
Dec 12 17:38:51.212268 kubelet[2648]: I1212 17:38:51.212222 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b5dca236-19dc-432c-971f-14a30f71196b-socket-dir\") pod \"csi-node-driver-94p94\" (UID: \"b5dca236-19dc-432c-971f-14a30f71196b\") " pod="calico-system/csi-node-driver-94p94"
Dec 12 17:38:51.235464 containerd[1496]: time="2025-12-12T17:38:51.235328755Z" level=info msg="connecting to shim d3207cf4c48a10df2e873c3919a75daf1fa589c1c4bb4b40ceebe98fd4314e1b" address="unix:///run/containerd/s/1b07124d9f077595c43d9434964069fbd92e830601e5fcce1b74a5483fa72855" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:38:51.252230 systemd[1]: Started cri-containerd-d3207cf4c48a10df2e873c3919a75daf1fa589c1c4bb4b40ceebe98fd4314e1b.scope - libcontainer container d3207cf4c48a10df2e873c3919a75daf1fa589c1c4bb4b40ceebe98fd4314e1b.
Dec 12 17:38:51.302952 containerd[1496]: time="2025-12-12T17:38:51.302812282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h5d6s,Uid:7ad6bf3f-a18e-4244-b385-dec36ac486e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"d3207cf4c48a10df2e873c3919a75daf1fa589c1c4bb4b40ceebe98fd4314e1b\""
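The triplet above is the kubelet's FlexVolume probe failing: it execs the driver binary with the argument init and unmarshals the driver's stdout as a JSON status object, but the uds executable (the nodeagent~uds directory is seemingly left over from an Istio node-agent install) is absent, so the output is empty and json decoding fails with exactly the error logged. A hedged sketch in Go of the call convention the probe expects; the struct shape follows the documented FlexVolume status JSON, and this is illustrative, not the real nodeagent~uds driver:

    // Minimal FlexVolume driver sketch. The kubelet invokes the binary at
    // /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/<driver>
    // with a subcommand ("init", "mount", ...) and parses stdout as JSON.
    // An empty stdout is what produces "unexpected end of JSON input".
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
            return
        }
        // Everything this sketch does not implement is reported as unsupported.
        out, _ := json.Marshal(driverStatus{Status: "Not supported"})
        fmt.Println(string(out))
        os.Exit(1)
    }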
Dec 12 17:38:52.074843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2489006744.mount: Deactivated successfully.
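The \x2d in the mount unit above is systemd's name escaping: a mount unit's name encodes its mount path, / separators become -, and bytes outside the safe set (including a literal -) are hex-escaped so the path can be recovered unambiguously. A simplified sketch of the rule; systemd-escape(1) additionally special-cases leading dots and the root path:

    package main

    import (
        "fmt"
        "strings"
    )

    // systemdEscapeMountPath approximates how systemd derives a mount unit
    // name from a path: strip the outer slashes, turn "/" into "-", and
    // hex-escape anything outside [a-zA-Z0-9:_.] as \xNN.
    func systemdEscapeMountPath(p string) string {
        p = strings.Trim(p, "/")
        var sb strings.Builder
        for i := 0; i < len(p); i++ {
            b := p[i]
            switch {
            case b == '/':
                sb.WriteByte('-') // path separator becomes "-"
            case b >= 'a' && b <= 'z', b >= 'A' && b <= 'Z',
                b >= '0' && b <= '9', b == ':', b == '_', b == '.':
                sb.WriteByte(b) // allowed verbatim
            default:
                fmt.Fprintf(&sb, `\x%02x`, b) // incl. literal "-" -> \x2d
            }
        }
        return sb.String() + ".mount"
    }

    func main() {
        // Prints: var-lib-containerd-tmpmounts-containerd\x2dmount2489006744.mount
        fmt.Println(systemdEscapeMountPath("/var/lib/containerd/tmpmounts/containerd-mount2489006744"))
    }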
Dec 12 17:38:52.531345 containerd[1496]: time="2025-12-12T17:38:52.531186951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:38:52.534081 containerd[1496]: time="2025-12-12T17:38:52.533878410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Dec 12 17:38:52.534918 containerd[1496]: time="2025-12-12T17:38:52.534863461Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:38:52.538358 containerd[1496]: time="2025-12-12T17:38:52.538308239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:38:52.538861 containerd[1496]: time="2025-12-12T17:38:52.538823065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.363484713s"
Dec 12 17:38:52.538861 containerd[1496]: time="2025-12-12T17:38:52.538858787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Dec 12 17:38:52.542373 containerd[1496]: time="2025-12-12T17:38:52.542316926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Dec 12 17:38:52.559020 containerd[1496]: time="2025-12-12T17:38:52.558956545Z" level=info msg="CreateContainer within sandbox \"7ca7ce96be23d686827d0e5615686e2e2745472ca919a1b6b5ef65cc541d97ac\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 12 17:38:52.573085 containerd[1496]: time="2025-12-12T17:38:52.573021391Z" level=info msg="Container 32f83058b5183bc01962f5b314950b2e9b27ec97711473da1f50e887a816d4d1: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:38:52.579297 containerd[1496]: time="2025-12-12T17:38:52.579247832Z" level=info msg="CreateContainer within sandbox \"7ca7ce96be23d686827d0e5615686e2e2745472ca919a1b6b5ef65cc541d97ac\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"32f83058b5183bc01962f5b314950b2e9b27ec97711473da1f50e887a816d4d1\""
Dec 12 17:38:52.580797 containerd[1496]: time="2025-12-12T17:38:52.580766551Z" level=info msg="StartContainer for \"32f83058b5183bc01962f5b314950b2e9b27ec97711473da1f50e887a816d4d1\""
Dec 12 17:38:52.591776 containerd[1496]: time="2025-12-12T17:38:52.591736437Z" level=info msg="connecting to shim 32f83058b5183bc01962f5b314950b2e9b27ec97711473da1f50e887a816d4d1" address="unix:///run/containerd/s/a52b58a4a95ae4eb41bfd209aa7b76df1ff942d27fbbb919472f8a7d82a59695" protocol=ttrpc version=3
Dec 12 17:38:52.617271 systemd[1]: Started cri-containerd-32f83058b5183bc01962f5b314950b2e9b27ec97711473da1f50e887a816d4d1.scope - libcontainer container 32f83058b5183bc01962f5b314950b2e9b27ec97711473da1f50e887a816d4d1.
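The "stop pulling" and "Pulled image" records above give both the byte count and the wall-clock pull time for the typha image, so the effective pull throughput can be read straight off the log. A small sketch of the arithmetic, using only values from those two records:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const bytesRead = 33090687                 // "stop pulling image ...: bytes read=33090687"
        d, _ := time.ParseDuration("1.363484713s") // "... in 1.363484713s" from the Pulled record
        bps := float64(bytesRead) / d.Seconds()
        fmt.Printf("%.1f MiB/s\n", bps/(1<<20))    // prints 23.1 MiB/s for this pull
    }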
Dec 12 17:38:52.654290 containerd[1496]: time="2025-12-12T17:38:52.654231624Z" level=info msg="StartContainer for \"32f83058b5183bc01962f5b314950b2e9b27ec97711473da1f50e887a816d4d1\" returns successfully"
Dec 12 17:38:53.383367 kubelet[2648]: E1212 17:38:53.383321 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-94p94" podUID="b5dca236-19dc-432c-971f-14a30f71196b"
Dec 12 17:38:53.439030 containerd[1496]: time="2025-12-12T17:38:53.438983512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:38:53.439665 containerd[1496]: time="2025-12-12T17:38:53.439471856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Dec 12 17:38:53.440429 containerd[1496]: time="2025-12-12T17:38:53.440391461Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:38:53.442400 containerd[1496]: time="2025-12-12T17:38:53.442360278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:38:53.443034 containerd[1496]: time="2025-12-12T17:38:53.443006590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 900.633382ms"
Dec 12 17:38:53.443141 containerd[1496]: time="2025-12-12T17:38:53.443124516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Dec 12 17:38:53.451291 containerd[1496]: time="2025-12-12T17:38:53.451253557Z" level=info msg="CreateContainer within sandbox \"d3207cf4c48a10df2e873c3919a75daf1fa589c1c4bb4b40ceebe98fd4314e1b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Dec 12 17:38:53.463869 containerd[1496]: time="2025-12-12T17:38:53.463806697Z" level=info msg="Container 6c953cb39ddefc8b061fe31613c986ca24bf36475a642e99df02b16c155ee566: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:38:53.476353 containerd[1496]: time="2025-12-12T17:38:53.476291273Z" level=info msg="CreateContainer within sandbox \"d3207cf4c48a10df2e873c3919a75daf1fa589c1c4bb4b40ceebe98fd4314e1b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6c953cb39ddefc8b061fe31613c986ca24bf36475a642e99df02b16c155ee566\""
Dec 12 17:38:53.476986 containerd[1496]: time="2025-12-12T17:38:53.476955626Z" level=info msg="StartContainer for \"6c953cb39ddefc8b061fe31613c986ca24bf36475a642e99df02b16c155ee566\""
Dec 12 17:38:53.480281 containerd[1496]: time="2025-12-12T17:38:53.480228868Z" level=info msg="connecting to shim 6c953cb39ddefc8b061fe31613c986ca24bf36475a642e99df02b16c155ee566" address="unix:///run/containerd/s/1b07124d9f077595c43d9434964069fbd92e830601e5fcce1b74a5483fa72855" protocol=ttrpc version=3
Dec 12 17:38:53.483830 kubelet[2648]: I1212 17:38:53.483769 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-579f47f68f-fgzs2" podStartSLOduration=2.112990542 podStartE2EDuration="3.483737441s" podCreationTimestamp="2025-12-12 17:38:50 +0000 UTC" firstStartedPulling="2025-12-12 17:38:51.170536533 +0000 UTC m=+21.875993732" lastFinishedPulling="2025-12-12 17:38:52.541283432 +0000 UTC m=+23.246740631" observedRunningTime="2025-12-12 17:38:53.481275839 +0000 UTC m=+24.186733078" watchObservedRunningTime="2025-12-12 17:38:53.483737441 +0000 UTC m=+24.189194640"
Dec 12 17:38:53.506300 systemd[1]: Started cri-containerd-6c953cb39ddefc8b061fe31613c986ca24bf36475a642e99df02b16c155ee566.scope - libcontainer container 6c953cb39ddefc8b061fe31613c986ca24bf36475a642e99df02b16c155ee566.
Dec 12 17:38:53.524584 kubelet[2648]: E1212 17:38:53.524526 2648 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 17:38:53.524584 kubelet[2648]: W1212 17:38:53.524566 2648 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 17:38:53.524584 kubelet[2648]: E1212 17:38:53.524589 2648 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same three FlexVolume probe messages then repeat, with only the timestamps changing, through 17:38:53.534816; the repeats are omitted]
Dec 12 17:38:53.581249 containerd[1496]: time="2025-12-12T17:38:53.581143290Z" level=info msg="StartContainer for \"6c953cb39ddefc8b061fe31613c986ca24bf36475a642e99df02b16c155ee566\" returns successfully"
Dec 12 17:38:53.591945 systemd[1]: cri-containerd-6c953cb39ddefc8b061fe31613c986ca24bf36475a642e99df02b16c155ee566.scope: Deactivated successfully.
Dec 12 17:38:53.641910 containerd[1496]: time="2025-12-12T17:38:53.641769563Z" level=info msg="received container exit event container_id:\"6c953cb39ddefc8b061fe31613c986ca24bf36475a642e99df02b16c155ee566\" id:\"6c953cb39ddefc8b061fe31613c986ca24bf36475a642e99df02b16c155ee566\" pid:3301 exited_at:{seconds:1765561133 nanos:618753907}"
Dec 12 17:38:53.677453 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c953cb39ddefc8b061fe31613c986ca24bf36475a642e99df02b16c155ee566-rootfs.mount: Deactivated successfully.
Dec 12 17:38:54.472099 kubelet[2648]: I1212 17:38:54.472054 2648 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 12 17:38:54.474300 containerd[1496]: time="2025-12-12T17:38:54.473199369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Dec 12 17:38:55.383331 kubelet[2648]: E1212 17:38:55.383227 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-94p94" podUID="b5dca236-19dc-432c-971f-14a30f71196b"
Dec 12 17:38:57.388905 kubelet[2648]: E1212 17:38:57.388846 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-94p94" podUID="b5dca236-19dc-432c-971f-14a30f71196b"
Dec 12 17:38:57.988499 containerd[1496]: time="2025-12-12T17:38:57.988452845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:38:57.989215 containerd[1496]: time="2025-12-12T17:38:57.989048390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816"
Dec 12 17:38:57.989974 containerd[1496]: time="2025-12-12T17:38:57.989928906Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:38:57.992402 containerd[1496]: time="2025-12-12T17:38:57.992131638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:38:57.992796 containerd[1496]: time="2025-12-12T17:38:57.992771305Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.519523734s"
Dec 12 17:38:57.992892 containerd[1496]: time="2025-12-12T17:38:57.992875349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\""
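The exit event above carries its timestamp as a protobuf pair (seconds plus nanos) rather than formatted time. Converting it back, as in this sketch, lands at 17:38:53.618753907Z, which sits plausibly between the scope deactivation logged at .591945 and the event's receipt at .641910:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // exited_at:{seconds:1765561133 nanos:618753907} from the exit event above
        t := time.Unix(1765561133, 618753907).UTC()
        fmt.Println(t.Format(time.RFC3339Nano)) // 2025-12-12T17:38:53.618753907Z
    }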
\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 12 17:38:57.997492 containerd[1496]: time="2025-12-12T17:38:57.997357336Z" level=info msg="CreateContainer within sandbox \"d3207cf4c48a10df2e873c3919a75daf1fa589c1c4bb4b40ceebe98fd4314e1b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 17:38:58.004335 containerd[1496]: time="2025-12-12T17:38:58.004292699Z" level=info msg="Container 960ee1f7eadcd59619b0b53e3934b11ab3960d65b48f0a073fe5c0819a8b0dfa: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:38:58.016654 containerd[1496]: time="2025-12-12T17:38:58.016600232Z" level=info msg="CreateContainer within sandbox \"d3207cf4c48a10df2e873c3919a75daf1fa589c1c4bb4b40ceebe98fd4314e1b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"960ee1f7eadcd59619b0b53e3934b11ab3960d65b48f0a073fe5c0819a8b0dfa\"" Dec 12 17:38:58.017229 containerd[1496]: time="2025-12-12T17:38:58.017201936Z" level=info msg="StartContainer for \"960ee1f7eadcd59619b0b53e3934b11ab3960d65b48f0a073fe5c0819a8b0dfa\"" Dec 12 17:38:58.019387 containerd[1496]: time="2025-12-12T17:38:58.019359742Z" level=info msg="connecting to shim 960ee1f7eadcd59619b0b53e3934b11ab3960d65b48f0a073fe5c0819a8b0dfa" address="unix:///run/containerd/s/1b07124d9f077595c43d9434964069fbd92e830601e5fcce1b74a5483fa72855" protocol=ttrpc version=3 Dec 12 17:38:58.064288 systemd[1]: Started cri-containerd-960ee1f7eadcd59619b0b53e3934b11ab3960d65b48f0a073fe5c0819a8b0dfa.scope - libcontainer container 960ee1f7eadcd59619b0b53e3934b11ab3960d65b48f0a073fe5c0819a8b0dfa. Dec 12 17:38:58.250287 containerd[1496]: time="2025-12-12T17:38:58.250170176Z" level=info msg="StartContainer for \"960ee1f7eadcd59619b0b53e3934b11ab3960d65b48f0a073fe5c0819a8b0dfa\" returns successfully" Dec 12 17:38:58.706413 systemd[1]: cri-containerd-960ee1f7eadcd59619b0b53e3934b11ab3960d65b48f0a073fe5c0819a8b0dfa.scope: Deactivated successfully. Dec 12 17:38:58.706774 systemd[1]: cri-containerd-960ee1f7eadcd59619b0b53e3934b11ab3960d65b48f0a073fe5c0819a8b0dfa.scope: Consumed 465ms CPU time, 179.7M memory peak, 4K read from disk, 165.9M written to disk. Dec 12 17:38:58.711726 containerd[1496]: time="2025-12-12T17:38:58.707485872Z" level=info msg="received container exit event container_id:\"960ee1f7eadcd59619b0b53e3934b11ab3960d65b48f0a073fe5c0819a8b0dfa\" id:\"960ee1f7eadcd59619b0b53e3934b11ab3960d65b48f0a073fe5c0819a8b0dfa\" pid:3397 exited_at:{seconds:1765561138 nanos:707281584}" Dec 12 17:38:58.731210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-960ee1f7eadcd59619b0b53e3934b11ab3960d65b48f0a073fe5c0819a8b0dfa-rootfs.mount: Deactivated successfully. Dec 12 17:38:58.747729 kubelet[2648]: I1212 17:38:58.747672 2648 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 17:38:58.830183 systemd[1]: Created slice kubepods-burstable-poded7aeb60_0078_4fe4_a5c3_76fa0aebc9f4.slice - libcontainer container kubepods-burstable-poded7aeb60_0078_4fe4_a5c3_76fa0aebc9f4.slice. Dec 12 17:38:58.843660 systemd[1]: Created slice kubepods-burstable-podb84a83ed_bde3_4881_9529_0aa87d97db76.slice - libcontainer container kubepods-burstable-podb84a83ed_bde3_4881_9529_0aa87d97db76.slice. Dec 12 17:38:58.858660 systemd[1]: Created slice kubepods-besteffort-pod606aaa1a_2e48_43ef_9bd5_c2c13f9aa6a2.slice - libcontainer container kubepods-besteffort-pod606aaa1a_2e48_43ef_9bd5_c2c13f9aa6a2.slice. 
Dec 12 17:38:58.865690 systemd[1]: Created slice kubepods-besteffort-pod29fa9e34_8cce_4225_b8c3_b537c398886e.slice - libcontainer container kubepods-besteffort-pod29fa9e34_8cce_4225_b8c3_b537c398886e.slice. Dec 12 17:38:58.873000 systemd[1]: Created slice kubepods-besteffort-podf1bf29b9_0dac_4c5d_b6f7_02c56a288948.slice - libcontainer container kubepods-besteffort-podf1bf29b9_0dac_4c5d_b6f7_02c56a288948.slice. Dec 12 17:38:58.879416 systemd[1]: Created slice kubepods-besteffort-podd4983885_2ae5_436d_9a3a_ccebc1e24705.slice - libcontainer container kubepods-besteffort-podd4983885_2ae5_436d_9a3a_ccebc1e24705.slice. Dec 12 17:38:58.879707 kubelet[2648]: I1212 17:38:58.879646 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d4983885-2ae5-436d-9a3a-ccebc1e24705-calico-apiserver-certs\") pod \"calico-apiserver-776c8864b9-qgx9p\" (UID: \"d4983885-2ae5-436d-9a3a-ccebc1e24705\") " pod="calico-apiserver/calico-apiserver-776c8864b9-qgx9p" Dec 12 17:38:58.879707 kubelet[2648]: I1212 17:38:58.879682 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc7t8\" (UniqueName: \"kubernetes.io/projected/d4983885-2ae5-436d-9a3a-ccebc1e24705-kube-api-access-nc7t8\") pod \"calico-apiserver-776c8864b9-qgx9p\" (UID: \"d4983885-2ae5-436d-9a3a-ccebc1e24705\") " pod="calico-apiserver/calico-apiserver-776c8864b9-qgx9p" Dec 12 17:38:58.879707 kubelet[2648]: I1212 17:38:58.879699 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcjg4\" (UniqueName: \"kubernetes.io/projected/68d76035-3b6d-409d-b705-88ad3c12dd12-kube-api-access-xcjg4\") pod \"goldmane-666569f655-p5kw2\" (UID: \"68d76035-3b6d-409d-b705-88ad3c12dd12\") " pod="calico-system/goldmane-666569f655-p5kw2" Dec 12 17:38:58.879798 kubelet[2648]: I1212 17:38:58.879747 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6xgh\" (UniqueName: \"kubernetes.io/projected/606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2-kube-api-access-k6xgh\") pod \"calico-kube-controllers-6b4f66ff58-m8mvt\" (UID: \"606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2\") " pod="calico-system/calico-kube-controllers-6b4f66ff58-m8mvt" Dec 12 17:38:58.879798 kubelet[2648]: I1212 17:38:58.879792 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6shff\" (UniqueName: \"kubernetes.io/projected/29fa9e34-8cce-4225-b8c3-b537c398886e-kube-api-access-6shff\") pod \"calico-apiserver-776c8864b9-2npcx\" (UID: \"29fa9e34-8cce-4225-b8c3-b537c398886e\") " pod="calico-apiserver/calico-apiserver-776c8864b9-2npcx" Dec 12 17:38:58.879841 kubelet[2648]: I1212 17:38:58.879832 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68d76035-3b6d-409d-b705-88ad3c12dd12-config\") pod \"goldmane-666569f655-p5kw2\" (UID: \"68d76035-3b6d-409d-b705-88ad3c12dd12\") " pod="calico-system/goldmane-666569f655-p5kw2" Dec 12 17:38:58.879868 kubelet[2648]: I1212 17:38:58.879851 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/68d76035-3b6d-409d-b705-88ad3c12dd12-goldmane-key-pair\") pod \"goldmane-666569f655-p5kw2\" (UID: \"68d76035-3b6d-409d-b705-88ad3c12dd12\") " 
pod="calico-system/goldmane-666569f655-p5kw2" Dec 12 17:38:58.879897 kubelet[2648]: I1212 17:38:58.879871 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6xfz\" (UniqueName: \"kubernetes.io/projected/f1bf29b9-0dac-4c5d-b6f7-02c56a288948-kube-api-access-x6xfz\") pod \"whisker-658f57547c-2mkhz\" (UID: \"f1bf29b9-0dac-4c5d-b6f7-02c56a288948\") " pod="calico-system/whisker-658f57547c-2mkhz" Dec 12 17:38:58.879922 kubelet[2648]: I1212 17:38:58.879903 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed7aeb60-0078-4fe4-a5c3-76fa0aebc9f4-config-volume\") pod \"coredns-674b8bbfcf-jkf4h\" (UID: \"ed7aeb60-0078-4fe4-a5c3-76fa0aebc9f4\") " pod="kube-system/coredns-674b8bbfcf-jkf4h" Dec 12 17:38:58.879942 kubelet[2648]: I1212 17:38:58.879924 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1bf29b9-0dac-4c5d-b6f7-02c56a288948-whisker-backend-key-pair\") pod \"whisker-658f57547c-2mkhz\" (UID: \"f1bf29b9-0dac-4c5d-b6f7-02c56a288948\") " pod="calico-system/whisker-658f57547c-2mkhz" Dec 12 17:38:58.879942 kubelet[2648]: I1212 17:38:58.879939 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1bf29b9-0dac-4c5d-b6f7-02c56a288948-whisker-ca-bundle\") pod \"whisker-658f57547c-2mkhz\" (UID: \"f1bf29b9-0dac-4c5d-b6f7-02c56a288948\") " pod="calico-system/whisker-658f57547c-2mkhz" Dec 12 17:38:58.879986 kubelet[2648]: I1212 17:38:58.879953 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhfpg\" (UniqueName: \"kubernetes.io/projected/ed7aeb60-0078-4fe4-a5c3-76fa0aebc9f4-kube-api-access-lhfpg\") pod \"coredns-674b8bbfcf-jkf4h\" (UID: \"ed7aeb60-0078-4fe4-a5c3-76fa0aebc9f4\") " pod="kube-system/coredns-674b8bbfcf-jkf4h" Dec 12 17:38:58.879986 kubelet[2648]: I1212 17:38:58.879980 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68d76035-3b6d-409d-b705-88ad3c12dd12-goldmane-ca-bundle\") pod \"goldmane-666569f655-p5kw2\" (UID: \"68d76035-3b6d-409d-b705-88ad3c12dd12\") " pod="calico-system/goldmane-666569f655-p5kw2" Dec 12 17:38:58.880028 kubelet[2648]: I1212 17:38:58.879998 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/29fa9e34-8cce-4225-b8c3-b537c398886e-calico-apiserver-certs\") pod \"calico-apiserver-776c8864b9-2npcx\" (UID: \"29fa9e34-8cce-4225-b8c3-b537c398886e\") " pod="calico-apiserver/calico-apiserver-776c8864b9-2npcx" Dec 12 17:38:58.880028 kubelet[2648]: I1212 17:38:58.880024 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b84a83ed-bde3-4881-9529-0aa87d97db76-config-volume\") pod \"coredns-674b8bbfcf-pddzw\" (UID: \"b84a83ed-bde3-4881-9529-0aa87d97db76\") " pod="kube-system/coredns-674b8bbfcf-pddzw" Dec 12 17:38:58.880104 kubelet[2648]: I1212 17:38:58.880040 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66wwj\" (UniqueName: 
\"kubernetes.io/projected/b84a83ed-bde3-4881-9529-0aa87d97db76-kube-api-access-66wwj\") pod \"coredns-674b8bbfcf-pddzw\" (UID: \"b84a83ed-bde3-4881-9529-0aa87d97db76\") " pod="kube-system/coredns-674b8bbfcf-pddzw" Dec 12 17:38:58.880104 kubelet[2648]: I1212 17:38:58.880085 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2-tigera-ca-bundle\") pod \"calico-kube-controllers-6b4f66ff58-m8mvt\" (UID: \"606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2\") " pod="calico-system/calico-kube-controllers-6b4f66ff58-m8mvt" Dec 12 17:38:58.882982 systemd[1]: Created slice kubepods-besteffort-pod68d76035_3b6d_409d_b705_88ad3c12dd12.slice - libcontainer container kubepods-besteffort-pod68d76035_3b6d_409d_b705_88ad3c12dd12.slice. Dec 12 17:38:59.143303 containerd[1496]: time="2025-12-12T17:38:59.142858633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkf4h,Uid:ed7aeb60-0078-4fe4-a5c3-76fa0aebc9f4,Namespace:kube-system,Attempt:0,}" Dec 12 17:38:59.150575 containerd[1496]: time="2025-12-12T17:38:59.150148353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pddzw,Uid:b84a83ed-bde3-4881-9529-0aa87d97db76,Namespace:kube-system,Attempt:0,}" Dec 12 17:38:59.165103 containerd[1496]: time="2025-12-12T17:38:59.164777196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b4f66ff58-m8mvt,Uid:606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2,Namespace:calico-system,Attempt:0,}" Dec 12 17:38:59.172301 containerd[1496]: time="2025-12-12T17:38:59.172247124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776c8864b9-2npcx,Uid:29fa9e34-8cce-4225-b8c3-b537c398886e,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:38:59.182163 containerd[1496]: time="2025-12-12T17:38:59.182125144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658f57547c-2mkhz,Uid:f1bf29b9-0dac-4c5d-b6f7-02c56a288948,Namespace:calico-system,Attempt:0,}" Dec 12 17:38:59.184866 containerd[1496]: time="2025-12-12T17:38:59.184828408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776c8864b9-qgx9p,Uid:d4983885-2ae5-436d-9a3a-ccebc1e24705,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:38:59.187995 containerd[1496]: time="2025-12-12T17:38:59.187944047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p5kw2,Uid:68d76035-3b6d-409d-b705-88ad3c12dd12,Namespace:calico-system,Attempt:0,}" Dec 12 17:38:59.291988 containerd[1496]: time="2025-12-12T17:38:59.291783362Z" level=error msg="Failed to destroy network for sandbox \"b1a52c6c0b107a0e1ea5de073e8cd2a5f7f543acf5cb830288b30fa1669648a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.299260 containerd[1496]: time="2025-12-12T17:38:59.299206568Z" level=error msg="Failed to destroy network for sandbox \"5e4fba4acb8cccd28efb1cc767583b6a2fd07f6e76722ba331ebff874c0cdce3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.299542 containerd[1496]: time="2025-12-12T17:38:59.299306212Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-776c8864b9-2npcx,Uid:29fa9e34-8cce-4225-b8c3-b537c398886e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1a52c6c0b107a0e1ea5de073e8cd2a5f7f543acf5cb830288b30fa1669648a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.299979 kubelet[2648]: E1212 17:38:59.299938 2648 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1a52c6c0b107a0e1ea5de073e8cd2a5f7f543acf5cb830288b30fa1669648a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.300104 kubelet[2648]: E1212 17:38:59.300005 2648 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1a52c6c0b107a0e1ea5de073e8cd2a5f7f543acf5cb830288b30fa1669648a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-776c8864b9-2npcx" Dec 12 17:38:59.300104 kubelet[2648]: E1212 17:38:59.300026 2648 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1a52c6c0b107a0e1ea5de073e8cd2a5f7f543acf5cb830288b30fa1669648a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-776c8864b9-2npcx" Dec 12 17:38:59.301242 kubelet[2648]: E1212 17:38:59.300094 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-776c8864b9-2npcx_calico-apiserver(29fa9e34-8cce-4225-b8c3-b537c398886e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-776c8864b9-2npcx_calico-apiserver(29fa9e34-8cce-4225-b8c3-b537c398886e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1a52c6c0b107a0e1ea5de073e8cd2a5f7f543acf5cb830288b30fa1669648a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-776c8864b9-2npcx" podUID="29fa9e34-8cce-4225-b8c3-b537c398886e" Dec 12 17:38:59.303031 containerd[1496]: time="2025-12-12T17:38:59.302978913Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b4f66ff58-m8mvt,Uid:606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e4fba4acb8cccd28efb1cc767583b6a2fd07f6e76722ba331ebff874c0cdce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.303641 kubelet[2648]: E1212 17:38:59.303594 2648 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"5e4fba4acb8cccd28efb1cc767583b6a2fd07f6e76722ba331ebff874c0cdce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.304114 kubelet[2648]: E1212 17:38:59.303966 2648 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e4fba4acb8cccd28efb1cc767583b6a2fd07f6e76722ba331ebff874c0cdce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b4f66ff58-m8mvt" Dec 12 17:38:59.304114 kubelet[2648]: E1212 17:38:59.304003 2648 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e4fba4acb8cccd28efb1cc767583b6a2fd07f6e76722ba331ebff874c0cdce3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b4f66ff58-m8mvt" Dec 12 17:38:59.304114 kubelet[2648]: E1212 17:38:59.304074 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b4f66ff58-m8mvt_calico-system(606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b4f66ff58-m8mvt_calico-system(606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e4fba4acb8cccd28efb1cc767583b6a2fd07f6e76722ba331ebff874c0cdce3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b4f66ff58-m8mvt" podUID="606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2" Dec 12 17:38:59.317229 containerd[1496]: time="2025-12-12T17:38:59.317174659Z" level=error msg="Failed to destroy network for sandbox \"c800462a3fea9115e9a2bb48cad369aabfe53218bf5381b6982a2ed76cc9f94b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.318972 containerd[1496]: time="2025-12-12T17:38:59.318880525Z" level=error msg="Failed to destroy network for sandbox \"6219ed52d324e9d9caa9df749779035da1b3cdec7ec1917f19ce7637cbe9ff94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.320087 containerd[1496]: time="2025-12-12T17:38:59.320032129Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkf4h,Uid:ed7aeb60-0078-4fe4-a5c3-76fa0aebc9f4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c800462a3fea9115e9a2bb48cad369aabfe53218bf5381b6982a2ed76cc9f94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.321195 containerd[1496]: 
time="2025-12-12T17:38:59.321167773Z" level=error msg="Failed to destroy network for sandbox \"94b1c86c009245d67c12f48010b2cd7960836a62a71b320e47f7e75dcd3b2914\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.321315 kubelet[2648]: E1212 17:38:59.321201 2648 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c800462a3fea9115e9a2bb48cad369aabfe53218bf5381b6982a2ed76cc9f94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.321315 kubelet[2648]: E1212 17:38:59.321255 2648 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c800462a3fea9115e9a2bb48cad369aabfe53218bf5381b6982a2ed76cc9f94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jkf4h" Dec 12 17:38:59.321315 kubelet[2648]: E1212 17:38:59.321294 2648 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c800462a3fea9115e9a2bb48cad369aabfe53218bf5381b6982a2ed76cc9f94b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jkf4h" Dec 12 17:38:59.321417 kubelet[2648]: E1212 17:38:59.321344 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jkf4h_kube-system(ed7aeb60-0078-4fe4-a5c3-76fa0aebc9f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jkf4h_kube-system(ed7aeb60-0078-4fe4-a5c3-76fa0aebc9f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c800462a3fea9115e9a2bb48cad369aabfe53218bf5381b6982a2ed76cc9f94b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jkf4h" podUID="ed7aeb60-0078-4fe4-a5c3-76fa0aebc9f4" Dec 12 17:38:59.322059 containerd[1496]: time="2025-12-12T17:38:59.322029486Z" level=error msg="Failed to destroy network for sandbox \"e34ce8497df99932f72f425d5e108a4b2963e3eaaffbe7f22cce107ed3c922f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.322957 containerd[1496]: time="2025-12-12T17:38:59.322912040Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776c8864b9-qgx9p,Uid:d4983885-2ae5-436d-9a3a-ccebc1e24705,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6219ed52d324e9d9caa9df749779035da1b3cdec7ec1917f19ce7637cbe9ff94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Dec 12 17:38:59.323794 kubelet[2648]: E1212 17:38:59.323339 2648 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6219ed52d324e9d9caa9df749779035da1b3cdec7ec1917f19ce7637cbe9ff94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.323794 kubelet[2648]: E1212 17:38:59.323410 2648 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6219ed52d324e9d9caa9df749779035da1b3cdec7ec1917f19ce7637cbe9ff94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-776c8864b9-qgx9p" Dec 12 17:38:59.323794 kubelet[2648]: E1212 17:38:59.323428 2648 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6219ed52d324e9d9caa9df749779035da1b3cdec7ec1917f19ce7637cbe9ff94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-776c8864b9-qgx9p" Dec 12 17:38:59.323934 kubelet[2648]: E1212 17:38:59.323467 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-776c8864b9-qgx9p_calico-apiserver(d4983885-2ae5-436d-9a3a-ccebc1e24705)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-776c8864b9-qgx9p_calico-apiserver(d4983885-2ae5-436d-9a3a-ccebc1e24705)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6219ed52d324e9d9caa9df749779035da1b3cdec7ec1917f19ce7637cbe9ff94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-776c8864b9-qgx9p" podUID="d4983885-2ae5-436d-9a3a-ccebc1e24705" Dec 12 17:38:59.325410 containerd[1496]: time="2025-12-12T17:38:59.325258050Z" level=error msg="Failed to destroy network for sandbox \"3ca8d670f7e6061a687d81dfc63701cfa0196f24c2040db19eb389d576e1bf47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.325804 containerd[1496]: time="2025-12-12T17:38:59.325568542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pddzw,Uid:b84a83ed-bde3-4881-9529-0aa87d97db76,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"94b1c86c009245d67c12f48010b2cd7960836a62a71b320e47f7e75dcd3b2914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.325864 kubelet[2648]: E1212 17:38:59.325775 2648 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94b1c86c009245d67c12f48010b2cd7960836a62a71b320e47f7e75dcd3b2914\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.325864 kubelet[2648]: E1212 17:38:59.325816 2648 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94b1c86c009245d67c12f48010b2cd7960836a62a71b320e47f7e75dcd3b2914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pddzw" Dec 12 17:38:59.325864 kubelet[2648]: E1212 17:38:59.325833 2648 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94b1c86c009245d67c12f48010b2cd7960836a62a71b320e47f7e75dcd3b2914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pddzw" Dec 12 17:38:59.325942 kubelet[2648]: E1212 17:38:59.325869 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pddzw_kube-system(b84a83ed-bde3-4881-9529-0aa87d97db76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pddzw_kube-system(b84a83ed-bde3-4881-9529-0aa87d97db76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94b1c86c009245d67c12f48010b2cd7960836a62a71b320e47f7e75dcd3b2914\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pddzw" podUID="b84a83ed-bde3-4881-9529-0aa87d97db76" Dec 12 17:38:59.327534 containerd[1496]: time="2025-12-12T17:38:59.327484456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658f57547c-2mkhz,Uid:f1bf29b9-0dac-4c5d-b6f7-02c56a288948,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e34ce8497df99932f72f425d5e108a4b2963e3eaaffbe7f22cce107ed3c922f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.328005 kubelet[2648]: E1212 17:38:59.327748 2648 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e34ce8497df99932f72f425d5e108a4b2963e3eaaffbe7f22cce107ed3c922f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.328005 kubelet[2648]: E1212 17:38:59.327791 2648 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e34ce8497df99932f72f425d5e108a4b2963e3eaaffbe7f22cce107ed3c922f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-658f57547c-2mkhz" Dec 12 17:38:59.328005 kubelet[2648]: E1212 17:38:59.327808 2648 kuberuntime_manager.go:1252] "CreatePodSandbox 
for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e34ce8497df99932f72f425d5e108a4b2963e3eaaffbe7f22cce107ed3c922f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-658f57547c-2mkhz" Dec 12 17:38:59.328140 kubelet[2648]: E1212 17:38:59.327839 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-658f57547c-2mkhz_calico-system(f1bf29b9-0dac-4c5d-b6f7-02c56a288948)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-658f57547c-2mkhz_calico-system(f1bf29b9-0dac-4c5d-b6f7-02c56a288948)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e34ce8497df99932f72f425d5e108a4b2963e3eaaffbe7f22cce107ed3c922f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-658f57547c-2mkhz" podUID="f1bf29b9-0dac-4c5d-b6f7-02c56a288948" Dec 12 17:38:59.329014 containerd[1496]: time="2025-12-12T17:38:59.328251365Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p5kw2,Uid:68d76035-3b6d-409d-b705-88ad3c12dd12,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ca8d670f7e6061a687d81dfc63701cfa0196f24c2040db19eb389d576e1bf47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.329134 kubelet[2648]: E1212 17:38:59.328528 2648 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ca8d670f7e6061a687d81dfc63701cfa0196f24c2040db19eb389d576e1bf47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.329134 kubelet[2648]: E1212 17:38:59.328566 2648 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ca8d670f7e6061a687d81dfc63701cfa0196f24c2040db19eb389d576e1bf47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-p5kw2" Dec 12 17:38:59.329134 kubelet[2648]: E1212 17:38:59.328750 2648 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ca8d670f7e6061a687d81dfc63701cfa0196f24c2040db19eb389d576e1bf47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-p5kw2" Dec 12 17:38:59.329265 kubelet[2648]: E1212 17:38:59.328886 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-p5kw2_calico-system(68d76035-3b6d-409d-b705-88ad3c12dd12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-666569f655-p5kw2_calico-system(68d76035-3b6d-409d-b705-88ad3c12dd12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ca8d670f7e6061a687d81dfc63701cfa0196f24c2040db19eb389d576e1bf47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-p5kw2" podUID="68d76035-3b6d-409d-b705-88ad3c12dd12" Dec 12 17:38:59.389762 systemd[1]: Created slice kubepods-besteffort-podb5dca236_19dc_432c_971f_14a30f71196b.slice - libcontainer container kubepods-besteffort-podb5dca236_19dc_432c_971f_14a30f71196b.slice. Dec 12 17:38:59.392188 containerd[1496]: time="2025-12-12T17:38:59.392130423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-94p94,Uid:b5dca236-19dc-432c-971f-14a30f71196b,Namespace:calico-system,Attempt:0,}" Dec 12 17:38:59.448173 containerd[1496]: time="2025-12-12T17:38:59.448044894Z" level=error msg="Failed to destroy network for sandbox \"8c0dcfc9b691912065361a656bb027f68c235422059951042bd70d4129fc0643\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.450164 containerd[1496]: time="2025-12-12T17:38:59.450112774Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-94p94,Uid:b5dca236-19dc-432c-971f-14a30f71196b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c0dcfc9b691912065361a656bb027f68c235422059951042bd70d4129fc0643\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.450600 kubelet[2648]: E1212 17:38:59.450470 2648 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c0dcfc9b691912065361a656bb027f68c235422059951042bd70d4129fc0643\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:59.450600 kubelet[2648]: E1212 17:38:59.450533 2648 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c0dcfc9b691912065361a656bb027f68c235422059951042bd70d4129fc0643\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-94p94" Dec 12 17:38:59.450600 kubelet[2648]: E1212 17:38:59.450552 2648 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c0dcfc9b691912065361a656bb027f68c235422059951042bd70d4129fc0643\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-94p94" Dec 12 17:38:59.450728 kubelet[2648]: E1212 17:38:59.450596 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-94p94_calico-system(b5dca236-19dc-432c-971f-14a30f71196b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-94p94_calico-system(b5dca236-19dc-432c-971f-14a30f71196b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c0dcfc9b691912065361a656bb027f68c235422059951042bd70d4129fc0643\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-94p94" podUID="b5dca236-19dc-432c-971f-14a30f71196b" Dec 12 17:38:59.492619 containerd[1496]: time="2025-12-12T17:38:59.492573487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 17:39:00.005966 systemd[1]: run-netns-cni\x2db1dc3be2\x2d2331\x2d8fa8\x2ddc08\x2d27632e45a637.mount: Deactivated successfully. Dec 12 17:39:00.006085 systemd[1]: run-netns-cni\x2df315d965\x2d0e33\x2db9d8\x2db603\x2dfa6dcb77e0c3.mount: Deactivated successfully. Dec 12 17:39:02.497004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount231319426.mount: Deactivated successfully. Dec 12 17:39:02.788320 containerd[1496]: time="2025-12-12T17:39:02.788176770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:39:02.788876 containerd[1496]: time="2025-12-12T17:39:02.788848713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Dec 12 17:39:02.789795 containerd[1496]: time="2025-12-12T17:39:02.789764545Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:39:02.791945 containerd[1496]: time="2025-12-12T17:39:02.791888818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:39:02.793014 containerd[1496]: time="2025-12-12T17:39:02.792972655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.300355847s" Dec 12 17:39:02.793014 containerd[1496]: time="2025-12-12T17:39:02.793008896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 12 17:39:02.810092 containerd[1496]: time="2025-12-12T17:39:02.810027962Z" level=info msg="CreateContainer within sandbox \"d3207cf4c48a10df2e873c3919a75daf1fa589c1c4bb4b40ceebe98fd4314e1b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 17:39:02.836257 containerd[1496]: time="2025-12-12T17:39:02.836195623Z" level=info msg="Container b99f5b6bc1fef99cf374f37a5c3dfd5b70bfd73a8c258119ee068d1c3f85a2b8: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:39:02.854851 containerd[1496]: time="2025-12-12T17:39:02.854796183Z" level=info msg="CreateContainer within sandbox \"d3207cf4c48a10df2e873c3919a75daf1fa589c1c4bb4b40ceebe98fd4314e1b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"b99f5b6bc1fef99cf374f37a5c3dfd5b70bfd73a8c258119ee068d1c3f85a2b8\"" Dec 12 17:39:02.855779 containerd[1496]: time="2025-12-12T17:39:02.855748536Z" level=info msg="StartContainer for \"b99f5b6bc1fef99cf374f37a5c3dfd5b70bfd73a8c258119ee068d1c3f85a2b8\"" Dec 12 17:39:02.857438 containerd[1496]: time="2025-12-12T17:39:02.857412033Z" level=info msg="connecting to shim b99f5b6bc1fef99cf374f37a5c3dfd5b70bfd73a8c258119ee068d1c3f85a2b8" address="unix:///run/containerd/s/1b07124d9f077595c43d9434964069fbd92e830601e5fcce1b74a5483fa72855" protocol=ttrpc version=3 Dec 12 17:39:02.894293 systemd[1]: Started cri-containerd-b99f5b6bc1fef99cf374f37a5c3dfd5b70bfd73a8c258119ee068d1c3f85a2b8.scope - libcontainer container b99f5b6bc1fef99cf374f37a5c3dfd5b70bfd73a8c258119ee068d1c3f85a2b8. Dec 12 17:39:02.979056 containerd[1496]: time="2025-12-12T17:39:02.979016258Z" level=info msg="StartContainer for \"b99f5b6bc1fef99cf374f37a5c3dfd5b70bfd73a8c258119ee068d1c3f85a2b8\" returns successfully" Dec 12 17:39:03.099429 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 17:39:03.099548 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 12 17:39:03.309919 kubelet[2648]: I1212 17:39:03.309865 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1bf29b9-0dac-4c5d-b6f7-02c56a288948-whisker-backend-key-pair\") pod \"f1bf29b9-0dac-4c5d-b6f7-02c56a288948\" (UID: \"f1bf29b9-0dac-4c5d-b6f7-02c56a288948\") " Dec 12 17:39:03.309919 kubelet[2648]: I1212 17:39:03.309912 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1bf29b9-0dac-4c5d-b6f7-02c56a288948-whisker-ca-bundle\") pod \"f1bf29b9-0dac-4c5d-b6f7-02c56a288948\" (UID: \"f1bf29b9-0dac-4c5d-b6f7-02c56a288948\") " Dec 12 17:39:03.310336 kubelet[2648]: I1212 17:39:03.309946 2648 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6xfz\" (UniqueName: \"kubernetes.io/projected/f1bf29b9-0dac-4c5d-b6f7-02c56a288948-kube-api-access-x6xfz\") pod \"f1bf29b9-0dac-4c5d-b6f7-02c56a288948\" (UID: \"f1bf29b9-0dac-4c5d-b6f7-02c56a288948\") " Dec 12 17:39:03.317830 kubelet[2648]: I1212 17:39:03.317750 2648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1bf29b9-0dac-4c5d-b6f7-02c56a288948-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f1bf29b9-0dac-4c5d-b6f7-02c56a288948" (UID: "f1bf29b9-0dac-4c5d-b6f7-02c56a288948"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:39:03.320574 kubelet[2648]: I1212 17:39:03.320527 2648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1bf29b9-0dac-4c5d-b6f7-02c56a288948-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f1bf29b9-0dac-4c5d-b6f7-02c56a288948" (UID: "f1bf29b9-0dac-4c5d-b6f7-02c56a288948"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:39:03.320574 kubelet[2648]: I1212 17:39:03.320556 2648 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1bf29b9-0dac-4c5d-b6f7-02c56a288948-kube-api-access-x6xfz" (OuterVolumeSpecName: "kube-api-access-x6xfz") pod "f1bf29b9-0dac-4c5d-b6f7-02c56a288948" (UID: "f1bf29b9-0dac-4c5d-b6f7-02c56a288948"). InnerVolumeSpecName "kube-api-access-x6xfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:39:03.389328 systemd[1]: Removed slice kubepods-besteffort-podf1bf29b9_0dac_4c5d_b6f7_02c56a288948.slice - libcontainer container kubepods-besteffort-podf1bf29b9_0dac_4c5d_b6f7_02c56a288948.slice. Dec 12 17:39:03.410904 kubelet[2648]: I1212 17:39:03.410843 2648 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1bf29b9-0dac-4c5d-b6f7-02c56a288948-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 12 17:39:03.410904 kubelet[2648]: I1212 17:39:03.410891 2648 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x6xfz\" (UniqueName: \"kubernetes.io/projected/f1bf29b9-0dac-4c5d-b6f7-02c56a288948-kube-api-access-x6xfz\") on node \"localhost\" DevicePath \"\"" Dec 12 17:39:03.410904 kubelet[2648]: I1212 17:39:03.410905 2648 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1bf29b9-0dac-4c5d-b6f7-02c56a288948-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 12 17:39:03.498844 systemd[1]: var-lib-kubelet-pods-f1bf29b9\x2d0dac\x2d4c5d\x2db6f7\x2d02c56a288948-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx6xfz.mount: Deactivated successfully. Dec 12 17:39:03.498935 systemd[1]: var-lib-kubelet-pods-f1bf29b9\x2d0dac\x2d4c5d\x2db6f7\x2d02c56a288948-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 17:39:03.546853 kubelet[2648]: I1212 17:39:03.546041 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-h5d6s" podStartSLOduration=2.058275452 podStartE2EDuration="13.546022163s" podCreationTimestamp="2025-12-12 17:38:50 +0000 UTC" firstStartedPulling="2025-12-12 17:38:51.305850446 +0000 UTC m=+22.011307645" lastFinishedPulling="2025-12-12 17:39:02.793597157 +0000 UTC m=+33.499054356" observedRunningTime="2025-12-12 17:39:03.544841764 +0000 UTC m=+34.250298963" watchObservedRunningTime="2025-12-12 17:39:03.546022163 +0000 UTC m=+34.251479362" Dec 12 17:39:03.567752 systemd[1]: Created slice kubepods-besteffort-pod6cc7e8ab_0717_4a7a_9026_81374c9aefc3.slice - libcontainer container kubepods-besteffort-pod6cc7e8ab_0717_4a7a_9026_81374c9aefc3.slice. 
Dec 12 17:39:03.611980 kubelet[2648]: I1212 17:39:03.611934 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6cc7e8ab-0717-4a7a-9026-81374c9aefc3-whisker-backend-key-pair\") pod \"whisker-86d56bdcb5-fxbvd\" (UID: \"6cc7e8ab-0717-4a7a-9026-81374c9aefc3\") " pod="calico-system/whisker-86d56bdcb5-fxbvd" Dec 12 17:39:03.612130 kubelet[2648]: I1212 17:39:03.611995 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9xmc\" (UniqueName: \"kubernetes.io/projected/6cc7e8ab-0717-4a7a-9026-81374c9aefc3-kube-api-access-l9xmc\") pod \"whisker-86d56bdcb5-fxbvd\" (UID: \"6cc7e8ab-0717-4a7a-9026-81374c9aefc3\") " pod="calico-system/whisker-86d56bdcb5-fxbvd" Dec 12 17:39:03.612130 kubelet[2648]: I1212 17:39:03.612041 2648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cc7e8ab-0717-4a7a-9026-81374c9aefc3-whisker-ca-bundle\") pod \"whisker-86d56bdcb5-fxbvd\" (UID: \"6cc7e8ab-0717-4a7a-9026-81374c9aefc3\") " pod="calico-system/whisker-86d56bdcb5-fxbvd" Dec 12 17:39:03.871571 containerd[1496]: time="2025-12-12T17:39:03.871510339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86d56bdcb5-fxbvd,Uid:6cc7e8ab-0717-4a7a-9026-81374c9aefc3,Namespace:calico-system,Attempt:0,}" Dec 12 17:39:04.024310 systemd-networkd[1426]: calif70cfd34605: Link UP Dec 12 17:39:04.024938 systemd-networkd[1426]: calif70cfd34605: Gained carrier Dec 12 17:39:04.037486 containerd[1496]: 2025-12-12 17:39:03.896 [INFO][3782] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:39:04.037486 containerd[1496]: 2025-12-12 17:39:03.925 [INFO][3782] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--86d56bdcb5--fxbvd-eth0 whisker-86d56bdcb5- calico-system 6cc7e8ab-0717-4a7a-9026-81374c9aefc3 871 0 2025-12-12 17:39:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:86d56bdcb5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-86d56bdcb5-fxbvd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif70cfd34605 [] [] }} ContainerID="3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0" Namespace="calico-system" Pod="whisker-86d56bdcb5-fxbvd" WorkloadEndpoint="localhost-k8s-whisker--86d56bdcb5--fxbvd-" Dec 12 17:39:04.037486 containerd[1496]: 2025-12-12 17:39:03.925 [INFO][3782] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0" Namespace="calico-system" Pod="whisker-86d56bdcb5-fxbvd" WorkloadEndpoint="localhost-k8s-whisker--86d56bdcb5--fxbvd-eth0" Dec 12 17:39:04.037486 containerd[1496]: 2025-12-12 17:39:03.979 [INFO][3795] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0" HandleID="k8s-pod-network.3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0" Workload="localhost-k8s-whisker--86d56bdcb5--fxbvd-eth0" Dec 12 17:39:04.037733 containerd[1496]: 2025-12-12 17:39:03.979 [INFO][3795] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0" HandleID="k8s-pod-network.3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0" Workload="localhost-k8s-whisker--86d56bdcb5--fxbvd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400058a9d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-86d56bdcb5-fxbvd", "timestamp":"2025-12-12 17:39:03.979516448 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:39:04.037733 containerd[1496]: 2025-12-12 17:39:03.979 [INFO][3795] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:39:04.037733 containerd[1496]: 2025-12-12 17:39:03.979 [INFO][3795] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:39:04.037733 containerd[1496]: 2025-12-12 17:39:03.979 [INFO][3795] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:39:04.037733 containerd[1496]: 2025-12-12 17:39:03.990 [INFO][3795] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0" host="localhost" Dec 12 17:39:04.037733 containerd[1496]: 2025-12-12 17:39:03.995 [INFO][3795] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:39:04.037733 containerd[1496]: 2025-12-12 17:39:03.999 [INFO][3795] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:39:04.037733 containerd[1496]: 2025-12-12 17:39:04.001 [INFO][3795] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:39:04.037733 containerd[1496]: 2025-12-12 17:39:04.004 [INFO][3795] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:39:04.037733 containerd[1496]: 2025-12-12 17:39:04.004 [INFO][3795] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0" host="localhost" Dec 12 17:39:04.037925 containerd[1496]: 2025-12-12 17:39:04.005 [INFO][3795] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0 Dec 12 17:39:04.037925 containerd[1496]: 2025-12-12 17:39:04.009 [INFO][3795] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0" host="localhost" Dec 12 17:39:04.037925 containerd[1496]: 2025-12-12 17:39:04.014 [INFO][3795] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0" host="localhost" Dec 12 17:39:04.037925 containerd[1496]: 2025-12-12 17:39:04.014 [INFO][3795] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0" host="localhost" Dec 12 17:39:04.037925 containerd[1496]: 2025-12-12 17:39:04.014 [INFO][3795] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:39:04.037925 containerd[1496]: 2025-12-12 17:39:04.014 [INFO][3795] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0" HandleID="k8s-pod-network.3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0" Workload="localhost-k8s-whisker--86d56bdcb5--fxbvd-eth0" Dec 12 17:39:04.038037 containerd[1496]: 2025-12-12 17:39:04.017 [INFO][3782] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0" Namespace="calico-system" Pod="whisker-86d56bdcb5-fxbvd" WorkloadEndpoint="localhost-k8s-whisker--86d56bdcb5--fxbvd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--86d56bdcb5--fxbvd-eth0", GenerateName:"whisker-86d56bdcb5-", Namespace:"calico-system", SelfLink:"", UID:"6cc7e8ab-0717-4a7a-9026-81374c9aefc3", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 39, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86d56bdcb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-86d56bdcb5-fxbvd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif70cfd34605", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:39:04.038037 containerd[1496]: 2025-12-12 17:39:04.017 [INFO][3782] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0" Namespace="calico-system" Pod="whisker-86d56bdcb5-fxbvd" WorkloadEndpoint="localhost-k8s-whisker--86d56bdcb5--fxbvd-eth0" Dec 12 17:39:04.038128 containerd[1496]: 2025-12-12 17:39:04.017 [INFO][3782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif70cfd34605 ContainerID="3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0" Namespace="calico-system" Pod="whisker-86d56bdcb5-fxbvd" WorkloadEndpoint="localhost-k8s-whisker--86d56bdcb5--fxbvd-eth0" Dec 12 17:39:04.038128 containerd[1496]: 2025-12-12 17:39:04.025 [INFO][3782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0" Namespace="calico-system" Pod="whisker-86d56bdcb5-fxbvd" WorkloadEndpoint="localhost-k8s-whisker--86d56bdcb5--fxbvd-eth0" Dec 12 17:39:04.038168 containerd[1496]: 2025-12-12 17:39:04.025 [INFO][3782] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0" Namespace="calico-system" Pod="whisker-86d56bdcb5-fxbvd" WorkloadEndpoint="localhost-k8s-whisker--86d56bdcb5--fxbvd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--86d56bdcb5--fxbvd-eth0", GenerateName:"whisker-86d56bdcb5-", Namespace:"calico-system", SelfLink:"", UID:"6cc7e8ab-0717-4a7a-9026-81374c9aefc3", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 39, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86d56bdcb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0", Pod:"whisker-86d56bdcb5-fxbvd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif70cfd34605", MAC:"b6:aa:54:77:7c:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:39:04.038217 containerd[1496]: 2025-12-12 17:39:04.034 [INFO][3782] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0" Namespace="calico-system" Pod="whisker-86d56bdcb5-fxbvd" WorkloadEndpoint="localhost-k8s-whisker--86d56bdcb5--fxbvd-eth0" Dec 12 17:39:04.102114 containerd[1496]: time="2025-12-12T17:39:04.102051847Z" level=info msg="connecting to shim 3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0" address="unix:///run/containerd/s/99d950edd90c2ce561b97d6bbe7d1acc9b28646780df2fbd54c6463958638c3c" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:39:04.130257 systemd[1]: Started cri-containerd-3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0.scope - libcontainer container 3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0. 
Dec 12 17:39:04.141255 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:39:04.160927 containerd[1496]: time="2025-12-12T17:39:04.160869416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86d56bdcb5-fxbvd,Uid:6cc7e8ab-0717-4a7a-9026-81374c9aefc3,Namespace:calico-system,Attempt:0,} returns sandbox id \"3012bac0d44afd530d4041a45b39b16ab62075214f90bf3ab6237e900f5efaa0\"" Dec 12 17:39:04.163108 containerd[1496]: time="2025-12-12T17:39:04.162384865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:39:04.353033 containerd[1496]: time="2025-12-12T17:39:04.352989746Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:04.353945 containerd[1496]: time="2025-12-12T17:39:04.353912576Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:39:04.354015 containerd[1496]: time="2025-12-12T17:39:04.353982018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 17:39:04.356025 kubelet[2648]: E1212 17:39:04.355965 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:39:04.357437 kubelet[2648]: E1212 17:39:04.357404 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:39:04.360902 kubelet[2648]: E1212 17:39:04.360832 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d699d3c6b33c4ecc88ced3536dfbb1bf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l9xmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86d56bdcb5-fxbvd_calico-system(6cc7e8ab-0717-4a7a-9026-81374c9aefc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:04.363104 containerd[1496]: time="2025-12-12T17:39:04.363017148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:39:04.510180 kubelet[2648]: I1212 17:39:04.510053 2648 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:39:04.591329 containerd[1496]: time="2025-12-12T17:39:04.591270399Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:04.593362 containerd[1496]: time="2025-12-12T17:39:04.592987734Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:39:04.593482 containerd[1496]: time="2025-12-12T17:39:04.593086937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 17:39:04.593786 kubelet[2648]: E1212 17:39:04.593670 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:39:04.593837 kubelet[2648]: E1212 17:39:04.593796 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:39:04.593979 kubelet[2648]: E1212 17:39:04.593926 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9xmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86d56bdcb5-fxbvd_calico-system(6cc7e8ab-0717-4a7a-9026-81374c9aefc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:04.597100 kubelet[2648]: E1212 17:39:04.597030 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86d56bdcb5-fxbvd" podUID="6cc7e8ab-0717-4a7a-9026-81374c9aefc3" Dec 12 17:39:04.852896 kubelet[2648]: I1212 17:39:04.852850 2648 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Dec 12 17:39:05.385525 kubelet[2648]: I1212 17:39:05.385485 2648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1bf29b9-0dac-4c5d-b6f7-02c56a288948" path="/var/lib/kubelet/pods/f1bf29b9-0dac-4c5d-b6f7-02c56a288948/volumes" Dec 12 17:39:05.514608 kubelet[2648]: E1212 17:39:05.514551 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86d56bdcb5-fxbvd" podUID="6cc7e8ab-0717-4a7a-9026-81374c9aefc3" Dec 12 17:39:05.914178 systemd-networkd[1426]: calif70cfd34605: Gained IPv6LL Dec 12 17:39:06.085938 systemd-networkd[1426]: vxlan.calico: Link UP Dec 12 17:39:06.085950 systemd-networkd[1426]: vxlan.calico: Gained carrier Dec 12 17:39:07.962231 systemd-networkd[1426]: vxlan.calico: Gained IPv6LL Dec 12 17:39:09.798567 systemd[1]: Started sshd@7-10.0.0.95:22-10.0.0.1:46180.service - OpenSSH per-connection server daemon (10.0.0.1:46180). Dec 12 17:39:09.874231 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 46180 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:39:09.875787 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:39:09.881371 systemd-logind[1479]: New session 8 of user core. Dec 12 17:39:09.891298 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 17:39:10.082760 sshd[4107]: Connection closed by 10.0.0.1 port 46180 Dec 12 17:39:10.083351 sshd-session[4104]: pam_unix(sshd:session): session closed for user core Dec 12 17:39:10.087166 systemd[1]: sshd@7-10.0.0.95:22-10.0.0.1:46180.service: Deactivated successfully. Dec 12 17:39:10.091342 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 17:39:10.092475 systemd-logind[1479]: Session 8 logged out. Waiting for processes to exit. Dec 12 17:39:10.093692 systemd-logind[1479]: Removed session 8. 
Dec 12 17:39:10.383868 containerd[1496]: time="2025-12-12T17:39:10.383759214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkf4h,Uid:ed7aeb60-0078-4fe4-a5c3-76fa0aebc9f4,Namespace:kube-system,Attempt:0,}" Dec 12 17:39:10.384376 containerd[1496]: time="2025-12-12T17:39:10.383764814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-94p94,Uid:b5dca236-19dc-432c-971f-14a30f71196b,Namespace:calico-system,Attempt:0,}" Dec 12 17:39:10.631578 systemd-networkd[1426]: cali184cdab06f0: Link UP Dec 12 17:39:10.632621 systemd-networkd[1426]: cali184cdab06f0: Gained carrier Dec 12 17:39:10.648326 containerd[1496]: 2025-12-12 17:39:10.544 [INFO][4125] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--jkf4h-eth0 coredns-674b8bbfcf- kube-system ed7aeb60-0078-4fe4-a5c3-76fa0aebc9f4 805 0 2025-12-12 17:38:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-jkf4h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali184cdab06f0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkf4h" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jkf4h-" Dec 12 17:39:10.648326 containerd[1496]: 2025-12-12 17:39:10.544 [INFO][4125] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkf4h" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jkf4h-eth0" Dec 12 17:39:10.648326 containerd[1496]: 2025-12-12 17:39:10.584 [INFO][4155] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae" HandleID="k8s-pod-network.1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae" Workload="localhost-k8s-coredns--674b8bbfcf--jkf4h-eth0" Dec 12 17:39:10.648854 containerd[1496]: 2025-12-12 17:39:10.584 [INFO][4155] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae" HandleID="k8s-pod-network.1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae" Workload="localhost-k8s-coredns--674b8bbfcf--jkf4h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000593230), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-jkf4h", "timestamp":"2025-12-12 17:39:10.584734872 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:39:10.648854 containerd[1496]: 2025-12-12 17:39:10.584 [INFO][4155] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:39:10.648854 containerd[1496]: 2025-12-12 17:39:10.586 [INFO][4155] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:39:10.648854 containerd[1496]: 2025-12-12 17:39:10.586 [INFO][4155] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:39:10.648854 containerd[1496]: 2025-12-12 17:39:10.595 [INFO][4155] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae" host="localhost" Dec 12 17:39:10.648854 containerd[1496]: 2025-12-12 17:39:10.600 [INFO][4155] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:39:10.648854 containerd[1496]: 2025-12-12 17:39:10.604 [INFO][4155] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:39:10.648854 containerd[1496]: 2025-12-12 17:39:10.607 [INFO][4155] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:39:10.648854 containerd[1496]: 2025-12-12 17:39:10.610 [INFO][4155] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:39:10.648854 containerd[1496]: 2025-12-12 17:39:10.610 [INFO][4155] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae" host="localhost" Dec 12 17:39:10.649163 containerd[1496]: 2025-12-12 17:39:10.611 [INFO][4155] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae Dec 12 17:39:10.649163 containerd[1496]: 2025-12-12 17:39:10.618 [INFO][4155] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae" host="localhost" Dec 12 17:39:10.649163 containerd[1496]: 2025-12-12 17:39:10.623 [INFO][4155] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae" host="localhost" Dec 12 17:39:10.649163 containerd[1496]: 2025-12-12 17:39:10.623 [INFO][4155] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae" host="localhost" Dec 12 17:39:10.649163 containerd[1496]: 2025-12-12 17:39:10.623 [INFO][4155] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:39:10.649163 containerd[1496]: 2025-12-12 17:39:10.623 [INFO][4155] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae" HandleID="k8s-pod-network.1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae" Workload="localhost-k8s-coredns--674b8bbfcf--jkf4h-eth0" Dec 12 17:39:10.649278 containerd[1496]: 2025-12-12 17:39:10.626 [INFO][4125] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkf4h" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jkf4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jkf4h-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ed7aeb60-0078-4fe4-a5c3-76fa0aebc9f4", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-jkf4h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali184cdab06f0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:39:10.649477 containerd[1496]: 2025-12-12 17:39:10.626 [INFO][4125] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkf4h" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jkf4h-eth0" Dec 12 17:39:10.649477 containerd[1496]: 2025-12-12 17:39:10.626 [INFO][4125] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali184cdab06f0 ContainerID="1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkf4h" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jkf4h-eth0" Dec 12 17:39:10.649477 containerd[1496]: 2025-12-12 17:39:10.632 [INFO][4125] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkf4h" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jkf4h-eth0" Dec 12 17:39:10.649757 
containerd[1496]: 2025-12-12 17:39:10.633 [INFO][4125] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkf4h" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jkf4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jkf4h-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ed7aeb60-0078-4fe4-a5c3-76fa0aebc9f4", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae", Pod:"coredns-674b8bbfcf-jkf4h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali184cdab06f0", MAC:"06:b4:db:21:d6:bd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:39:10.649757 containerd[1496]: 2025-12-12 17:39:10.645 [INFO][4125] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkf4h" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jkf4h-eth0" Dec 12 17:39:10.699191 containerd[1496]: time="2025-12-12T17:39:10.699127652Z" level=info msg="connecting to shim 1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae" address="unix:///run/containerd/s/995436a8bf517e711330760e56bde1064b9a7b4ea5aa710fdd968f3e0834ab87" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:39:10.730299 systemd[1]: Started cri-containerd-1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae.scope - libcontainer container 1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae. 
Dec 12 17:39:10.741737 systemd-networkd[1426]: cali5c56446f051: Link UP Dec 12 17:39:10.742152 systemd-networkd[1426]: cali5c56446f051: Gained carrier Dec 12 17:39:10.747848 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:39:10.757514 containerd[1496]: 2025-12-12 17:39:10.554 [INFO][4136] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--94p94-eth0 csi-node-driver- calico-system b5dca236-19dc-432c-971f-14a30f71196b 706 0 2025-12-12 17:38:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-94p94 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5c56446f051 [] [] }} ContainerID="cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7" Namespace="calico-system" Pod="csi-node-driver-94p94" WorkloadEndpoint="localhost-k8s-csi--node--driver--94p94-" Dec 12 17:39:10.757514 containerd[1496]: 2025-12-12 17:39:10.554 [INFO][4136] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7" Namespace="calico-system" Pod="csi-node-driver-94p94" WorkloadEndpoint="localhost-k8s-csi--node--driver--94p94-eth0" Dec 12 17:39:10.757514 containerd[1496]: 2025-12-12 17:39:10.591 [INFO][4161] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7" HandleID="k8s-pod-network.cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7" Workload="localhost-k8s-csi--node--driver--94p94-eth0" Dec 12 17:39:10.757514 containerd[1496]: 2025-12-12 17:39:10.591 [INFO][4161] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7" HandleID="k8s-pod-network.cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7" Workload="localhost-k8s-csi--node--driver--94p94-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137b20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-94p94", "timestamp":"2025-12-12 17:39:10.59102852 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:39:10.757514 containerd[1496]: 2025-12-12 17:39:10.591 [INFO][4161] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:39:10.757514 containerd[1496]: 2025-12-12 17:39:10.623 [INFO][4161] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:39:10.757514 containerd[1496]: 2025-12-12 17:39:10.623 [INFO][4161] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:39:10.757514 containerd[1496]: 2025-12-12 17:39:10.697 [INFO][4161] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7" host="localhost" Dec 12 17:39:10.757514 containerd[1496]: 2025-12-12 17:39:10.709 [INFO][4161] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:39:10.757514 containerd[1496]: 2025-12-12 17:39:10.714 [INFO][4161] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:39:10.757514 containerd[1496]: 2025-12-12 17:39:10.717 [INFO][4161] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:39:10.757514 containerd[1496]: 2025-12-12 17:39:10.722 [INFO][4161] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:39:10.757514 containerd[1496]: 2025-12-12 17:39:10.722 [INFO][4161] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7" host="localhost" Dec 12 17:39:10.757514 containerd[1496]: 2025-12-12 17:39:10.724 [INFO][4161] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7 Dec 12 17:39:10.757514 containerd[1496]: 2025-12-12 17:39:10.728 [INFO][4161] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7" host="localhost" Dec 12 17:39:10.757514 containerd[1496]: 2025-12-12 17:39:10.736 [INFO][4161] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7" host="localhost" Dec 12 17:39:10.757514 containerd[1496]: 2025-12-12 17:39:10.736 [INFO][4161] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7" host="localhost" Dec 12 17:39:10.757514 containerd[1496]: 2025-12-12 17:39:10.737 [INFO][4161] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:39:10.757514 containerd[1496]: 2025-12-12 17:39:10.737 [INFO][4161] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7" HandleID="k8s-pod-network.cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7" Workload="localhost-k8s-csi--node--driver--94p94-eth0" Dec 12 17:39:10.758376 containerd[1496]: 2025-12-12 17:39:10.739 [INFO][4136] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7" Namespace="calico-system" Pod="csi-node-driver-94p94" WorkloadEndpoint="localhost-k8s-csi--node--driver--94p94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--94p94-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b5dca236-19dc-432c-971f-14a30f71196b", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-94p94", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c56446f051", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:39:10.758376 containerd[1496]: 2025-12-12 17:39:10.739 [INFO][4136] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7" Namespace="calico-system" Pod="csi-node-driver-94p94" WorkloadEndpoint="localhost-k8s-csi--node--driver--94p94-eth0" Dec 12 17:39:10.758376 containerd[1496]: 2025-12-12 17:39:10.739 [INFO][4136] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c56446f051 ContainerID="cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7" Namespace="calico-system" Pod="csi-node-driver-94p94" WorkloadEndpoint="localhost-k8s-csi--node--driver--94p94-eth0" Dec 12 17:39:10.758376 containerd[1496]: 2025-12-12 17:39:10.742 [INFO][4136] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7" Namespace="calico-system" Pod="csi-node-driver-94p94" WorkloadEndpoint="localhost-k8s-csi--node--driver--94p94-eth0" Dec 12 17:39:10.758376 containerd[1496]: 2025-12-12 17:39:10.742 [INFO][4136] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7" Namespace="calico-system" Pod="csi-node-driver-94p94" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--94p94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--94p94-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b5dca236-19dc-432c-971f-14a30f71196b", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7", Pod:"csi-node-driver-94p94", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5c56446f051", MAC:"3e:9b:99:56:1f:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:39:10.758376 containerd[1496]: 2025-12-12 17:39:10.754 [INFO][4136] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7" Namespace="calico-system" Pod="csi-node-driver-94p94" WorkloadEndpoint="localhost-k8s-csi--node--driver--94p94-eth0" Dec 12 17:39:10.783637 containerd[1496]: time="2025-12-12T17:39:10.783580072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkf4h,Uid:ed7aeb60-0078-4fe4-a5c3-76fa0aebc9f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae\"" Dec 12 17:39:10.790970 containerd[1496]: time="2025-12-12T17:39:10.790887748Z" level=info msg="connecting to shim cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7" address="unix:///run/containerd/s/e4bc19e4c321dc22e9faaec1538f59649264b386c1ed015bfee8b2951520ecc7" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:39:10.791328 containerd[1496]: time="2025-12-12T17:39:10.790911188Z" level=info msg="CreateContainer within sandbox \"1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:39:10.805000 containerd[1496]: time="2025-12-12T17:39:10.804948284Z" level=info msg="Container 8047b46777c0d26ce6b75ce9fb4a6fe06bae48a01ae85b81ee27962cdcc116b1: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:39:10.817478 systemd[1]: Started cri-containerd-cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7.scope - libcontainer container cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7. 
Dec 12 17:39:10.823611 containerd[1496]: time="2025-12-12T17:39:10.823468179Z" level=info msg="CreateContainer within sandbox \"1f8f8ab3931da8166b8f3dd960f74dabd2aab3e2b2156c57ef0b1b0ee306a0ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8047b46777c0d26ce6b75ce9fb4a6fe06bae48a01ae85b81ee27962cdcc116b1\"" Dec 12 17:39:10.824734 containerd[1496]: time="2025-12-12T17:39:10.824693732Z" level=info msg="StartContainer for \"8047b46777c0d26ce6b75ce9fb4a6fe06bae48a01ae85b81ee27962cdcc116b1\"" Dec 12 17:39:10.828160 containerd[1496]: time="2025-12-12T17:39:10.826679225Z" level=info msg="connecting to shim 8047b46777c0d26ce6b75ce9fb4a6fe06bae48a01ae85b81ee27962cdcc116b1" address="unix:///run/containerd/s/995436a8bf517e711330760e56bde1064b9a7b4ea5aa710fdd968f3e0834ab87" protocol=ttrpc version=3 Dec 12 17:39:10.834447 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:39:10.858275 systemd[1]: Started cri-containerd-8047b46777c0d26ce6b75ce9fb4a6fe06bae48a01ae85b81ee27962cdcc116b1.scope - libcontainer container 8047b46777c0d26ce6b75ce9fb4a6fe06bae48a01ae85b81ee27962cdcc116b1. Dec 12 17:39:10.861077 containerd[1496]: time="2025-12-12T17:39:10.861026304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-94p94,Uid:b5dca236-19dc-432c-971f-14a30f71196b,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc15aeeba7f0a96c618af235b2ed320e03b06fdf25582bd6980b8ce48c6d1ac7\"" Dec 12 17:39:10.864641 containerd[1496]: time="2025-12-12T17:39:10.864595320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:39:10.894479 containerd[1496]: time="2025-12-12T17:39:10.894440278Z" level=info msg="StartContainer for \"8047b46777c0d26ce6b75ce9fb4a6fe06bae48a01ae85b81ee27962cdcc116b1\" returns successfully" Dec 12 17:39:11.077796 containerd[1496]: time="2025-12-12T17:39:11.077733649Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:11.082563 containerd[1496]: time="2025-12-12T17:39:11.082452011Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:39:11.082563 containerd[1496]: time="2025-12-12T17:39:11.082525373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 17:39:11.082979 kubelet[2648]: E1212 17:39:11.082868 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:39:11.082979 kubelet[2648]: E1212 17:39:11.082920 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:39:11.083561 kubelet[2648]: E1212 17:39:11.083494 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwx5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-94p94_calico-system(b5dca236-19dc-432c-971f-14a30f71196b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:11.086397 containerd[1496]: time="2025-12-12T17:39:11.086111227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:39:11.300058 containerd[1496]: time="2025-12-12T17:39:11.300010839Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:11.301374 containerd[1496]: time="2025-12-12T17:39:11.301262671Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:39:11.301374 containerd[1496]: time="2025-12-12T17:39:11.301329393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 17:39:11.301870 kubelet[2648]: E1212 17:39:11.301631 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:39:11.301870 kubelet[2648]: E1212 17:39:11.301682 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:39:11.301870 kubelet[2648]: E1212 17:39:11.301821 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwx5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-94p94_calico-system(b5dca236-19dc-432c-971f-14a30f71196b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:11.303163 kubelet[2648]: E1212 17:39:11.303086 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-94p94" podUID="b5dca236-19dc-432c-971f-14a30f71196b" Dec 12 17:39:11.383331 containerd[1496]: time="2025-12-12T17:39:11.383204166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b4f66ff58-m8mvt,Uid:606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2,Namespace:calico-system,Attempt:0,}" Dec 12 17:39:11.383893 containerd[1496]: time="2025-12-12T17:39:11.383274568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776c8864b9-qgx9p,Uid:d4983885-2ae5-436d-9a3a-ccebc1e24705,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:39:11.516854 systemd-networkd[1426]: calie785f18fb1d: Link UP Dec 12 17:39:11.517331 systemd-networkd[1426]: calie785f18fb1d: Gained carrier Dec 12 17:39:11.536809 containerd[1496]: 2025-12-12 17:39:11.433 [INFO][4332] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--776c8864b9--qgx9p-eth0 calico-apiserver-776c8864b9- calico-apiserver d4983885-2ae5-436d-9a3a-ccebc1e24705 810 0 2025-12-12 17:38:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:776c8864b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-776c8864b9-qgx9p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie785f18fb1d [] [] }} ContainerID="8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774" Namespace="calico-apiserver" Pod="calico-apiserver-776c8864b9-qgx9p" WorkloadEndpoint="localhost-k8s-calico--apiserver--776c8864b9--qgx9p-" Dec 12 17:39:11.536809 containerd[1496]: 2025-12-12 17:39:11.433 [INFO][4332] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774" Namespace="calico-apiserver" Pod="calico-apiserver-776c8864b9-qgx9p" WorkloadEndpoint="localhost-k8s-calico--apiserver--776c8864b9--qgx9p-eth0" Dec 12 17:39:11.536809 containerd[1496]: 2025-12-12 17:39:11.463 [INFO][4358] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774" HandleID="k8s-pod-network.8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774" Workload="localhost-k8s-calico--apiserver--776c8864b9--qgx9p-eth0" Dec 12 17:39:11.536809 containerd[1496]: 2025-12-12 17:39:11.463 [INFO][4358] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774" HandleID="k8s-pod-network.8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774" Workload="localhost-k8s-calico--apiserver--776c8864b9--qgx9p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005a5dd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-776c8864b9-qgx9p", "timestamp":"2025-12-12 17:39:11.46361646 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:39:11.536809 containerd[1496]: 2025-12-12 17:39:11.463 [INFO][4358] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:39:11.536809 containerd[1496]: 2025-12-12 17:39:11.463 [INFO][4358] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:39:11.536809 containerd[1496]: 2025-12-12 17:39:11.463 [INFO][4358] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:39:11.536809 containerd[1496]: 2025-12-12 17:39:11.477 [INFO][4358] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774" host="localhost" Dec 12 17:39:11.536809 containerd[1496]: 2025-12-12 17:39:11.485 [INFO][4358] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:39:11.536809 containerd[1496]: 2025-12-12 17:39:11.491 [INFO][4358] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:39:11.536809 containerd[1496]: 2025-12-12 17:39:11.494 [INFO][4358] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:39:11.536809 containerd[1496]: 2025-12-12 17:39:11.496 [INFO][4358] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:39:11.536809 containerd[1496]: 2025-12-12 17:39:11.496 [INFO][4358] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774" host="localhost" Dec 12 17:39:11.536809 containerd[1496]: 2025-12-12 17:39:11.498 [INFO][4358] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774 Dec 12 17:39:11.536809 containerd[1496]: 2025-12-12 17:39:11.502 [INFO][4358] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774" host="localhost" Dec 12 17:39:11.536809 containerd[1496]: 2025-12-12 17:39:11.509 [INFO][4358] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774" host="localhost" Dec 12 17:39:11.536809 containerd[1496]: 2025-12-12 17:39:11.509 [INFO][4358] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774" host="localhost" Dec 12 17:39:11.536809 containerd[1496]: 2025-12-12 17:39:11.509 [INFO][4358] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:39:11.536809 containerd[1496]: 2025-12-12 17:39:11.509 [INFO][4358] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774" HandleID="k8s-pod-network.8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774" Workload="localhost-k8s-calico--apiserver--776c8864b9--qgx9p-eth0" Dec 12 17:39:11.537711 containerd[1496]: 2025-12-12 17:39:11.514 [INFO][4332] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774" Namespace="calico-apiserver" Pod="calico-apiserver-776c8864b9-qgx9p" WorkloadEndpoint="localhost-k8s-calico--apiserver--776c8864b9--qgx9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--776c8864b9--qgx9p-eth0", GenerateName:"calico-apiserver-776c8864b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4983885-2ae5-436d-9a3a-ccebc1e24705", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"776c8864b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-776c8864b9-qgx9p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie785f18fb1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:39:11.537711 containerd[1496]: 2025-12-12 17:39:11.514 [INFO][4332] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774" Namespace="calico-apiserver" Pod="calico-apiserver-776c8864b9-qgx9p" WorkloadEndpoint="localhost-k8s-calico--apiserver--776c8864b9--qgx9p-eth0" Dec 12 17:39:11.537711 containerd[1496]: 2025-12-12 17:39:11.514 [INFO][4332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie785f18fb1d ContainerID="8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774" Namespace="calico-apiserver" Pod="calico-apiserver-776c8864b9-qgx9p" WorkloadEndpoint="localhost-k8s-calico--apiserver--776c8864b9--qgx9p-eth0" Dec 12 17:39:11.537711 containerd[1496]: 2025-12-12 17:39:11.517 [INFO][4332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774" Namespace="calico-apiserver" Pod="calico-apiserver-776c8864b9-qgx9p" WorkloadEndpoint="localhost-k8s-calico--apiserver--776c8864b9--qgx9p-eth0" Dec 12 17:39:11.537711 containerd[1496]: 2025-12-12 17:39:11.520 [INFO][4332] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774" Namespace="calico-apiserver" Pod="calico-apiserver-776c8864b9-qgx9p" WorkloadEndpoint="localhost-k8s-calico--apiserver--776c8864b9--qgx9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--776c8864b9--qgx9p-eth0", GenerateName:"calico-apiserver-776c8864b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4983885-2ae5-436d-9a3a-ccebc1e24705", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"776c8864b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774", Pod:"calico-apiserver-776c8864b9-qgx9p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie785f18fb1d", MAC:"3e:e4:c3:57:43:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:39:11.537711 containerd[1496]: 2025-12-12 17:39:11.534 [INFO][4332] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774" Namespace="calico-apiserver" Pod="calico-apiserver-776c8864b9-qgx9p" WorkloadEndpoint="localhost-k8s-calico--apiserver--776c8864b9--qgx9p-eth0" Dec 12 17:39:11.544764 kubelet[2648]: E1212 17:39:11.544600 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-94p94" podUID="b5dca236-19dc-432c-971f-14a30f71196b" Dec 12 17:39:11.566915 kubelet[2648]: I1212 17:39:11.566742 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jkf4h" podStartSLOduration=36.566709226 podStartE2EDuration="36.566709226s" podCreationTimestamp="2025-12-12 17:38:35 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:39:11.566257654 +0000 UTC m=+42.271714853" watchObservedRunningTime="2025-12-12 17:39:11.566709226 +0000 UTC m=+42.272166425" Dec 12 17:39:11.602431 containerd[1496]: time="2025-12-12T17:39:11.602345674Z" level=info msg="connecting to shim 8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774" address="unix:///run/containerd/s/d7ef16031062fd183cba7630ecbd20ff845fdeb3a26bc5896d68929fd3416237" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:39:11.632511 systemd-networkd[1426]: cali569d510e9f2: Link UP Dec 12 17:39:11.633181 systemd-networkd[1426]: cali569d510e9f2: Gained carrier Dec 12 17:39:11.635388 systemd[1]: Started cri-containerd-8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774.scope - libcontainer container 8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774. Dec 12 17:39:11.652317 containerd[1496]: 2025-12-12 17:39:11.434 [INFO][4326] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6b4f66ff58--m8mvt-eth0 calico-kube-controllers-6b4f66ff58- calico-system 606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2 807 0 2025-12-12 17:38:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b4f66ff58 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6b4f66ff58-m8mvt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali569d510e9f2 [] [] }} ContainerID="63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39" Namespace="calico-system" Pod="calico-kube-controllers-6b4f66ff58-m8mvt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b4f66ff58--m8mvt-" Dec 12 17:39:11.652317 containerd[1496]: 2025-12-12 17:39:11.435 [INFO][4326] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39" Namespace="calico-system" Pod="calico-kube-controllers-6b4f66ff58-m8mvt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b4f66ff58--m8mvt-eth0" Dec 12 17:39:11.652317 containerd[1496]: 2025-12-12 17:39:11.466 [INFO][4356] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39" HandleID="k8s-pod-network.63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39" Workload="localhost-k8s-calico--kube--controllers--6b4f66ff58--m8mvt-eth0" Dec 12 17:39:11.652317 containerd[1496]: 2025-12-12 17:39:11.467 [INFO][4356] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39" HandleID="k8s-pod-network.63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39" Workload="localhost-k8s-calico--kube--controllers--6b4f66ff58--m8mvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000355e90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6b4f66ff58-m8mvt", "timestamp":"2025-12-12 17:39:11.466865225 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:39:11.652317 containerd[1496]: 2025-12-12 17:39:11.467 [INFO][4356] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:39:11.652317 containerd[1496]: 2025-12-12 17:39:11.509 [INFO][4356] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:39:11.652317 containerd[1496]: 2025-12-12 17:39:11.510 [INFO][4356] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:39:11.652317 containerd[1496]: 2025-12-12 17:39:11.580 [INFO][4356] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39" host="localhost" Dec 12 17:39:11.652317 containerd[1496]: 2025-12-12 17:39:11.587 [INFO][4356] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:39:11.652317 containerd[1496]: 2025-12-12 17:39:11.595 [INFO][4356] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:39:11.652317 containerd[1496]: 2025-12-12 17:39:11.599 [INFO][4356] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:39:11.652317 containerd[1496]: 2025-12-12 17:39:11.603 [INFO][4356] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:39:11.652317 containerd[1496]: 2025-12-12 17:39:11.603 [INFO][4356] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39" host="localhost" Dec 12 17:39:11.652317 containerd[1496]: 2025-12-12 17:39:11.605 [INFO][4356] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39 Dec 12 17:39:11.652317 containerd[1496]: 2025-12-12 17:39:11.616 [INFO][4356] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39" host="localhost" Dec 12 17:39:11.652317 containerd[1496]: 2025-12-12 17:39:11.626 [INFO][4356] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39" host="localhost" Dec 12 17:39:11.652317 containerd[1496]: 2025-12-12 17:39:11.627 [INFO][4356] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39" host="localhost" Dec 12 17:39:11.652317 containerd[1496]: 2025-12-12 17:39:11.627 [INFO][4356] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
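Note the interleaving between the two concurrent CNI invocations: request [4356] logs "About to acquire host-wide IPAM lock" at 17:39:11.467 but only acquires it at 17:39:11.509, the same instant request [4358] releases it. The lock serializes concurrent CNI ADDs on the node so two pods can never claim the same address from the shared block. A minimal sketch of that serialization (plain Go, not Calico code):

```go
// Minimal sketch of the serialization visible above: two concurrent CNI
// invocations both want an address, and the host-wide lock makes the second
// wait until the first has claimed its IP and released the lock.
package main

import (
	"fmt"
	"sync"
)

var (
	hostWideLock sync.Mutex
	nextHostPart = 132 // next free host part in 192.168.88.128/26, per the log
)

func assign(pod string, wg *sync.WaitGroup) {
	defer wg.Done()
	hostWideLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostWideLock.Unlock() // "Released host-wide IPAM lock."
	fmt.Printf("%s -> 192.168.88.%d/26\n", pod, nextHostPart)
	nextHostPart++
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go assign("calico-apiserver-776c8864b9-qgx9p", &wg)
	go assign("calico-kube-controllers-6b4f66ff58-m8mvt", &wg)
	wg.Wait()
}
```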
Dec 12 17:39:11.652317 containerd[1496]: 2025-12-12 17:39:11.627 [INFO][4356] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39" HandleID="k8s-pod-network.63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39" Workload="localhost-k8s-calico--kube--controllers--6b4f66ff58--m8mvt-eth0" Dec 12 17:39:11.654241 containerd[1496]: 2025-12-12 17:39:11.630 [INFO][4326] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39" Namespace="calico-system" Pod="calico-kube-controllers-6b4f66ff58-m8mvt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b4f66ff58--m8mvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b4f66ff58--m8mvt-eth0", GenerateName:"calico-kube-controllers-6b4f66ff58-", Namespace:"calico-system", SelfLink:"", UID:"606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b4f66ff58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6b4f66ff58-m8mvt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali569d510e9f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:39:11.654241 containerd[1496]: 2025-12-12 17:39:11.630 [INFO][4326] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39" Namespace="calico-system" Pod="calico-kube-controllers-6b4f66ff58-m8mvt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b4f66ff58--m8mvt-eth0" Dec 12 17:39:11.654241 containerd[1496]: 2025-12-12 17:39:11.630 [INFO][4326] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali569d510e9f2 ContainerID="63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39" Namespace="calico-system" Pod="calico-kube-controllers-6b4f66ff58-m8mvt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b4f66ff58--m8mvt-eth0" Dec 12 17:39:11.654241 containerd[1496]: 2025-12-12 17:39:11.632 [INFO][4326] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39" Namespace="calico-system" Pod="calico-kube-controllers-6b4f66ff58-m8mvt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b4f66ff58--m8mvt-eth0" Dec 12 17:39:11.654241 containerd[1496]: 2025-12-12 17:39:11.633 [INFO][4326] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39" Namespace="calico-system" Pod="calico-kube-controllers-6b4f66ff58-m8mvt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b4f66ff58--m8mvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6b4f66ff58--m8mvt-eth0", GenerateName:"calico-kube-controllers-6b4f66ff58-", Namespace:"calico-system", SelfLink:"", UID:"606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b4f66ff58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39", Pod:"calico-kube-controllers-6b4f66ff58-m8mvt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali569d510e9f2", MAC:"aa:87:b5:1d:13:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:39:11.654241 containerd[1496]: 2025-12-12 17:39:11.647 [INFO][4326] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39" Namespace="calico-system" Pod="calico-kube-controllers-6b4f66ff58-m8mvt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6b4f66ff58--m8mvt-eth0" Dec 12 17:39:11.662286 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:39:11.678903 containerd[1496]: time="2025-12-12T17:39:11.678853507Z" level=info msg="connecting to shim 63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39" address="unix:///run/containerd/s/91e840bfff8ebd4b1e13d1652c5400beb7cdcfd898e1fe6cd371e104b2ca4551" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:39:11.692856 containerd[1496]: time="2025-12-12T17:39:11.692807511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776c8864b9-qgx9p,Uid:d4983885-2ae5-436d-9a3a-ccebc1e24705,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8cdb112a666b22856d8ffc1e20e0e4a426733f5073eea2ebd6af1f2fdc5e4774\"" Dec 12 17:39:11.694680 containerd[1496]: time="2025-12-12T17:39:11.694647159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:39:11.704250 systemd[1]: Started cri-containerd-63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39.scope - libcontainer container 63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39. 
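The recurring systemd-resolved complaint ("Failed to determine the local hostname and LLMNR/mDNS names") fires each time a new cali* veth appears; with the node literally named "localhost" it presumably cannot derive usable LLMNR/mDNS names from the hostname. The message itself says "ignoring", and the sandboxes keep receiving addresses and gaining carrier, so it reads as benign noise here rather than a networking failure.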
Dec 12 17:39:11.717306 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:39:11.739303 systemd-networkd[1426]: cali184cdab06f0: Gained IPv6LL Dec 12 17:39:11.739675 containerd[1496]: time="2025-12-12T17:39:11.739643411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b4f66ff58-m8mvt,Uid:606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2,Namespace:calico-system,Attempt:0,} returns sandbox id \"63379c583f6985fcfec32eebb2b4a571b4efc003059dace38b34a2fa35a9bd39\"" Dec 12 17:39:11.888565 containerd[1496]: time="2025-12-12T17:39:11.888424446Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:11.890118 containerd[1496]: time="2025-12-12T17:39:11.890024728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:39:11.890215 containerd[1496]: time="2025-12-12T17:39:11.890115090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:39:11.890375 kubelet[2648]: E1212 17:39:11.890337 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:39:11.890417 kubelet[2648]: E1212 17:39:11.890388 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:39:11.890952 containerd[1496]: time="2025-12-12T17:39:11.890695706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:39:11.895264 kubelet[2648]: E1212 17:39:11.890651 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nc7t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-776c8864b9-qgx9p_calico-apiserver(d4983885-2ae5-436d-9a3a-ccebc1e24705): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:11.896414 kubelet[2648]: E1212 17:39:11.896377 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776c8864b9-qgx9p" podUID="d4983885-2ae5-436d-9a3a-ccebc1e24705" Dec 12 17:39:12.102144 containerd[1496]: time="2025-12-12T17:39:12.102093585Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:12.105949 containerd[1496]: time="2025-12-12T17:39:12.105898522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:39:12.106022 containerd[1496]: time="2025-12-12T17:39:12.105957923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" 
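Every one of these pulls fails the same way: ghcr.io answers 404 for the v3.30.4 tag ("fetch failed after status: 404 Not Found"), containerd maps that to gRPC NotFound, and the kubelet records ErrImagePull. The failure is reproducible outside the kubelet; a hedged sketch, assuming the containerd v1 Go client and the default socket path:

```go
// Repro sketch, assuming the containerd v1 Go client and the default socket.
// Pulling the same tag the kubelet reports should fail at reference
// resolution with the NotFound error seen in the log.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The kubelet's CRI images live in the k8s.io namespace, as in the log.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	if _, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.4"); errdefs.IsNotFound(err) {
		// Matches the log: failed to resolve reference ... not found
		fmt.Println("tag does not exist upstream:", err)
	} else if err != nil {
		log.Fatal(err)
	}
}
```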
Dec 12 17:39:12.106416 kubelet[2648]: E1212 17:39:12.106162 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:39:12.106416 kubelet[2648]: E1212 17:39:12.106213 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:39:12.106416 kubelet[2648]: E1212 17:39:12.106361 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k6xgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-kube-controllers-6b4f66ff58-m8mvt_calico-system(606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:12.108735 kubelet[2648]: E1212 17:39:12.107525 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4f66ff58-m8mvt" podUID="606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2" Dec 12 17:39:12.506362 systemd-networkd[1426]: cali5c56446f051: Gained IPv6LL Dec 12 17:39:12.547625 kubelet[2648]: E1212 17:39:12.547508 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4f66ff58-m8mvt" podUID="606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2" Dec 12 17:39:12.549005 kubelet[2648]: E1212 17:39:12.548740 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776c8864b9-qgx9p" podUID="d4983885-2ae5-436d-9a3a-ccebc1e24705" Dec 12 17:39:12.550801 kubelet[2648]: E1212 17:39:12.550221 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-94p94" podUID="b5dca236-19dc-432c-971f-14a30f71196b" Dec 12 17:39:13.082255 systemd-networkd[1426]: cali569d510e9f2: Gained IPv6LL Dec 12 17:39:13.384086 containerd[1496]: 
time="2025-12-12T17:39:13.383914526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pddzw,Uid:b84a83ed-bde3-4881-9529-0aa87d97db76,Namespace:kube-system,Attempt:0,}" Dec 12 17:39:13.384817 containerd[1496]: time="2025-12-12T17:39:13.384424219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p5kw2,Uid:68d76035-3b6d-409d-b705-88ad3c12dd12,Namespace:calico-system,Attempt:0,}" Dec 12 17:39:13.530194 systemd-networkd[1426]: calie785f18fb1d: Gained IPv6LL Dec 12 17:39:13.550537 kubelet[2648]: E1212 17:39:13.550486 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4f66ff58-m8mvt" podUID="606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2" Dec 12 17:39:13.550969 kubelet[2648]: E1212 17:39:13.550469 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776c8864b9-qgx9p" podUID="d4983885-2ae5-436d-9a3a-ccebc1e24705" Dec 12 17:39:13.829327 systemd-networkd[1426]: cali2bdfcc49779: Link UP Dec 12 17:39:13.829930 systemd-networkd[1426]: cali2bdfcc49779: Gained carrier Dec 12 17:39:13.847556 containerd[1496]: 2025-12-12 17:39:13.738 [INFO][4495] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--p5kw2-eth0 goldmane-666569f655- calico-system 68d76035-3b6d-409d-b705-88ad3c12dd12 811 0 2025-12-12 17:38:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-p5kw2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2bdfcc49779 [] [] }} ContainerID="fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7" Namespace="calico-system" Pod="goldmane-666569f655-p5kw2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p5kw2-" Dec 12 17:39:13.847556 containerd[1496]: 2025-12-12 17:39:13.738 [INFO][4495] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7" Namespace="calico-system" Pod="goldmane-666569f655-p5kw2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p5kw2-eth0" Dec 12 17:39:13.847556 containerd[1496]: 2025-12-12 17:39:13.778 [INFO][4512] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7" HandleID="k8s-pod-network.fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7" 
Workload="localhost-k8s-goldmane--666569f655--p5kw2-eth0" Dec 12 17:39:13.847556 containerd[1496]: 2025-12-12 17:39:13.778 [INFO][4512] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7" HandleID="k8s-pod-network.fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7" Workload="localhost-k8s-goldmane--666569f655--p5kw2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cea0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-p5kw2", "timestamp":"2025-12-12 17:39:13.778057286 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:39:13.847556 containerd[1496]: 2025-12-12 17:39:13.778 [INFO][4512] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:39:13.847556 containerd[1496]: 2025-12-12 17:39:13.778 [INFO][4512] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:39:13.847556 containerd[1496]: 2025-12-12 17:39:13.778 [INFO][4512] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:39:13.847556 containerd[1496]: 2025-12-12 17:39:13.788 [INFO][4512] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7" host="localhost" Dec 12 17:39:13.847556 containerd[1496]: 2025-12-12 17:39:13.795 [INFO][4512] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:39:13.847556 containerd[1496]: 2025-12-12 17:39:13.802 [INFO][4512] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:39:13.847556 containerd[1496]: 2025-12-12 17:39:13.804 [INFO][4512] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:39:13.847556 containerd[1496]: 2025-12-12 17:39:13.807 [INFO][4512] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:39:13.847556 containerd[1496]: 2025-12-12 17:39:13.807 [INFO][4512] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7" host="localhost" Dec 12 17:39:13.847556 containerd[1496]: 2025-12-12 17:39:13.808 [INFO][4512] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7 Dec 12 17:39:13.847556 containerd[1496]: 2025-12-12 17:39:13.813 [INFO][4512] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7" host="localhost" Dec 12 17:39:13.847556 containerd[1496]: 2025-12-12 17:39:13.823 [INFO][4512] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7" host="localhost" Dec 12 17:39:13.847556 containerd[1496]: 2025-12-12 17:39:13.823 [INFO][4512] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7" host="localhost" Dec 12 17:39:13.847556 containerd[1496]: 2025-12-12 17:39:13.823 [INFO][4512] ipam/ipam_plugin.go 
398: Released host-wide IPAM lock. Dec 12 17:39:13.847556 containerd[1496]: 2025-12-12 17:39:13.823 [INFO][4512] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7" HandleID="k8s-pod-network.fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7" Workload="localhost-k8s-goldmane--666569f655--p5kw2-eth0" Dec 12 17:39:13.848694 containerd[1496]: 2025-12-12 17:39:13.825 [INFO][4495] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7" Namespace="calico-system" Pod="goldmane-666569f655-p5kw2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p5kw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--p5kw2-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"68d76035-3b6d-409d-b705-88ad3c12dd12", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-p5kw2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2bdfcc49779", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:39:13.848694 containerd[1496]: 2025-12-12 17:39:13.825 [INFO][4495] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7" Namespace="calico-system" Pod="goldmane-666569f655-p5kw2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p5kw2-eth0" Dec 12 17:39:13.848694 containerd[1496]: 2025-12-12 17:39:13.825 [INFO][4495] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2bdfcc49779 ContainerID="fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7" Namespace="calico-system" Pod="goldmane-666569f655-p5kw2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p5kw2-eth0" Dec 12 17:39:13.848694 containerd[1496]: 2025-12-12 17:39:13.830 [INFO][4495] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7" Namespace="calico-system" Pod="goldmane-666569f655-p5kw2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p5kw2-eth0" Dec 12 17:39:13.848694 containerd[1496]: 2025-12-12 17:39:13.831 [INFO][4495] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7" Namespace="calico-system" Pod="goldmane-666569f655-p5kw2" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p5kw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--p5kw2-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"68d76035-3b6d-409d-b705-88ad3c12dd12", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7", Pod:"goldmane-666569f655-p5kw2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2bdfcc49779", MAC:"ca:55:6d:0c:39:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:39:13.848694 containerd[1496]: 2025-12-12 17:39:13.842 [INFO][4495] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7" Namespace="calico-system" Pod="goldmane-666569f655-p5kw2" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--p5kw2-eth0" Dec 12 17:39:13.874680 containerd[1496]: time="2025-12-12T17:39:13.874620157Z" level=info msg="connecting to shim fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7" address="unix:///run/containerd/s/ce6ef6222f50f2d11f8208e53a2863eb2cbba406ce3c90b98be0121dce662f0d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:39:13.905301 systemd[1]: Started cri-containerd-fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7.scope - libcontainer container fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7. 
Dec 12 17:39:13.922150 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:39:13.935563 systemd-networkd[1426]: cali29a10c8b804: Link UP Dec 12 17:39:13.936219 systemd-networkd[1426]: cali29a10c8b804: Gained carrier Dec 12 17:39:13.958340 containerd[1496]: 2025-12-12 17:39:13.740 [INFO][4482] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--pddzw-eth0 coredns-674b8bbfcf- kube-system b84a83ed-bde3-4881-9529-0aa87d97db76 806 0 2025-12-12 17:38:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-pddzw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali29a10c8b804 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348" Namespace="kube-system" Pod="coredns-674b8bbfcf-pddzw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pddzw-" Dec 12 17:39:13.958340 containerd[1496]: 2025-12-12 17:39:13.741 [INFO][4482] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348" Namespace="kube-system" Pod="coredns-674b8bbfcf-pddzw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pddzw-eth0" Dec 12 17:39:13.958340 containerd[1496]: 2025-12-12 17:39:13.781 [INFO][4518] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348" HandleID="k8s-pod-network.06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348" Workload="localhost-k8s-coredns--674b8bbfcf--pddzw-eth0" Dec 12 17:39:13.958340 containerd[1496]: 2025-12-12 17:39:13.781 [INFO][4518] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348" HandleID="k8s-pod-network.06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348" Workload="localhost-k8s-coredns--674b8bbfcf--pddzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323390), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-pddzw", "timestamp":"2025-12-12 17:39:13.781807939 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:39:13.958340 containerd[1496]: 2025-12-12 17:39:13.782 [INFO][4518] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:39:13.958340 containerd[1496]: 2025-12-12 17:39:13.823 [INFO][4518] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:39:13.958340 containerd[1496]: 2025-12-12 17:39:13.823 [INFO][4518] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:39:13.958340 containerd[1496]: 2025-12-12 17:39:13.888 [INFO][4518] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348" host="localhost" Dec 12 17:39:13.958340 containerd[1496]: 2025-12-12 17:39:13.897 [INFO][4518] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:39:13.958340 containerd[1496]: 2025-12-12 17:39:13.903 [INFO][4518] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:39:13.958340 containerd[1496]: 2025-12-12 17:39:13.906 [INFO][4518] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:39:13.958340 containerd[1496]: 2025-12-12 17:39:13.909 [INFO][4518] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:39:13.958340 containerd[1496]: 2025-12-12 17:39:13.909 [INFO][4518] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348" host="localhost" Dec 12 17:39:13.958340 containerd[1496]: 2025-12-12 17:39:13.911 [INFO][4518] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348 Dec 12 17:39:13.958340 containerd[1496]: 2025-12-12 17:39:13.920 [INFO][4518] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348" host="localhost" Dec 12 17:39:13.958340 containerd[1496]: 2025-12-12 17:39:13.929 [INFO][4518] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348" host="localhost" Dec 12 17:39:13.958340 containerd[1496]: 2025-12-12 17:39:13.929 [INFO][4518] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348" host="localhost" Dec 12 17:39:13.958340 containerd[1496]: 2025-12-12 17:39:13.929 [INFO][4518] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:39:13.958340 containerd[1496]: 2025-12-12 17:39:13.929 [INFO][4518] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348" HandleID="k8s-pod-network.06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348" Workload="localhost-k8s-coredns--674b8bbfcf--pddzw-eth0" Dec 12 17:39:13.959164 containerd[1496]: 2025-12-12 17:39:13.933 [INFO][4482] cni-plugin/k8s.go 418: Populated endpoint ContainerID="06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348" Namespace="kube-system" Pod="coredns-674b8bbfcf-pddzw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pddzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pddzw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b84a83ed-bde3-4881-9529-0aa87d97db76", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-pddzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29a10c8b804", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:39:13.959164 containerd[1496]: 2025-12-12 17:39:13.933 [INFO][4482] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348" Namespace="kube-system" Pod="coredns-674b8bbfcf-pddzw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pddzw-eth0" Dec 12 17:39:13.959164 containerd[1496]: 2025-12-12 17:39:13.933 [INFO][4482] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29a10c8b804 ContainerID="06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348" Namespace="kube-system" Pod="coredns-674b8bbfcf-pddzw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pddzw-eth0" Dec 12 17:39:13.959164 containerd[1496]: 2025-12-12 17:39:13.936 [INFO][4482] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348" Namespace="kube-system" Pod="coredns-674b8bbfcf-pddzw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pddzw-eth0" Dec 12 17:39:13.959164 
containerd[1496]: 2025-12-12 17:39:13.936 [INFO][4482] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348" Namespace="kube-system" Pod="coredns-674b8bbfcf-pddzw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pddzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pddzw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b84a83ed-bde3-4881-9529-0aa87d97db76", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348", Pod:"coredns-674b8bbfcf-pddzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29a10c8b804", MAC:"b6:7e:05:5e:86:ef", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:39:13.959164 containerd[1496]: 2025-12-12 17:39:13.953 [INFO][4482] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348" Namespace="kube-system" Pod="coredns-674b8bbfcf-pddzw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pddzw-eth0" Dec 12 17:39:13.974154 containerd[1496]: time="2025-12-12T17:39:13.974086421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p5kw2,Uid:68d76035-3b6d-409d-b705-88ad3c12dd12,Namespace:calico-system,Attempt:0,} returns sandbox id \"fea9f56fd1cb65d7d880bec15e3af282e4043e4de5cbb349548c908a409d9bf7\"" Dec 12 17:39:13.981596 containerd[1496]: time="2025-12-12T17:39:13.980098529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:39:13.994372 containerd[1496]: time="2025-12-12T17:39:13.994312041Z" level=info msg="connecting to shim 06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348" address="unix:///run/containerd/s/6aa392b64aaed7d589c2f0defe713705a31b47330f898f06be86930f9495f5bb" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:39:14.024304 systemd[1]: Started cri-containerd-06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348.scope - libcontainer container 06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348. 
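A small decoding note for these endpoint dumps: the ports print in Go's default hex form, so Port:0x35 is 53 (coredns's dns and dns-tcp ports) and Port:0x23c1 is 9153 (its metrics port).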
Dec 12 17:39:14.037815 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:39:14.060622 containerd[1496]: time="2025-12-12T17:39:14.060456125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pddzw,Uid:b84a83ed-bde3-4881-9529-0aa87d97db76,Namespace:kube-system,Attempt:0,} returns sandbox id \"06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348\"" Dec 12 17:39:14.068944 containerd[1496]: time="2025-12-12T17:39:14.068898089Z" level=info msg="CreateContainer within sandbox \"06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:39:14.080917 containerd[1496]: time="2025-12-12T17:39:14.080151401Z" level=info msg="Container 5017de71761c5c6408ed341c62a69736f47b8ae337f2a8857683ab22e5211d3d: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:39:14.086128 containerd[1496]: time="2025-12-12T17:39:14.086081704Z" level=info msg="CreateContainer within sandbox \"06d582966a60051c4f455e53e2dc90d0cdfd41014523e49dce2c7e238c701348\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5017de71761c5c6408ed341c62a69736f47b8ae337f2a8857683ab22e5211d3d\"" Dec 12 17:39:14.086734 containerd[1496]: time="2025-12-12T17:39:14.086708199Z" level=info msg="StartContainer for \"5017de71761c5c6408ed341c62a69736f47b8ae337f2a8857683ab22e5211d3d\"" Dec 12 17:39:14.087900 containerd[1496]: time="2025-12-12T17:39:14.087865987Z" level=info msg="connecting to shim 5017de71761c5c6408ed341c62a69736f47b8ae337f2a8857683ab22e5211d3d" address="unix:///run/containerd/s/6aa392b64aaed7d589c2f0defe713705a31b47330f898f06be86930f9495f5bb" protocol=ttrpc version=3 Dec 12 17:39:14.113292 systemd[1]: Started cri-containerd-5017de71761c5c6408ed341c62a69736f47b8ae337f2a8857683ab22e5211d3d.scope - libcontainer container 5017de71761c5c6408ed341c62a69736f47b8ae337f2a8857683ab22e5211d3d. 
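Contrast this with the Calico pods: coredns's image is already on disk, so the CRI flow runs to completion here. CreateContainer within the sandbox returns a container id, the runtime connects to the shim over ttrpc on the same per-sandbox socket, StartContainer is issued, and the "returns successfully" entry follows at 17:39:14.145.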
Dec 12 17:39:14.145406 containerd[1496]: time="2025-12-12T17:39:14.145365818Z" level=info msg="StartContainer for \"5017de71761c5c6408ed341c62a69736f47b8ae337f2a8857683ab22e5211d3d\" returns successfully" Dec 12 17:39:14.195206 containerd[1496]: time="2025-12-12T17:39:14.195146741Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:14.196166 containerd[1496]: time="2025-12-12T17:39:14.196122605Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:39:14.196343 containerd[1496]: time="2025-12-12T17:39:14.196294689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 17:39:14.196717 kubelet[2648]: E1212 17:39:14.196367 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:39:14.196830 kubelet[2648]: E1212 17:39:14.196724 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:39:14.196987 kubelet[2648]: E1212 17:39:14.196922 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcjg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p5kw2_calico-system(68d76035-3b6d-409d-b705-88ad3c12dd12): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:14.198440 kubelet[2648]: E1212 17:39:14.198353 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p5kw2" podUID="68d76035-3b6d-409d-b705-88ad3c12dd12" Dec 12 17:39:14.383901 containerd[1496]: time="2025-12-12T17:39:14.383738061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776c8864b9-2npcx,Uid:29fa9e34-8cce-4225-b8c3-b537c398886e,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:39:14.500577 systemd-networkd[1426]: cali8ac88936df2: Link UP Dec 12 17:39:14.501155 systemd-networkd[1426]: cali8ac88936df2: Gained carrier Dec 12 17:39:14.519207 containerd[1496]: 2025-12-12 17:39:14.422 [INFO][4674] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--776c8864b9--2npcx-eth0 calico-apiserver-776c8864b9- calico-apiserver 29fa9e34-8cce-4225-b8c3-b537c398886e 809 0 2025-12-12 17:38:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:776c8864b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-776c8864b9-2npcx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8ac88936df2 [] [] }} ContainerID="9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283" Namespace="calico-apiserver" Pod="calico-apiserver-776c8864b9-2npcx" WorkloadEndpoint="localhost-k8s-calico--apiserver--776c8864b9--2npcx-" Dec 12 17:39:14.519207 containerd[1496]: 2025-12-12 17:39:14.422 [INFO][4674] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283" Namespace="calico-apiserver" Pod="calico-apiserver-776c8864b9-2npcx" WorkloadEndpoint="localhost-k8s-calico--apiserver--776c8864b9--2npcx-eth0" Dec 12 17:39:14.519207 containerd[1496]: 2025-12-12 17:39:14.453 [INFO][4688] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283" HandleID="k8s-pod-network.9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283" Workload="localhost-k8s-calico--apiserver--776c8864b9--2npcx-eth0" Dec 12 17:39:14.519207 containerd[1496]: 2025-12-12 17:39:14.453 [INFO][4688] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283" HandleID="k8s-pod-network.9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283" Workload="localhost-k8s-calico--apiserver--776c8864b9--2npcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c30a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-776c8864b9-2npcx", "timestamp":"2025-12-12 17:39:14.453544109 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:39:14.519207 containerd[1496]: 2025-12-12 17:39:14.453 [INFO][4688] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:39:14.519207 containerd[1496]: 2025-12-12 17:39:14.453 [INFO][4688] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:39:14.519207 containerd[1496]: 2025-12-12 17:39:14.454 [INFO][4688] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:39:14.519207 containerd[1496]: 2025-12-12 17:39:14.464 [INFO][4688] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283" host="localhost" Dec 12 17:39:14.519207 containerd[1496]: 2025-12-12 17:39:14.469 [INFO][4688] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:39:14.519207 containerd[1496]: 2025-12-12 17:39:14.475 [INFO][4688] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:39:14.519207 containerd[1496]: 2025-12-12 17:39:14.477 [INFO][4688] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:39:14.519207 containerd[1496]: 2025-12-12 17:39:14.480 [INFO][4688] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:39:14.519207 containerd[1496]: 2025-12-12 17:39:14.480 [INFO][4688] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283" host="localhost" Dec 12 17:39:14.519207 containerd[1496]: 2025-12-12 17:39:14.483 [INFO][4688] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283 Dec 12 17:39:14.519207 containerd[1496]: 2025-12-12 17:39:14.487 [INFO][4688] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283" host="localhost" Dec 12 17:39:14.519207 containerd[1496]: 2025-12-12 
17:39:14.496 [INFO][4688] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283" host="localhost" Dec 12 17:39:14.519207 containerd[1496]: 2025-12-12 17:39:14.496 [INFO][4688] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283" host="localhost" Dec 12 17:39:14.519207 containerd[1496]: 2025-12-12 17:39:14.496 [INFO][4688] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:39:14.519207 containerd[1496]: 2025-12-12 17:39:14.496 [INFO][4688] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283" HandleID="k8s-pod-network.9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283" Workload="localhost-k8s-calico--apiserver--776c8864b9--2npcx-eth0" Dec 12 17:39:14.519962 containerd[1496]: 2025-12-12 17:39:14.498 [INFO][4674] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283" Namespace="calico-apiserver" Pod="calico-apiserver-776c8864b9-2npcx" WorkloadEndpoint="localhost-k8s-calico--apiserver--776c8864b9--2npcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--776c8864b9--2npcx-eth0", GenerateName:"calico-apiserver-776c8864b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"29fa9e34-8cce-4225-b8c3-b537c398886e", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"776c8864b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-776c8864b9-2npcx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ac88936df2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:39:14.519962 containerd[1496]: 2025-12-12 17:39:14.498 [INFO][4674] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283" Namespace="calico-apiserver" Pod="calico-apiserver-776c8864b9-2npcx" WorkloadEndpoint="localhost-k8s-calico--apiserver--776c8864b9--2npcx-eth0" Dec 12 17:39:14.519962 containerd[1496]: 2025-12-12 17:39:14.498 [INFO][4674] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ac88936df2 ContainerID="9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283" Namespace="calico-apiserver" Pod="calico-apiserver-776c8864b9-2npcx" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--776c8864b9--2npcx-eth0" Dec 12 17:39:14.519962 containerd[1496]: 2025-12-12 17:39:14.501 [INFO][4674] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283" Namespace="calico-apiserver" Pod="calico-apiserver-776c8864b9-2npcx" WorkloadEndpoint="localhost-k8s-calico--apiserver--776c8864b9--2npcx-eth0" Dec 12 17:39:14.519962 containerd[1496]: 2025-12-12 17:39:14.502 [INFO][4674] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283" Namespace="calico-apiserver" Pod="calico-apiserver-776c8864b9-2npcx" WorkloadEndpoint="localhost-k8s-calico--apiserver--776c8864b9--2npcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--776c8864b9--2npcx-eth0", GenerateName:"calico-apiserver-776c8864b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"29fa9e34-8cce-4225-b8c3-b537c398886e", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"776c8864b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283", Pod:"calico-apiserver-776c8864b9-2npcx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ac88936df2", MAC:"86:65:24:12:02:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:39:14.519962 containerd[1496]: 2025-12-12 17:39:14.515 [INFO][4674] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283" Namespace="calico-apiserver" Pod="calico-apiserver-776c8864b9-2npcx" WorkloadEndpoint="localhost-k8s-calico--apiserver--776c8864b9--2npcx-eth0" Dec 12 17:39:14.540153 containerd[1496]: time="2025-12-12T17:39:14.540101122Z" level=info msg="connecting to shim 9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283" address="unix:///run/containerd/s/3f8df894ad90b3b132b8d314604af41f9c9803fc92cb2665c4a226173ae43efe" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:39:14.557709 kubelet[2648]: E1212 17:39:14.557318 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p5kw2" podUID="68d76035-3b6d-409d-b705-88ad3c12dd12" Dec 12 17:39:14.570142 kubelet[2648]: I1212 17:39:14.570053 2648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pddzw" podStartSLOduration=39.570033206 podStartE2EDuration="39.570033206s" podCreationTimestamp="2025-12-12 17:38:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:39:14.569506993 +0000 UTC m=+45.274964192" watchObservedRunningTime="2025-12-12 17:39:14.570033206 +0000 UTC m=+45.275490405" Dec 12 17:39:14.574270 systemd[1]: Started cri-containerd-9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283.scope - libcontainer container 9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283. Dec 12 17:39:14.607615 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:39:14.638508 containerd[1496]: time="2025-12-12T17:39:14.638381738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-776c8864b9-2npcx,Uid:29fa9e34-8cce-4225-b8c3-b537c398886e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9bd052a824ac2f73e90e8b4c38a7a0f7a730c268875098a838025c6cffce4283\"" Dec 12 17:39:14.641274 containerd[1496]: time="2025-12-12T17:39:14.641227447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:39:14.862370 containerd[1496]: time="2025-12-12T17:39:14.862306153Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:14.871767 containerd[1496]: time="2025-12-12T17:39:14.871712820Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:39:14.871897 containerd[1496]: time="2025-12-12T17:39:14.871775422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:39:14.872057 kubelet[2648]: E1212 17:39:14.871987 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:39:14.872057 kubelet[2648]: E1212 17:39:14.872050 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:39:14.872283 kubelet[2648]: E1212 17:39:14.872229 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6shff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-776c8864b9-2npcx_calico-apiserver(29fa9e34-8cce-4225-b8c3-b537c398886e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:14.873638 kubelet[2648]: E1212 17:39:14.873549 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776c8864b9-2npcx" podUID="29fa9e34-8cce-4225-b8c3-b537c398886e" Dec 12 17:39:15.003208 systemd-networkd[1426]: cali2bdfcc49779: Gained IPv6LL Dec 12 17:39:15.098007 systemd[1]: Started sshd@8-10.0.0.95:22-10.0.0.1:34300.service - OpenSSH per-connection server daemon (10.0.0.1:34300). Dec 12 17:39:15.174023 sshd[4756]: Accepted publickey for core from 10.0.0.1 port 34300 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:39:15.176992 sshd-session[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:39:15.181776 systemd-logind[1479]: New session 9 of user core. Dec 12 17:39:15.187232 systemd[1]: Started session-9.scope - Session 9 of User core. 
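Every failed pull in this log follows the same shape: containerd resolves the ghcr.io tag, gets 404 Not Found, and kubelet surfaces ErrImagePull, then ImagePullBackOff on retries. A sketch that replays one such pull over the CRI image service, with the tag copied verbatim from the log; while that tag is missing upstream, this should return the same NotFound error kubelet reports:

```go
package crisketch

import (
	"context"

	"google.golang.org/grpc"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// replayPull issues the same CRI PullImage kubelet attempted above,
// over a connection dialed as in the first sketch. While
// ghcr.io/flatcar/calico/apiserver:v3.30.4 does not exist upstream,
// it returns: rpc error: code = NotFound ... not found.
func replayPull(ctx context.Context, conn *grpc.ClientConn) (string, error) {
	ic := runtimeapi.NewImageServiceClient(conn)
	resp, err := ic.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{
			Image: "ghcr.io/flatcar/calico/apiserver:v3.30.4",
		},
	})
	if err != nil {
		return "", err
	}
	return resp.ImageRef, nil
}
```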
Dec 12 17:39:15.322214 systemd-networkd[1426]: cali29a10c8b804: Gained IPv6LL Dec 12 17:39:15.404087 sshd[4759]: Connection closed by 10.0.0.1 port 34300 Dec 12 17:39:15.404435 sshd-session[4756]: pam_unix(sshd:session): session closed for user core Dec 12 17:39:15.408165 systemd[1]: sshd@8-10.0.0.95:22-10.0.0.1:34300.service: Deactivated successfully. Dec 12 17:39:15.409903 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 17:39:15.410663 systemd-logind[1479]: Session 9 logged out. Waiting for processes to exit. Dec 12 17:39:15.412026 systemd-logind[1479]: Removed session 9. Dec 12 17:39:15.511104 kubelet[2648]: I1212 17:39:15.511048 2648 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:39:15.564787 kubelet[2648]: E1212 17:39:15.564739 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776c8864b9-2npcx" podUID="29fa9e34-8cce-4225-b8c3-b537c398886e" Dec 12 17:39:15.565360 kubelet[2648]: E1212 17:39:15.565116 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p5kw2" podUID="68d76035-3b6d-409d-b705-88ad3c12dd12" Dec 12 17:39:16.410298 systemd-networkd[1426]: cali8ac88936df2: Gained IPv6LL Dec 12 17:39:16.567134 kubelet[2648]: E1212 17:39:16.566289 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776c8864b9-2npcx" podUID="29fa9e34-8cce-4225-b8c3-b537c398886e" Dec 12 17:39:17.384944 containerd[1496]: time="2025-12-12T17:39:17.384817587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:39:17.678869 containerd[1496]: time="2025-12-12T17:39:17.678698400Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:17.686013 containerd[1496]: time="2025-12-12T17:39:17.685923523Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:39:17.686169 containerd[1496]: time="2025-12-12T17:39:17.686011885Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 17:39:17.686257 kubelet[2648]: E1212 17:39:17.686203 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:39:17.686551 kubelet[2648]: E1212 17:39:17.686272 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:39:17.686551 kubelet[2648]: E1212 17:39:17.686390 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d699d3c6b33c4ecc88ced3536dfbb1bf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l9xmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86d56bdcb5-fxbvd_calico-system(6cc7e8ab-0717-4a7a-9026-81374c9aefc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:17.689047 containerd[1496]: time="2025-12-12T17:39:17.689009593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:39:17.889294 containerd[1496]: time="2025-12-12T17:39:17.889174325Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:17.898074 containerd[1496]: time="2025-12-12T17:39:17.897981764Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" Dec 12 17:39:17.898248 containerd[1496]: time="2025-12-12T17:39:17.898080886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 17:39:17.898278 kubelet[2648]: E1212 17:39:17.898236 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:39:17.898318 kubelet[2648]: E1212 17:39:17.898287 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:39:17.898458 kubelet[2648]: E1212 17:39:17.898418 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9xmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86d56bdcb5-fxbvd_calico-system(6cc7e8ab-0717-4a7a-9026-81374c9aefc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:17.900066 kubelet[2648]: E1212 17:39:17.900014 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" 
with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86d56bdcb5-fxbvd" podUID="6cc7e8ab-0717-4a7a-9026-81374c9aefc3" Dec 12 17:39:20.420012 systemd[1]: Started sshd@9-10.0.0.95:22-10.0.0.1:34314.service - OpenSSH per-connection server daemon (10.0.0.1:34314). Dec 12 17:39:20.477247 sshd[4834]: Accepted publickey for core from 10.0.0.1 port 34314 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:39:20.478933 sshd-session[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:39:20.483790 systemd-logind[1479]: New session 10 of user core. Dec 12 17:39:20.492580 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 17:39:20.638915 sshd[4837]: Connection closed by 10.0.0.1 port 34314 Dec 12 17:39:20.639599 sshd-session[4834]: pam_unix(sshd:session): session closed for user core Dec 12 17:39:20.648491 systemd[1]: sshd@9-10.0.0.95:22-10.0.0.1:34314.service: Deactivated successfully. Dec 12 17:39:20.650458 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 17:39:20.651367 systemd-logind[1479]: Session 10 logged out. Waiting for processes to exit. Dec 12 17:39:20.653296 systemd-logind[1479]: Removed session 10. Dec 12 17:39:20.654838 systemd[1]: Started sshd@10-10.0.0.95:22-10.0.0.1:34322.service - OpenSSH per-connection server daemon (10.0.0.1:34322). Dec 12 17:39:20.715853 sshd[4852]: Accepted publickey for core from 10.0.0.1 port 34322 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:39:20.716652 sshd-session[4852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:39:20.720752 systemd-logind[1479]: New session 11 of user core. Dec 12 17:39:20.735272 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 17:39:20.917151 sshd[4855]: Connection closed by 10.0.0.1 port 34322 Dec 12 17:39:20.918269 sshd-session[4852]: pam_unix(sshd:session): session closed for user core Dec 12 17:39:20.929474 systemd[1]: sshd@10-10.0.0.95:22-10.0.0.1:34322.service: Deactivated successfully. Dec 12 17:39:20.935367 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 17:39:20.937337 systemd-logind[1479]: Session 11 logged out. Waiting for processes to exit. Dec 12 17:39:20.942401 systemd[1]: Started sshd@11-10.0.0.95:22-10.0.0.1:45870.service - OpenSSH per-connection server daemon (10.0.0.1:45870). Dec 12 17:39:20.946430 systemd-logind[1479]: Removed session 11. Dec 12 17:39:21.011074 sshd[4867]: Accepted publickey for core from 10.0.0.1 port 45870 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:39:21.012749 sshd-session[4867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:39:21.018128 systemd-logind[1479]: New session 12 of user core. Dec 12 17:39:21.026498 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 12 17:39:21.182409 sshd[4870]: Connection closed by 10.0.0.1 port 45870 Dec 12 17:39:21.182751 sshd-session[4867]: pam_unix(sshd:session): session closed for user core Dec 12 17:39:21.186366 systemd[1]: sshd@11-10.0.0.95:22-10.0.0.1:45870.service: Deactivated successfully. Dec 12 17:39:21.189810 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 17:39:21.194265 systemd-logind[1479]: Session 12 logged out. Waiting for processes to exit. Dec 12 17:39:21.195483 systemd-logind[1479]: Removed session 12. Dec 12 17:39:26.198667 systemd[1]: Started sshd@12-10.0.0.95:22-10.0.0.1:45900.service - OpenSSH per-connection server daemon (10.0.0.1:45900). Dec 12 17:39:26.279187 sshd[4891]: Accepted publickey for core from 10.0.0.1 port 45900 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:39:26.280636 sshd-session[4891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:39:26.284460 systemd-logind[1479]: New session 13 of user core. Dec 12 17:39:26.295232 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 17:39:26.386081 containerd[1496]: time="2025-12-12T17:39:26.384158032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:39:26.427161 sshd[4894]: Connection closed by 10.0.0.1 port 45900 Dec 12 17:39:26.427525 sshd-session[4891]: pam_unix(sshd:session): session closed for user core Dec 12 17:39:26.430960 systemd[1]: sshd@12-10.0.0.95:22-10.0.0.1:45900.service: Deactivated successfully. Dec 12 17:39:26.433408 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 17:39:26.434388 systemd-logind[1479]: Session 13 logged out. Waiting for processes to exit. Dec 12 17:39:26.436410 systemd-logind[1479]: Removed session 13. Dec 12 17:39:26.601017 containerd[1496]: time="2025-12-12T17:39:26.600958451Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:26.601982 containerd[1496]: time="2025-12-12T17:39:26.601931270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:39:26.602073 containerd[1496]: time="2025-12-12T17:39:26.602016191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:39:26.602383 kubelet[2648]: E1212 17:39:26.602148 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:39:26.603514 kubelet[2648]: E1212 17:39:26.602394 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:39:26.603514 kubelet[2648]: E1212 17:39:26.602620 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nc7t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-776c8864b9-qgx9p_calico-apiserver(d4983885-2ae5-436d-9a3a-ccebc1e24705): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:26.603713 containerd[1496]: time="2025-12-12T17:39:26.602691564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:39:26.603867 kubelet[2648]: E1212 17:39:26.603787 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776c8864b9-qgx9p" podUID="d4983885-2ae5-436d-9a3a-ccebc1e24705" Dec 12 17:39:26.807822 containerd[1496]: time="2025-12-12T17:39:26.807763795Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:26.808751 containerd[1496]: time="2025-12-12T17:39:26.808688493Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:39:26.808751 containerd[1496]: time="2025-12-12T17:39:26.808732134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 17:39:26.808945 kubelet[2648]: E1212 17:39:26.808905 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:39:26.809017 kubelet[2648]: E1212 17:39:26.808955 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:39:26.809166 kubelet[2648]: E1212 17:39:26.809108 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k6xgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6b4f66ff58-m8mvt_calico-system(606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:26.810600 kubelet[2648]: E1212 17:39:26.810565 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4f66ff58-m8mvt" podUID="606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2" Dec 12 17:39:27.387936 containerd[1496]: time="2025-12-12T17:39:27.387893348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:39:27.578553 containerd[1496]: time="2025-12-12T17:39:27.578465608Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:27.582203 containerd[1496]: time="2025-12-12T17:39:27.582161839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:39:27.582306 containerd[1496]: time="2025-12-12T17:39:27.582175600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 17:39:27.582435 kubelet[2648]: E1212 17:39:27.582373 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:39:27.582481 kubelet[2648]: E1212 17:39:27.582448 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:39:27.582610 kubelet[2648]: E1212 17:39:27.582573 2648 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwx5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-94p94_calico-system(b5dca236-19dc-432c-971f-14a30f71196b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:27.585390 containerd[1496]: time="2025-12-12T17:39:27.585341580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:39:27.780442 containerd[1496]: time="2025-12-12T17:39:27.780301045Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:27.781764 containerd[1496]: time="2025-12-12T17:39:27.781702792Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:39:27.781836 containerd[1496]: time="2025-12-12T17:39:27.781783794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 17:39:27.781994 kubelet[2648]: E1212 17:39:27.781952 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:39:27.782376 kubelet[2648]: E1212 17:39:27.782005 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:39:27.782376 kubelet[2648]: E1212 17:39:27.782149 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwx5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-94p94_calico-system(b5dca236-19dc-432c-971f-14a30f71196b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:27.783375 kubelet[2648]: E1212 17:39:27.783318 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-94p94" podUID="b5dca236-19dc-432c-971f-14a30f71196b" Dec 12 17:39:28.383777 containerd[1496]: time="2025-12-12T17:39:28.383728146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:39:28.620740 containerd[1496]: time="2025-12-12T17:39:28.620670162Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:28.622637 containerd[1496]: time="2025-12-12T17:39:28.621580219Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:39:28.622637 containerd[1496]: time="2025-12-12T17:39:28.621624100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 17:39:28.622691 kubelet[2648]: E1212 17:39:28.621781 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:39:28.622691 kubelet[2648]: E1212 17:39:28.621825 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:39:28.622691 kubelet[2648]: E1212 17:39:28.621976 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcjg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p5kw2_calico-system(68d76035-3b6d-409d-b705-88ad3c12dd12): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:28.629168 kubelet[2648]: E1212 17:39:28.623250 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p5kw2" podUID="68d76035-3b6d-409d-b705-88ad3c12dd12" Dec 12 17:39:31.387506 containerd[1496]: 
time="2025-12-12T17:39:31.387270679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:39:31.447134 systemd[1]: Started sshd@13-10.0.0.95:22-10.0.0.1:60998.service - OpenSSH per-connection server daemon (10.0.0.1:60998). Dec 12 17:39:31.516466 sshd[4917]: Accepted publickey for core from 10.0.0.1 port 60998 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:39:31.517888 sshd-session[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:39:31.521898 systemd-logind[1479]: New session 14 of user core. Dec 12 17:39:31.544546 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 17:39:31.594428 containerd[1496]: time="2025-12-12T17:39:31.594317517Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:31.595600 containerd[1496]: time="2025-12-12T17:39:31.595552540Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:39:31.595781 containerd[1496]: time="2025-12-12T17:39:31.595648662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:39:31.596648 kubelet[2648]: E1212 17:39:31.596413 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:39:31.596648 kubelet[2648]: E1212 17:39:31.596478 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:39:31.597410 kubelet[2648]: E1212 17:39:31.597005 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6shff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-776c8864b9-2npcx_calico-apiserver(29fa9e34-8cce-4225-b8c3-b537c398886e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:31.598648 kubelet[2648]: E1212 17:39:31.598621 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776c8864b9-2npcx" podUID="29fa9e34-8cce-4225-b8c3-b537c398886e" Dec 12 17:39:31.701445 sshd[4920]: Connection closed by 10.0.0.1 port 60998 Dec 12 17:39:31.701719 sshd-session[4917]: pam_unix(sshd:session): session closed for user core Dec 12 17:39:31.707193 systemd[1]: sshd@13-10.0.0.95:22-10.0.0.1:60998.service: Deactivated successfully. Dec 12 17:39:31.709210 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 17:39:31.710120 systemd-logind[1479]: Session 14 logged out. Waiting for processes to exit. Dec 12 17:39:31.711899 systemd-logind[1479]: Removed session 14. Dec 12 17:39:33.387881 kubelet[2648]: E1212 17:39:33.387702 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86d56bdcb5-fxbvd" podUID="6cc7e8ab-0717-4a7a-9026-81374c9aefc3" Dec 12 17:39:36.714146 systemd[1]: Started sshd@14-10.0.0.95:22-10.0.0.1:32788.service - OpenSSH per-connection server daemon (10.0.0.1:32788). 
Dec 12 17:39:36.781672 sshd[4939]: Accepted publickey for core from 10.0.0.1 port 32788 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:39:36.783670 sshd-session[4939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:39:36.788929 systemd-logind[1479]: New session 15 of user core. Dec 12 17:39:36.798255 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 17:39:36.963990 sshd[4942]: Connection closed by 10.0.0.1 port 32788 Dec 12 17:39:36.964543 sshd-session[4939]: pam_unix(sshd:session): session closed for user core Dec 12 17:39:36.968883 systemd[1]: sshd@14-10.0.0.95:22-10.0.0.1:32788.service: Deactivated successfully. Dec 12 17:39:36.970759 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 17:39:36.973201 systemd-logind[1479]: Session 15 logged out. Waiting for processes to exit. Dec 12 17:39:36.976818 systemd-logind[1479]: Removed session 15. Dec 12 17:39:39.386224 kubelet[2648]: E1212 17:39:39.386173 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-94p94" podUID="b5dca236-19dc-432c-971f-14a30f71196b" Dec 12 17:39:41.393163 kubelet[2648]: E1212 17:39:41.393108 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776c8864b9-qgx9p" podUID="d4983885-2ae5-436d-9a3a-ccebc1e24705" Dec 12 17:39:41.978701 systemd[1]: Started sshd@15-10.0.0.95:22-10.0.0.1:42278.service - OpenSSH per-connection server daemon (10.0.0.1:42278). Dec 12 17:39:42.043419 sshd[4958]: Accepted publickey for core from 10.0.0.1 port 42278 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:39:42.046564 sshd-session[4958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:39:42.051805 systemd-logind[1479]: New session 16 of user core. Dec 12 17:39:42.063307 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 17:39:42.196159 sshd[4961]: Connection closed by 10.0.0.1 port 42278 Dec 12 17:39:42.196507 sshd-session[4958]: pam_unix(sshd:session): session closed for user core Dec 12 17:39:42.207957 systemd[1]: sshd@15-10.0.0.95:22-10.0.0.1:42278.service: Deactivated successfully. 
Dec 12 17:39:42.212262 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 17:39:42.213342 systemd-logind[1479]: Session 16 logged out. Waiting for processes to exit. Dec 12 17:39:42.216298 systemd[1]: Started sshd@16-10.0.0.95:22-10.0.0.1:42280.service - OpenSSH per-connection server daemon (10.0.0.1:42280). Dec 12 17:39:42.217044 systemd-logind[1479]: Removed session 16. Dec 12 17:39:42.274134 sshd[4974]: Accepted publickey for core from 10.0.0.1 port 42280 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:39:42.275652 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:39:42.280245 systemd-logind[1479]: New session 17 of user core. Dec 12 17:39:42.287251 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 17:39:42.384132 kubelet[2648]: E1212 17:39:42.384046 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4f66ff58-m8mvt" podUID="606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2" Dec 12 17:39:42.384993 kubelet[2648]: E1212 17:39:42.384632 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p5kw2" podUID="68d76035-3b6d-409d-b705-88ad3c12dd12" Dec 12 17:39:42.519126 sshd[4978]: Connection closed by 10.0.0.1 port 42280 Dec 12 17:39:42.520206 sshd-session[4974]: pam_unix(sshd:session): session closed for user core Dec 12 17:39:42.527693 systemd[1]: sshd@16-10.0.0.95:22-10.0.0.1:42280.service: Deactivated successfully. Dec 12 17:39:42.530868 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 17:39:42.531776 systemd-logind[1479]: Session 17 logged out. Waiting for processes to exit. Dec 12 17:39:42.534474 systemd[1]: Started sshd@17-10.0.0.95:22-10.0.0.1:42284.service - OpenSSH per-connection server daemon (10.0.0.1:42284). Dec 12 17:39:42.536356 systemd-logind[1479]: Removed session 17. Dec 12 17:39:42.605108 sshd[4990]: Accepted publickey for core from 10.0.0.1 port 42284 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:39:42.606806 sshd-session[4990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:39:42.612202 systemd-logind[1479]: New session 18 of user core. Dec 12 17:39:42.624294 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 12 17:39:43.184646 sshd[4993]: Connection closed by 10.0.0.1 port 42284 Dec 12 17:39:43.185291 sshd-session[4990]: pam_unix(sshd:session): session closed for user core Dec 12 17:39:43.197599 systemd[1]: sshd@17-10.0.0.95:22-10.0.0.1:42284.service: Deactivated successfully. 
Dec 12 17:39:43.200152 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 17:39:43.203178 systemd-logind[1479]: Session 18 logged out. Waiting for processes to exit. Dec 12 17:39:43.208207 systemd[1]: Started sshd@18-10.0.0.95:22-10.0.0.1:42286.service - OpenSSH per-connection server daemon (10.0.0.1:42286). Dec 12 17:39:43.209610 systemd-logind[1479]: Removed session 18. Dec 12 17:39:43.266691 sshd[5010]: Accepted publickey for core from 10.0.0.1 port 42286 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:39:43.268177 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:39:43.272339 systemd-logind[1479]: New session 19 of user core. Dec 12 17:39:43.280294 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 12 17:39:43.572156 sshd[5014]: Connection closed by 10.0.0.1 port 42286 Dec 12 17:39:43.573799 sshd-session[5010]: pam_unix(sshd:session): session closed for user core Dec 12 17:39:43.585010 systemd[1]: sshd@18-10.0.0.95:22-10.0.0.1:42286.service: Deactivated successfully. Dec 12 17:39:43.587830 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 17:39:43.589462 systemd-logind[1479]: Session 19 logged out. Waiting for processes to exit. Dec 12 17:39:43.594195 systemd[1]: Started sshd@19-10.0.0.95:22-10.0.0.1:42292.service - OpenSSH per-connection server daemon (10.0.0.1:42292). Dec 12 17:39:43.595592 systemd-logind[1479]: Removed session 19. Dec 12 17:39:43.658438 sshd[5026]: Accepted publickey for core from 10.0.0.1 port 42292 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:39:43.661394 sshd-session[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:39:43.665615 systemd-logind[1479]: New session 20 of user core. Dec 12 17:39:43.674291 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 17:39:43.826480 sshd[5029]: Connection closed by 10.0.0.1 port 42292 Dec 12 17:39:43.826829 sshd-session[5026]: pam_unix(sshd:session): session closed for user core Dec 12 17:39:43.832568 systemd-logind[1479]: Session 20 logged out. Waiting for processes to exit. Dec 12 17:39:43.832725 systemd[1]: sshd@19-10.0.0.95:22-10.0.0.1:42292.service: Deactivated successfully. Dec 12 17:39:43.835782 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 17:39:43.837053 systemd-logind[1479]: Removed session 20. 
Dec 12 17:39:46.383697 kubelet[2648]: E1212 17:39:46.383329 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776c8864b9-2npcx" podUID="29fa9e34-8cce-4225-b8c3-b537c398886e" Dec 12 17:39:48.384855 containerd[1496]: time="2025-12-12T17:39:48.384785049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:39:48.603234 containerd[1496]: time="2025-12-12T17:39:48.603176100Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:48.605686 containerd[1496]: time="2025-12-12T17:39:48.605644725Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:39:48.605766 containerd[1496]: time="2025-12-12T17:39:48.605727764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 17:39:48.605932 kubelet[2648]: E1212 17:39:48.605896 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:39:48.606493 kubelet[2648]: E1212 17:39:48.606275 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:39:48.606493 kubelet[2648]: E1212 17:39:48.606440 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d699d3c6b33c4ecc88ced3536dfbb1bf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l9xmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86d56bdcb5-fxbvd_calico-system(6cc7e8ab-0717-4a7a-9026-81374c9aefc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:48.609617 containerd[1496]: time="2025-12-12T17:39:48.609573540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:39:48.822825 containerd[1496]: time="2025-12-12T17:39:48.822787424Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:48.823801 containerd[1496]: time="2025-12-12T17:39:48.823743778Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:39:48.823875 containerd[1496]: time="2025-12-12T17:39:48.823791217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 17:39:48.824032 kubelet[2648]: E1212 17:39:48.823982 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:39:48.824145 kubelet[2648]: E1212 17:39:48.824129 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:39:48.824377 kubelet[2648]: E1212 17:39:48.824312 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9xmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86d56bdcb5-fxbvd_calico-system(6cc7e8ab-0717-4a7a-9026-81374c9aefc3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:48.825691 kubelet[2648]: E1212 17:39:48.825632 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86d56bdcb5-fxbvd" podUID="6cc7e8ab-0717-4a7a-9026-81374c9aefc3" Dec 12 17:39:48.842973 systemd[1]: Started sshd@20-10.0.0.95:22-10.0.0.1:42352.service - OpenSSH per-connection server daemon (10.0.0.1:42352). 
Dec 12 17:39:48.905853 sshd[5077]: Accepted publickey for core from 10.0.0.1 port 42352 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:39:48.907259 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:39:48.911045 systemd-logind[1479]: New session 21 of user core. Dec 12 17:39:48.922244 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 12 17:39:49.052976 sshd[5080]: Connection closed by 10.0.0.1 port 42352 Dec 12 17:39:49.053595 sshd-session[5077]: pam_unix(sshd:session): session closed for user core Dec 12 17:39:49.057567 systemd[1]: sshd@20-10.0.0.95:22-10.0.0.1:42352.service: Deactivated successfully. Dec 12 17:39:49.059363 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 17:39:49.061622 systemd-logind[1479]: Session 21 logged out. Waiting for processes to exit. Dec 12 17:39:49.062901 systemd-logind[1479]: Removed session 21. Dec 12 17:39:53.386399 containerd[1496]: time="2025-12-12T17:39:53.386336373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:39:53.595527 containerd[1496]: time="2025-12-12T17:39:53.595466652Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:53.596602 containerd[1496]: time="2025-12-12T17:39:53.596556448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 17:39:53.596640 containerd[1496]: time="2025-12-12T17:39:53.596561768Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:39:53.596937 kubelet[2648]: E1212 17:39:53.596892 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:39:53.597409 kubelet[2648]: E1212 17:39:53.596960 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:39:53.597409 kubelet[2648]: E1212 17:39:53.597109 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwx5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-94p94_calico-system(b5dca236-19dc-432c-971f-14a30f71196b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:53.599037 containerd[1496]: time="2025-12-12T17:39:53.599003879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:39:53.812778 containerd[1496]: time="2025-12-12T17:39:53.812658500Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:53.814460 containerd[1496]: time="2025-12-12T17:39:53.814419614Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:39:53.814527 containerd[1496]: time="2025-12-12T17:39:53.814509734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 17:39:53.814764 kubelet[2648]: E1212 17:39:53.814700 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:39:53.814822 kubelet[2648]: E1212 17:39:53.814771 2648 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:39:53.815265 kubelet[2648]: E1212 17:39:53.814903 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwx5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-94p94_calico-system(b5dca236-19dc-432c-971f-14a30f71196b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:53.816245 kubelet[2648]: E1212 17:39:53.816160 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-94p94" podUID="b5dca236-19dc-432c-971f-14a30f71196b" Dec 12 17:39:54.068689 systemd[1]: Started sshd@21-10.0.0.95:22-10.0.0.1:47972.service - OpenSSH per-connection server daemon (10.0.0.1:47972). Dec 12 17:39:54.123221 sshd[5094]: Accepted publickey for core from 10.0.0.1 port 47972 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:39:54.125453 sshd-session[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:39:54.132996 systemd-logind[1479]: New session 22 of user core. Dec 12 17:39:54.144300 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 12 17:39:54.318353 sshd[5097]: Connection closed by 10.0.0.1 port 47972 Dec 12 17:39:54.319515 sshd-session[5094]: pam_unix(sshd:session): session closed for user core Dec 12 17:39:54.325919 systemd[1]: sshd@21-10.0.0.95:22-10.0.0.1:47972.service: Deactivated successfully. Dec 12 17:39:54.330138 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 17:39:54.331482 systemd-logind[1479]: Session 22 logged out. Waiting for processes to exit. Dec 12 17:39:54.332801 systemd-logind[1479]: Removed session 22. Dec 12 17:39:54.384176 containerd[1496]: time="2025-12-12T17:39:54.383997848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:39:54.606102 containerd[1496]: time="2025-12-12T17:39:54.605930349Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:54.607542 containerd[1496]: time="2025-12-12T17:39:54.607504584Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:39:54.607612 containerd[1496]: time="2025-12-12T17:39:54.607570224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:39:54.607735 kubelet[2648]: E1212 17:39:54.607698 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:39:54.607968 kubelet[2648]: E1212 17:39:54.607767 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:39:54.607968 kubelet[2648]: E1212 17:39:54.607909 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nc7t8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-776c8864b9-qgx9p_calico-apiserver(d4983885-2ae5-436d-9a3a-ccebc1e24705): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:54.610642 kubelet[2648]: E1212 17:39:54.609355 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776c8864b9-qgx9p" podUID="d4983885-2ae5-436d-9a3a-ccebc1e24705" Dec 12 17:39:55.385292 containerd[1496]: time="2025-12-12T17:39:55.384967438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:39:55.556709 containerd[1496]: time="2025-12-12T17:39:55.556601340Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:55.557566 containerd[1496]: time="2025-12-12T17:39:55.557530097Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:39:55.557636 
containerd[1496]: time="2025-12-12T17:39:55.557553257Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 17:39:55.557787 kubelet[2648]: E1212 17:39:55.557750 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:39:55.557861 kubelet[2648]: E1212 17:39:55.557802 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:39:55.558023 kubelet[2648]: E1212 17:39:55.557936 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k6xgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6b4f66ff58-m8mvt_calico-system(606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:39:55.559200 kubelet[2648]: E1212 17:39:55.559159 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6b4f66ff58-m8mvt" podUID="606aaa1a-2e48-43ef-9bd5-c2c13f9aa6a2"
Dec 12 17:39:56.384746 containerd[1496]: time="2025-12-12T17:39:56.384477826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 12 17:39:56.581246 containerd[1496]: time="2025-12-12T17:39:56.581178831Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:39:56.582253 containerd[1496]: time="2025-12-12T17:39:56.582216069Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 12 17:39:56.582328 containerd[1496]: time="2025-12-12T17:39:56.582297309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 12 17:39:56.582488 kubelet[2648]: E1212 17:39:56.582452 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 17:39:56.582753 kubelet[2648]: E1212 17:39:56.582504 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 17:39:56.582753 kubelet[2648]: E1212 17:39:56.582632 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcjg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p5kw2_calico-system(68d76035-3b6d-409d-b705-88ad3c12dd12): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:39:56.583813 kubelet[2648]: E1212 17:39:56.583784 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p5kw2" podUID="68d76035-3b6d-409d-b705-88ad3c12dd12"
Dec 12 17:39:59.333049 systemd[1]: Started sshd@22-10.0.0.95:22-10.0.0.1:47982.service - OpenSSH per-connection server daemon (10.0.0.1:47982).
Dec 12 17:39:59.387186 containerd[1496]: time="2025-12-12T17:39:59.387119989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Dec 12 17:39:59.405522 sshd[5112]: Accepted publickey for core from 10.0.0.1 port 47982 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:39:59.407685 sshd-session[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:39:59.413207 systemd-logind[1479]: New session 23 of user core.
Dec 12 17:39:59.421490 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 12 17:39:59.586540 sshd[5115]: Connection closed by 10.0.0.1 port 47982
Dec 12 17:39:59.587226 sshd-session[5112]: pam_unix(sshd:session): session closed for user core
Dec 12 17:39:59.591615 systemd[1]: sshd@22-10.0.0.95:22-10.0.0.1:47982.service: Deactivated successfully.
Dec 12 17:39:59.593570 systemd[1]: session-23.scope: Deactivated successfully.
Dec 12 17:39:59.594286 systemd-logind[1479]: Session 23 logged out. Waiting for processes to exit.
Dec 12 17:39:59.595802 systemd-logind[1479]: Removed session 23.
Dec 12 17:39:59.599769 containerd[1496]: time="2025-12-12T17:39:59.599720957Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:39:59.600899 containerd[1496]: time="2025-12-12T17:39:59.600749076Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Dec 12 17:39:59.600899 containerd[1496]: time="2025-12-12T17:39:59.600813236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Dec 12 17:39:59.601043 kubelet[2648]: E1212 17:39:59.601000 2648 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 17:39:59.601355 kubelet[2648]: E1212 17:39:59.601050 2648 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Dec 12 17:39:59.601355 kubelet[2648]: E1212 17:39:59.601198 2648 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6shff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-776c8864b9-2npcx_calico-apiserver(29fa9e34-8cce-4225-b8c3-b537c398886e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:39:59.602408 kubelet[2648]: E1212 17:39:59.602372 2648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-776c8864b9-2npcx" podUID="29fa9e34-8cce-4225-b8c3-b537c398886e"
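The pull failures above all have the same shape: containerd's resolver gets a 404 Not Found from ghcr.io for the v3.30.4 tag of flatcar/calico/kube-controllers, flatcar/calico/goldmane, and flatcar/calico/apiserver, kubelet surfaces each as ErrImagePull, and pod_workers logs "Error syncing pod, skipping" for the affected calico-system and calico-apiserver pods. A minimal sketch for reproducing the 404 off the node, assuming anonymous pull access and the standard Docker Registry v2 token flow (which ghcr.io implements); the repository names and tag are taken verbatim from the log, and this script is not part of the original journal:

# Hedged sketch: ask ghcr.io's registry API for the same manifests that
# containerd failed to resolve in the log above.
import requests

REPOS = [
    "flatcar/calico/kube-controllers",
    "flatcar/calico/goldmane",
    "flatcar/calico/apiserver",
]
TAG = "v3.30.4"  # the tag containerd reports as "not found"

for repo in REPOS:
    # Anonymous pulls of public GHCR images still require a bearer token.
    token = requests.get(
        "https://ghcr.io/token",
        params={"service": "ghcr.io", "scope": f"repository:{repo}:pull"},
    ).json()["token"]
    resp = requests.get(
        f"https://ghcr.io/v2/{repo}/manifests/{TAG}",
        headers={
            "Authorization": f"Bearer {token}",
            # Accept both OCI and Docker manifest-list media types.
            "Accept": ", ".join([
                "application/vnd.oci.image.index.v1+json",
                "application/vnd.docker.distribution.manifest.list.v2+json",
            ]),
        },
    )
    # 404 matches the "failed to resolve reference ... not found" errors;
    # 200 would instead point at a node-side resolution problem.
    print(repo, resp.status_code)

A 404 from this check confirms the tag is absent from the registry rather than a node-local containerd issue. Since the container specs in the log use ImagePullPolicy:IfNotPresent, side-loading the images into containerd's k8s.io namespace on the node would presumably also let the pods start once image archives are available, but that is an assumption about the operator's intent, not something the log itself shows.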