Dec 12 17:47:28.788487 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 12 17:47:28.788513 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 12 17:47:28.788523 kernel: KASLR enabled
Dec 12 17:47:28.788528 kernel: efi: EFI v2.7 by EDK II
Dec 12 17:47:28.788533 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Dec 12 17:47:28.788539 kernel: random: crng init done
Dec 12 17:47:28.788545 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Dec 12 17:47:28.788551 kernel: secureboot: Secure boot enabled
Dec 12 17:47:28.788556 kernel: ACPI: Early table checksum verification disabled
Dec 12 17:47:28.788563 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Dec 12 17:47:28.788569 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 12 17:47:28.788575 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:47:28.788580 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:47:28.788586 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:47:28.788593 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:47:28.788600 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:47:28.788606 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:47:28.788612 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:47:28.788618 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:47:28.788624 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:47:28.788630 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 12 17:47:28.788636 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 12 17:47:28.788642 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:47:28.788648 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Dec 12 17:47:28.788654 kernel: Zone ranges:
Dec 12 17:47:28.788661 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:47:28.788667 kernel: DMA32 empty
Dec 12 17:47:28.788672 kernel: Normal empty
Dec 12 17:47:28.788678 kernel: Device empty
Dec 12 17:47:28.788684 kernel: Movable zone start for each node
Dec 12 17:47:28.788697 kernel: Early memory node ranges
Dec 12 17:47:28.788703 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Dec 12 17:47:28.788717 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Dec 12 17:47:28.788724 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Dec 12 17:47:28.788730 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Dec 12 17:47:28.788736 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Dec 12 17:47:28.788742 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Dec 12 17:47:28.788749 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Dec 12 17:47:28.788755 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Dec 12 17:47:28.788761 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 12 17:47:28.788770 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:47:28.788776 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 12 17:47:28.788782 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Dec 12 17:47:28.788789 kernel: psci: probing for conduit method from ACPI.
Dec 12 17:47:28.788796 kernel: psci: PSCIv1.1 detected in firmware.
Dec 12 17:47:28.788803 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 12 17:47:28.788809 kernel: psci: Trusted OS migration not required
Dec 12 17:47:28.788815 kernel: psci: SMC Calling Convention v1.1
Dec 12 17:47:28.788825 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 12 17:47:28.788833 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 12 17:47:28.788840 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 12 17:47:28.788846 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 12 17:47:28.788852 kernel: Detected PIPT I-cache on CPU0
Dec 12 17:47:28.788864 kernel: CPU features: detected: GIC system register CPU interface
Dec 12 17:47:28.788871 kernel: CPU features: detected: Spectre-v4
Dec 12 17:47:28.788877 kernel: CPU features: detected: Spectre-BHB
Dec 12 17:47:28.788884 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 12 17:47:28.788890 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 12 17:47:28.788896 kernel: CPU features: detected: ARM erratum 1418040
Dec 12 17:47:28.788902 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 12 17:47:28.788908 kernel: alternatives: applying boot alternatives
Dec 12 17:47:28.788916 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:47:28.788923 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 17:47:28.788929 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 17:47:28.788937 kernel: Fallback order for Node 0: 0
Dec 12 17:47:28.788943 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 12 17:47:28.788949 kernel: Policy zone: DMA
Dec 12 17:47:28.788956 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 17:47:28.788962 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 12 17:47:28.788968 kernel: software IO TLB: area num 4.
Dec 12 17:47:28.788974 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 12 17:47:28.788981 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Dec 12 17:47:28.788987 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 12 17:47:28.788993 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 17:47:28.789000 kernel: rcu: RCU event tracing is enabled.
Dec 12 17:47:28.789007 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 12 17:47:28.789014 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 17:47:28.789021 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 17:47:28.789027 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 17:47:28.789033 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 12 17:47:28.789040 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:47:28.789046 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:47:28.789053 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 12 17:47:28.789059 kernel: GICv3: 256 SPIs implemented
Dec 12 17:47:28.789065 kernel: GICv3: 0 Extended SPIs implemented
Dec 12 17:47:28.789071 kernel: Root IRQ handler: gic_handle_irq
Dec 12 17:47:28.789078 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 12 17:47:28.789084 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 12 17:47:28.789091 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 12 17:47:28.789098 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 12 17:47:28.789104 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Dec 12 17:47:28.789111 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Dec 12 17:47:28.789117 kernel: GICv3: using LPI property table @0x0000000040130000
Dec 12 17:47:28.789124 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Dec 12 17:47:28.789130 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 17:47:28.789136 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:47:28.789143 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 12 17:47:28.789149 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 12 17:47:28.789155 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 12 17:47:28.789163 kernel: arm-pv: using stolen time PV
Dec 12 17:47:28.789170 kernel: Console: colour dummy device 80x25
Dec 12 17:47:28.789176 kernel: ACPI: Core revision 20240827
Dec 12 17:47:28.789183 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 12 17:47:28.789189 kernel: pid_max: default: 32768 minimum: 301
Dec 12 17:47:28.789196 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 17:47:28.789202 kernel: landlock: Up and running.
Dec 12 17:47:28.789208 kernel: SELinux: Initializing.
Dec 12 17:47:28.789215 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:47:28.789222 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:47:28.789229 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 17:47:28.789235 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 17:47:28.789242 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 17:47:28.789248 kernel: Remapping and enabling EFI services.
Dec 12 17:47:28.789255 kernel: smp: Bringing up secondary CPUs ...
Dec 12 17:47:28.789261 kernel: Detected PIPT I-cache on CPU1
Dec 12 17:47:28.789268 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 12 17:47:28.789274 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Dec 12 17:47:28.789282 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:47:28.789292 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 12 17:47:28.789299 kernel: Detected PIPT I-cache on CPU2
Dec 12 17:47:28.789307 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 12 17:47:28.789314 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Dec 12 17:47:28.789321 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:47:28.789328 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 12 17:47:28.789334 kernel: Detected PIPT I-cache on CPU3
Dec 12 17:47:28.789343 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 12 17:47:28.789350 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Dec 12 17:47:28.789356 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:47:28.789363 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 12 17:47:28.789370 kernel: smp: Brought up 1 node, 4 CPUs
Dec 12 17:47:28.789376 kernel: SMP: Total of 4 processors activated.
Dec 12 17:47:28.789383 kernel: CPU: All CPU(s) started at EL1
Dec 12 17:47:28.789390 kernel: CPU features: detected: 32-bit EL0 Support
Dec 12 17:47:28.789397 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 12 17:47:28.789404 kernel: CPU features: detected: Common not Private translations
Dec 12 17:47:28.789412 kernel: CPU features: detected: CRC32 instructions
Dec 12 17:47:28.789418 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 12 17:47:28.789425 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 12 17:47:28.789432 kernel: CPU features: detected: LSE atomic instructions
Dec 12 17:47:28.789439 kernel: CPU features: detected: Privileged Access Never
Dec 12 17:47:28.789445 kernel: CPU features: detected: RAS Extension Support
Dec 12 17:47:28.789452 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 12 17:47:28.789459 kernel: alternatives: applying system-wide alternatives
Dec 12 17:47:28.789466 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 12 17:47:28.789474 kernel: Memory: 2421668K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 128284K reserved, 16384K cma-reserved)
Dec 12 17:47:28.789481 kernel: devtmpfs: initialized
Dec 12 17:47:28.789488 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 17:47:28.789495 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 12 17:47:28.789502 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 12 17:47:28.789509 kernel: 0 pages in range for non-PLT usage
Dec 12 17:47:28.789515 kernel: 508400 pages in range for PLT usage
Dec 12 17:47:28.789522 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 17:47:28.789529 kernel: SMBIOS 3.0.0 present.
Dec 12 17:47:28.789537 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 12 17:47:28.789544 kernel: DMI: Memory slots populated: 1/1
Dec 12 17:47:28.789550 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 17:47:28.789557 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 12 17:47:28.789569 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 12 17:47:28.789576 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 12 17:47:28.789583 kernel: audit: initializing netlink subsys (disabled)
Dec 12 17:47:28.789590 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
Dec 12 17:47:28.789597 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 17:47:28.789605 kernel: cpuidle: using governor menu
Dec 12 17:47:28.789612 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 12 17:47:28.789619 kernel: ASID allocator initialised with 32768 entries
Dec 12 17:47:28.789626 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 17:47:28.789632 kernel: Serial: AMBA PL011 UART driver
Dec 12 17:47:28.789640 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 17:47:28.789646 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 17:47:28.789653 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 12 17:47:28.789660 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 12 17:47:28.789668 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 17:47:28.789675 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 17:47:28.789682 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 12 17:47:28.789694 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 12 17:47:28.789701 kernel: ACPI: Added _OSI(Module Device)
Dec 12 17:47:28.789711 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 17:47:28.789718 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 17:47:28.789725 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 17:47:28.789732 kernel: ACPI: Interpreter enabled
Dec 12 17:47:28.789741 kernel: ACPI: Using GIC for interrupt routing
Dec 12 17:47:28.789748 kernel: ACPI: MCFG table detected, 1 entries
Dec 12 17:47:28.789755 kernel: ACPI: CPU0 has been hot-added
Dec 12 17:47:28.789761 kernel: ACPI: CPU1 has been hot-added
Dec 12 17:47:28.789768 kernel: ACPI: CPU2 has been hot-added
Dec 12 17:47:28.789775 kernel: ACPI: CPU3 has been hot-added
Dec 12 17:47:28.789782 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 12 17:47:28.789789 kernel: printk: legacy console [ttyAMA0] enabled
Dec 12 17:47:28.789796 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 17:47:28.789923 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 17:47:28.789991 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 12 17:47:28.790051 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 12 17:47:28.790108 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 12 17:47:28.790164 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 12 17:47:28.790173 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 12 17:47:28.790180 kernel: PCI host bridge to bus 0000:00
Dec 12 17:47:28.790246 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 12 17:47:28.790300 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 12 17:47:28.790352 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 12 17:47:28.790404 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 17:47:28.790485 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 12 17:47:28.790555 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 12 17:47:28.790620 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Dec 12 17:47:28.790679 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Dec 12 17:47:28.790765 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 12 17:47:28.790827 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 12 17:47:28.790888 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Dec 12 17:47:28.790949 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Dec 12 17:47:28.791010 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 12 17:47:28.791064 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 12 17:47:28.791116 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 12 17:47:28.791126 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 12 17:47:28.791133 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 12 17:47:28.791140 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 12 17:47:28.791147 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 12 17:47:28.791154 kernel: iommu: Default domain type: Translated
Dec 12 17:47:28.791161 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 12 17:47:28.791167 kernel: efivars: Registered efivars operations
Dec 12 17:47:28.791176 kernel: vgaarb: loaded
Dec 12 17:47:28.791183 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 12 17:47:28.791190 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 17:47:28.791197 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 17:47:28.791204 kernel: pnp: PnP ACPI init
Dec 12 17:47:28.791267 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 12 17:47:28.791277 kernel: pnp: PnP ACPI: found 1 devices
Dec 12 17:47:28.791283 kernel: NET: Registered PF_INET protocol family
Dec 12 17:47:28.791292 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 17:47:28.791299 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 17:47:28.791306 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 17:47:28.791313 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 17:47:28.791320 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 17:47:28.791327 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 17:47:28.791334 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:47:28.791341 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:47:28.791348 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 17:47:28.791355 kernel: PCI: CLS 0 bytes, default 64
Dec 12 17:47:28.791362 kernel: kvm [1]: HYP mode not available
Dec 12 17:47:28.791369 kernel: Initialise system trusted keyrings
Dec 12 17:47:28.791376 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 17:47:28.791383 kernel: Key type asymmetric registered
Dec 12 17:47:28.791389 kernel: Asymmetric key parser 'x509' registered
Dec 12 17:47:28.791396 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 12 17:47:28.791403 kernel: io scheduler mq-deadline registered
Dec 12 17:47:28.791410 kernel: io scheduler kyber registered
Dec 12 17:47:28.791418 kernel: io scheduler bfq registered
Dec 12 17:47:28.791425 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 12 17:47:28.791432 kernel: ACPI: button: Power Button [PWRB]
Dec 12 17:47:28.791439 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 12 17:47:28.791496 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 12 17:47:28.791505 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 17:47:28.791512 kernel: thunder_xcv, ver 1.0
Dec 12 17:47:28.791519 kernel: thunder_bgx, ver 1.0
Dec 12 17:47:28.791526 kernel: nicpf, ver 1.0
Dec 12 17:47:28.791534 kernel: nicvf, ver 1.0
Dec 12 17:47:28.791600 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 12 17:47:28.791655 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:47:28 UTC (1765561648)
Dec 12 17:47:28.791665 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 12 17:47:28.791672 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 12 17:47:28.791679 kernel: watchdog: NMI not fully supported
Dec 12 17:47:28.791686 kernel: watchdog: Hard watchdog permanently disabled
Dec 12 17:47:28.791701 kernel: NET: Registered PF_INET6 protocol family
Dec 12 17:47:28.791762 kernel: Segment Routing with IPv6
Dec 12 17:47:28.791770 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 17:47:28.791777 kernel: NET: Registered PF_PACKET protocol family
Dec 12 17:47:28.791784 kernel: Key type dns_resolver registered
Dec 12 17:47:28.791791 kernel: registered taskstats version 1
Dec 12 17:47:28.791798 kernel: Loading compiled-in X.509 certificates
Dec 12 17:47:28.791805 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 12 17:47:28.791812 kernel: Demotion targets for Node 0: null
Dec 12 17:47:28.791819 kernel: Key type .fscrypt registered
Dec 12 17:47:28.791828 kernel: Key type fscrypt-provisioning registered
Dec 12 17:47:28.791835 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 17:47:28.791842 kernel: ima: Allocated hash algorithm: sha1
Dec 12 17:47:28.791849 kernel: ima: No architecture policies found
Dec 12 17:47:28.791856 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 12 17:47:28.791863 kernel: clk: Disabling unused clocks
Dec 12 17:47:28.791870 kernel: PM: genpd: Disabling unused power domains
Dec 12 17:47:28.791876 kernel: Warning: unable to open an initial console.
Dec 12 17:47:28.791883 kernel: Freeing unused kernel memory: 39552K
Dec 12 17:47:28.791892 kernel: Run /init as init process
Dec 12 17:47:28.791898 kernel: with arguments:
Dec 12 17:47:28.791905 kernel: /init
Dec 12 17:47:28.791912 kernel: with environment:
Dec 12 17:47:28.791919 kernel: HOME=/
Dec 12 17:47:28.791925 kernel: TERM=linux
Dec 12 17:47:28.791933 systemd[1]: Successfully made /usr/ read-only.
Dec 12 17:47:28.791943 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 17:47:28.791953 systemd[1]: Detected virtualization kvm.
Dec 12 17:47:28.791960 systemd[1]: Detected architecture arm64.
Dec 12 17:47:28.791967 systemd[1]: Running in initrd.
Dec 12 17:47:28.791974 systemd[1]: No hostname configured, using default hostname.
Dec 12 17:47:28.791982 systemd[1]: Hostname set to .
Dec 12 17:47:28.791989 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 17:47:28.791997 systemd[1]: Queued start job for default target initrd.target.
Dec 12 17:47:28.792004 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:47:28.792013 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:47:28.792022 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 17:47:28.792029 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 17:47:28.792037 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 17:47:28.792045 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 17:47:28.792054 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 17:47:28.792062 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 17:47:28.792070 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:47:28.792077 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:47:28.792084 systemd[1]: Reached target paths.target - Path Units.
Dec 12 17:47:28.792092 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 17:47:28.792099 systemd[1]: Reached target swap.target - Swaps.
Dec 12 17:47:28.792106 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 17:47:28.792114 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 17:47:28.792121 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 17:47:28.792130 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 17:47:28.792137 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 17:47:28.792145 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:47:28.792152 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:47:28.792159 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:47:28.792167 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 17:47:28.792175 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 17:47:28.792182 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 17:47:28.792191 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 17:47:28.792199 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 17:47:28.792207 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 17:47:28.792214 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 17:47:28.792222 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 17:47:28.792229 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:47:28.792237 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 17:47:28.792246 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 17:47:28.792254 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 17:47:28.792280 systemd-journald[244]: Collecting audit messages is disabled.
Dec 12 17:47:28.792300 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 17:47:28.792308 systemd-journald[244]: Journal started
Dec 12 17:47:28.792325 systemd-journald[244]: Runtime Journal (/run/log/journal/64314858b7e24ca98ae62e8ab53a31f4) is 6M, max 48.5M, 42.4M free.
Dec 12 17:47:28.799911 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 17:47:28.783826 systemd-modules-load[246]: Inserted module 'overlay'
Dec 12 17:47:28.802575 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:47:28.802602 kernel: Bridge firewalling registered
Dec 12 17:47:28.803090 systemd-modules-load[246]: Inserted module 'br_netfilter'
Dec 12 17:47:28.804722 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 17:47:28.805970 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:47:28.807796 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 17:47:28.811039 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 17:47:28.812633 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 17:47:28.815862 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 17:47:28.831576 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 17:47:28.839960 systemd-tmpfiles[272]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 17:47:28.840190 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 17:47:28.842376 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:47:28.844373 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:47:28.849682 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 17:47:28.850869 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 17:47:28.853391 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 17:47:28.884525 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:47:28.898362 systemd-resolved[290]: Positive Trust Anchors:
Dec 12 17:47:28.898383 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 17:47:28.898414 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 17:47:28.903051 systemd-resolved[290]: Defaulting to hostname 'linux'.
Dec 12 17:47:28.904004 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 17:47:28.907820 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 17:47:28.961735 kernel: SCSI subsystem initialized
Dec 12 17:47:28.967729 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 17:47:28.976748 kernel: iscsi: registered transport (tcp)
Dec 12 17:47:28.989746 kernel: iscsi: registered transport (qla4xxx)
Dec 12 17:47:28.989776 kernel: QLogic iSCSI HBA Driver
Dec 12 17:47:29.005538 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 17:47:29.021944 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:47:29.023521 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 17:47:29.075808 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 17:47:29.079022 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 17:47:29.142746 kernel: raid6: neonx8 gen() 15682 MB/s
Dec 12 17:47:29.159740 kernel: raid6: neonx4 gen() 14774 MB/s
Dec 12 17:47:29.178734 kernel: raid6: neonx2 gen() 12221 MB/s
Dec 12 17:47:29.195737 kernel: raid6: neonx1 gen() 10363 MB/s
Dec 12 17:47:29.214720 kernel: raid6: int64x8 gen() 6862 MB/s
Dec 12 17:47:29.231504 kernel: raid6: int64x4 gen() 7246 MB/s
Dec 12 17:47:29.247762 kernel: raid6: int64x2 gen() 5736 MB/s
Dec 12 17:47:29.264731 kernel: raid6: int64x1 gen() 5021 MB/s
Dec 12 17:47:29.264744 kernel: raid6: using algorithm neonx8 gen() 15682 MB/s
Dec 12 17:47:29.281734 kernel: raid6: .... xor() 11986 MB/s, rmw enabled
Dec 12 17:47:29.281754 kernel: raid6: using neon recovery algorithm
Dec 12 17:47:29.287180 kernel: xor: measuring software checksum speed
Dec 12 17:47:29.287196 kernel: 8regs : 21584 MB/sec
Dec 12 17:47:29.287803 kernel: 32regs : 21664 MB/sec
Dec 12 17:47:29.288954 kernel: arm64_neon : 28022 MB/sec
Dec 12 17:47:29.288969 kernel: xor: using function: arm64_neon (28022 MB/sec)
Dec 12 17:47:29.340918 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 17:47:29.346763 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 17:47:29.349111 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 17:47:29.380036 systemd-udevd[501]: Using default interface naming scheme 'v255'.
Dec 12 17:47:29.384330 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 17:47:29.386272 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 17:47:29.410979 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation
Dec 12 17:47:29.432356 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 17:47:29.435273 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 17:47:29.481631 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 17:47:29.483985 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 17:47:29.525733 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 12 17:47:29.527842 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 12 17:47:29.533062 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 17:47:29.533096 kernel: GPT:9289727 != 19775487
Dec 12 17:47:29.533111 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 17:47:29.533120 kernel: GPT:9289727 != 19775487
Dec 12 17:47:29.533970 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 17:47:29.535755 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 17:47:29.540462 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 17:47:29.540584 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:47:29.547839 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:47:29.550524 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:47:29.563320 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 12 17:47:29.576643 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 12 17:47:29.578118 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 17:47:29.580088 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:47:29.598434 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 12 17:47:29.599662 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 12 17:47:29.608852 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 12 17:47:29.610015 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 17:47:29.611899 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 17:47:29.613807 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 17:47:29.616303 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 17:47:29.618080 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 17:47:29.636459 disk-uuid[592]: Primary Header is updated.
Dec 12 17:47:29.636459 disk-uuid[592]: Secondary Entries is updated.
Dec 12 17:47:29.636459 disk-uuid[592]: Secondary Header is updated.
Dec 12 17:47:29.639732 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 17:47:29.639777 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 17:47:30.649756 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 17:47:30.650449 disk-uuid[597]: The operation has completed successfully.
Dec 12 17:47:30.671894 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 17:47:30.672001 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 17:47:30.701076 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 12 17:47:30.726976 sh[611]: Success
Dec 12 17:47:30.739208 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 17:47:30.739259 kernel: device-mapper: uevent: version 1.0.3
Dec 12 17:47:30.740617 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 17:47:30.748759 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 12 17:47:30.775481 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 17:47:30.778253 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 12 17:47:30.790906 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 17:47:30.798568 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (624)
Dec 12 17:47:30.798602 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248
Dec 12 17:47:30.798612 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:47:30.803733 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 17:47:30.803763 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 17:47:30.804376 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 12 17:47:30.805556 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 17:47:30.806827 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 17:47:30.807505 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 17:47:30.808978 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 17:47:30.834767 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (655)
Dec 12 17:47:30.837109 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:47:30.837142 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:47:30.839747 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 17:47:30.839789 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 17:47:30.846421 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:47:30.844451 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 17:47:30.847566 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 17:47:30.909652 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 17:47:30.912686 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 17:47:30.945548 systemd-networkd[799]: lo: Link UP
Dec 12 17:47:30.945560 systemd-networkd[799]: lo: Gained carrier
Dec 12 17:47:30.946278 systemd-networkd[799]: Enumeration completed
Dec 12 17:47:30.946456 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 17:47:30.948080 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:47:30.948084 systemd-networkd[799]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 17:47:30.954048 ignition[704]: Ignition 2.22.0
Dec 12 17:47:30.948860 systemd[1]: Reached target network.target - Network.
Dec 12 17:47:30.954054 ignition[704]: Stage: fetch-offline
Dec 12 17:47:30.949236 systemd-networkd[799]: eth0: Link UP
Dec 12 17:47:30.954081 ignition[704]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:47:30.949329 systemd-networkd[799]: eth0: Gained carrier
Dec 12 17:47:30.954089 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:47:30.949338 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:47:30.954163 ignition[704]: parsed url from cmdline: ""
Dec 12 17:47:30.954168 ignition[704]: no config URL provided
Dec 12 17:47:30.954172 ignition[704]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 17:47:30.954178 ignition[704]: no config at "/usr/lib/ignition/user.ign"
Dec 12 17:47:30.954196 ignition[704]: op(1): [started] loading QEMU firmware config module
Dec 12 17:47:30.954201 ignition[704]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 12 17:47:30.959117 ignition[704]: op(1): [finished] loading QEMU firmware config module
Dec 12 17:47:30.973764 systemd-networkd[799]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 12 17:47:31.010927 ignition[704]: parsing config with SHA512: a87ae1b356cf59a8e450c8e62abaa2a60138c6b4be8a2cb4908ac936415fbb548107e0df95f220c1e7c348ebb4b3a8c93fa4a5715d6d42b3665fe5e85e0b672f
Dec 12 17:47:31.016531 unknown[704]: fetched base config from "system"
Dec 12 17:47:31.016592 unknown[704]: fetched user config from "qemu"
Dec 12 17:47:31.017129 ignition[704]: fetch-offline: fetch-offline passed
Dec 12 17:47:31.016772 systemd-resolved[290]: Detected conflict on linux IN A 10.0.0.131
Dec 12 17:47:31.017189 ignition[704]: Ignition finished successfully
Dec 12 17:47:31.016780 systemd-resolved[290]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Dec 12 17:47:31.019122 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 17:47:31.020984 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 12 17:47:31.021724 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 17:47:31.055697 ignition[806]: Ignition 2.22.0
Dec 12 17:47:31.055726 ignition[806]: Stage: kargs
Dec 12 17:47:31.055852 ignition[806]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:47:31.055861 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:47:31.056550 ignition[806]: kargs: kargs passed
Dec 12 17:47:31.056591 ignition[806]: Ignition finished successfully
Dec 12 17:47:31.062132 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 17:47:31.064090 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 17:47:31.090372 ignition[814]: Ignition 2.22.0
Dec 12 17:47:31.090390 ignition[814]: Stage: disks
Dec 12 17:47:31.090513 ignition[814]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:47:31.090522 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:47:31.091270 ignition[814]: disks: disks passed
Dec 12 17:47:31.094213 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 17:47:31.091311 ignition[814]: Ignition finished successfully
Dec 12 17:47:31.095436 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 17:47:31.097103 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 17:47:31.098905 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 17:47:31.100657 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 17:47:31.102662 systemd[1]: Reached target basic.target - Basic System.
Dec 12 17:47:31.106180 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 17:47:31.127692 systemd-fsck[824]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 12 17:47:31.131914 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 17:47:31.135891 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 17:47:31.202744 kernel: EXT4-fs (vda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none.
Dec 12 17:47:31.202819 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 17:47:31.204028 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 17:47:31.206462 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 17:47:31.208103 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 17:47:31.209073 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 12 17:47:31.209110 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 17:47:31.209131 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 17:47:31.218133 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 17:47:31.220599 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 17:47:31.223764 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (832)
Dec 12 17:47:31.225885 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:47:31.225907 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:47:31.228954 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 17:47:31.228998 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 17:47:31.230456 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 17:47:31.253669 initrd-setup-root[856]: cut: /sysroot/etc/passwd: No such file or directory
Dec 12 17:47:31.257856 initrd-setup-root[863]: cut: /sysroot/etc/group: No such file or directory
Dec 12 17:47:31.261543 initrd-setup-root[870]: cut: /sysroot/etc/shadow: No such file or directory
Dec 12 17:47:31.265434 initrd-setup-root[877]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 12 17:47:31.330152 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 12 17:47:31.332096 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 17:47:31.333597 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 17:47:31.356741 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:47:31.367077 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 17:47:31.381336 ignition[945]: INFO : Ignition 2.22.0
Dec 12 17:47:31.381336 ignition[945]: INFO : Stage: mount
Dec 12 17:47:31.382972 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:47:31.382972 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:47:31.382972 ignition[945]: INFO : mount: mount passed
Dec 12 17:47:31.382972 ignition[945]: INFO : Ignition finished successfully
Dec 12 17:47:31.384057 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 17:47:31.387817 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 17:47:31.797128 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 17:47:31.798684 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 17:47:31.828297 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (958)
Dec 12 17:47:31.828333 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:47:31.828351 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:47:31.831785 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 17:47:31.831814 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 17:47:31.833380 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 17:47:31.861716 ignition[976]: INFO : Ignition 2.22.0
Dec 12 17:47:31.861716 ignition[976]: INFO : Stage: files
Dec 12 17:47:31.861716 ignition[976]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:47:31.861716 ignition[976]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:47:31.865946 ignition[976]: DEBUG : files: compiled without relabeling support, skipping
Dec 12 17:47:31.865946 ignition[976]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 12 17:47:31.865946 ignition[976]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 12 17:47:31.870510 ignition[976]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 12 17:47:31.870510 ignition[976]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 12 17:47:31.870510 ignition[976]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 12 17:47:31.866741 unknown[976]: wrote ssh authorized keys file for user: core
Dec 12 17:47:31.876097 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 12 17:47:31.876097 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Dec 12 17:47:31.981829 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 12 17:47:32.002833 systemd-networkd[799]: eth0: Gained IPv6LL
Dec 12 17:47:32.154973 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 12 17:47:32.154973 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 12 17:47:32.158679 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 12 17:47:32.158679 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 17:47:32.158679 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 17:47:32.158679 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 17:47:32.158679 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 17:47:32.158679 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 17:47:32.158679 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 17:47:32.171303 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 17:47:32.171303 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 17:47:32.171303 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 12 17:47:32.171303 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 12 17:47:32.179573 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 12 17:47:32.179573 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Dec 12 17:47:32.504208 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 12 17:47:32.756886 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Dec 12 17:47:32.756886 ignition[976]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 12 17:47:32.760462 ignition[976]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 17:47:32.763352 ignition[976]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 17:47:32.763352 ignition[976]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 12 17:47:32.763352 ignition[976]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 12 17:47:32.768202 ignition[976]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 12 17:47:32.768202 ignition[976]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 12 17:47:32.768202 ignition[976]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 12 17:47:32.768202 ignition[976]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 12 17:47:32.779093 ignition[976]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 12 17:47:32.782954 ignition[976]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 12 17:47:32.784537 ignition[976]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 12 17:47:32.784537 ignition[976]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 12 17:47:32.784537 ignition[976]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 12 17:47:32.784537 ignition[976]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 17:47:32.784537 ignition[976]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 17:47:32.784537 ignition[976]: INFO : files: files passed
Dec 12 17:47:32.784537 ignition[976]: INFO : Ignition finished successfully
Dec 12 17:47:32.786408 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 12 17:47:32.788727 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 12 17:47:32.790370 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 12 17:47:32.805334 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 12 17:47:32.805442 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 12 17:47:32.809022 initrd-setup-root-after-ignition[1005]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 12 17:47:32.810356 initrd-setup-root-after-ignition[1008]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:47:32.810356 initrd-setup-root-after-ignition[1008]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:47:32.813625 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:47:32.814081 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 17:47:32.817078 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 12 17:47:32.819785 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 12 17:47:32.859885 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 12 17:47:32.860890 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 12 17:47:32.862286 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 12 17:47:32.863338 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 12 17:47:32.864437 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 12 17:47:32.865318 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 12 17:47:32.881773 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 17:47:32.884204 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 17:47:32.905396 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:47:32.906632 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:47:32.908602 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 17:47:32.910322 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 17:47:32.910457 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:47:32.912741 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 17:47:32.914797 systemd[1]: Stopped target basic.target - Basic System. Dec 12 17:47:32.916407 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 17:47:32.918047 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:47:32.919777 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 17:47:32.921658 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:47:32.923485 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 17:47:32.925215 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:47:32.926986 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 17:47:32.928733 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 17:47:32.930371 systemd[1]: Stopped target swap.target - Swaps. Dec 12 17:47:32.931767 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 17:47:32.931909 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:47:32.934171 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:47:32.935920 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:47:32.937787 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 17:47:32.938824 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:47:32.940690 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 17:47:32.940823 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 17:47:32.943408 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 17:47:32.943518 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:47:32.945450 systemd[1]: Stopped target paths.target - Path Units. Dec 12 17:47:32.946938 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 17:47:32.947052 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:47:32.948936 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 17:47:32.950354 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 17:47:32.952061 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 17:47:32.952148 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:47:32.954078 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 17:47:32.954155 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:47:32.955671 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Dec 12 17:47:32.955811 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:47:32.957484 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 17:47:32.957590 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 17:47:32.959809 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 17:47:32.961300 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 17:47:32.961431 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:47:32.970293 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 17:47:32.971131 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 17:47:32.971250 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:47:32.973068 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 17:47:32.973166 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:47:32.978648 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 17:47:32.978776 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 17:47:32.985870 ignition[1032]: INFO : Ignition 2.22.0 Dec 12 17:47:32.985870 ignition[1032]: INFO : Stage: umount Dec 12 17:47:32.987462 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:47:32.987462 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:47:32.987462 ignition[1032]: INFO : umount: umount passed Dec 12 17:47:32.987462 ignition[1032]: INFO : Ignition finished successfully Dec 12 17:47:32.989178 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 17:47:32.989279 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 17:47:32.990655 systemd[1]: Stopped target network.target - Network. Dec 12 17:47:32.992173 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 17:47:32.992235 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 17:47:32.993916 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 17:47:32.993964 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 17:47:32.995836 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 17:47:32.995885 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 17:47:32.997695 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 17:47:32.997748 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 17:47:32.999400 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 17:47:33.004943 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 17:47:33.007385 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 17:47:33.012635 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 17:47:33.012790 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 17:47:33.018303 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 12 17:47:33.018529 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 17:47:33.018618 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 17:47:33.023044 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
Dec 12 17:47:33.023626 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 17:47:33.025126 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 17:47:33.025165 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:47:33.027733 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 17:47:33.029448 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 17:47:33.029504 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:47:33.031509 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:47:33.031556 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:47:33.034268 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 17:47:33.034310 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 17:47:33.036066 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 17:47:33.036109 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:47:33.038793 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:47:33.042496 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 12 17:47:33.042552 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 12 17:47:33.056823 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 17:47:33.067894 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 17:47:33.069098 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 17:47:33.069237 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:47:33.071421 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 17:47:33.071486 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 17:47:33.072583 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 17:47:33.072613 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:47:33.074568 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 17:47:33.074617 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:47:33.077302 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 17:47:33.077356 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 17:47:33.079759 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 17:47:33.079812 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:47:33.082566 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 17:47:33.082612 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 17:47:33.085112 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 17:47:33.086860 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 17:47:33.086922 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:47:33.089767 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 17:47:33.089808 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 12 17:47:33.092763 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:47:33.092810 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:47:33.096898 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 12 17:47:33.096949 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 12 17:47:33.096980 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 17:47:33.097225 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 17:47:33.100841 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 17:47:33.105902 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 17:47:33.106033 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 17:47:33.108079 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 17:47:33.110380 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 17:47:33.124132 systemd[1]: Switching root. Dec 12 17:47:33.151815 systemd-journald[244]: Journal stopped Dec 12 17:47:33.892701 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Dec 12 17:47:33.892791 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 17:47:33.892804 kernel: SELinux: policy capability open_perms=1 Dec 12 17:47:33.892814 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 17:47:33.892823 kernel: SELinux: policy capability always_check_network=0 Dec 12 17:47:33.892834 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 17:47:33.892847 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 17:47:33.892862 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 17:47:33.892874 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 17:47:33.892883 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 17:47:33.892892 kernel: audit: type=1403 audit(1765561653.322:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 12 17:47:33.892906 systemd[1]: Successfully loaded SELinux policy in 58ms. Dec 12 17:47:33.892926 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.154ms. Dec 12 17:47:33.892938 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:47:33.892948 systemd[1]: Detected virtualization kvm. Dec 12 17:47:33.892958 systemd[1]: Detected architecture arm64. Dec 12 17:47:33.892969 systemd[1]: Detected first boot. Dec 12 17:47:33.892980 systemd[1]: Initializing machine ID from VM UUID. Dec 12 17:47:33.892990 kernel: NET: Registered PF_VSOCK protocol family Dec 12 17:47:33.893000 zram_generator::config[1078]: No configuration found. Dec 12 17:47:33.893011 systemd[1]: Populated /etc with preset unit settings. Dec 12 17:47:33.893022 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 12 17:47:33.893032 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 17:47:33.893041 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Dec 12 17:47:33.893053 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 17:47:33.893063 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 17:47:33.893073 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 17:47:33.893082 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 17:47:33.893093 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 17:47:33.893102 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 17:47:33.893114 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 17:47:33.893124 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 17:47:33.893134 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 17:47:33.893145 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:47:33.893156 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:47:33.893166 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 17:47:33.893177 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 17:47:33.893187 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 17:47:33.893197 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:47:33.893207 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 12 17:47:33.893218 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:47:33.893229 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:47:33.893239 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 17:47:33.893249 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 17:47:33.893259 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 17:47:33.893272 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 17:47:33.893283 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:47:33.893293 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:47:33.893303 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:47:33.893313 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:47:33.893325 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 17:47:33.893335 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 17:47:33.893345 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 17:47:33.893355 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:47:33.893366 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:47:33.893375 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:47:33.893385 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 17:47:33.893395 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Dec 12 17:47:33.893405 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 17:47:33.893416 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 17:47:33.893427 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 17:47:33.893436 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 17:47:33.893447 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 17:47:33.893457 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 17:47:33.893468 systemd[1]: Reached target machines.target - Containers. Dec 12 17:47:33.893478 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 17:47:33.893489 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:47:33.893500 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:47:33.893510 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 17:47:33.893521 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:47:33.893531 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:47:33.893541 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:47:33.893551 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:47:33.893561 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:47:33.893571 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 17:47:33.893582 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 17:47:33.893594 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 17:47:33.893605 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 17:47:33.893615 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 17:47:33.893625 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:47:33.893636 kernel: loop: module loaded Dec 12 17:47:33.893646 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:47:33.893655 kernel: fuse: init (API version 7.41) Dec 12 17:47:33.893674 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:47:33.893687 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:47:33.893697 kernel: ACPI: bus type drm_connector registered Dec 12 17:47:33.893780 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 17:47:33.893795 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 17:47:33.893806 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:47:33.893818 systemd[1]: verity-setup.service: Deactivated successfully. Dec 12 17:47:33.893829 systemd[1]: Stopped verity-setup.service. 
Dec 12 17:47:33.893838 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 17:47:33.893849 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 17:47:33.893858 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 17:47:33.893868 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 17:47:33.893903 systemd-journald[1153]: Collecting audit messages is disabled. Dec 12 17:47:33.893927 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 17:47:33.893939 systemd-journald[1153]: Journal started Dec 12 17:47:33.893962 systemd-journald[1153]: Runtime Journal (/run/log/journal/64314858b7e24ca98ae62e8ab53a31f4) is 6M, max 48.5M, 42.4M free. Dec 12 17:47:33.678151 systemd[1]: Queued start job for default target multi-user.target. Dec 12 17:47:33.693605 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 17:47:33.694007 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 17:47:33.896525 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:47:33.897228 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 17:47:33.899731 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 17:47:33.901092 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:47:33.902591 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:47:33.902815 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 17:47:33.904146 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:47:33.904363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:47:33.905784 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:47:33.905948 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:47:33.907150 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:47:33.907312 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:47:33.908766 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:47:33.908939 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:47:33.910338 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:47:33.910495 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:47:33.911936 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:47:33.913415 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:47:33.914925 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 17:47:33.917210 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 17:47:33.928912 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:47:33.931093 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 17:47:33.933074 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 17:47:33.934137 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 17:47:33.934188 systemd[1]: Reached target local-fs.target - Local File Systems. 
Dec 12 17:47:33.935952 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 17:47:33.947876 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 17:47:33.948951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:47:33.950072 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 17:47:33.952130 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 17:47:33.953283 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:47:33.955854 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 17:47:33.957147 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:47:33.958347 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:47:33.960938 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 17:47:33.965755 systemd-journald[1153]: Time spent on flushing to /var/log/journal/64314858b7e24ca98ae62e8ab53a31f4 is 20.725ms for 884 entries. Dec 12 17:47:33.965755 systemd-journald[1153]: System Journal (/var/log/journal/64314858b7e24ca98ae62e8ab53a31f4) is 8M, max 195.6M, 187.6M free. Dec 12 17:47:34.013879 systemd-journald[1153]: Received client request to flush runtime journal. Dec 12 17:47:34.013926 kernel: loop0: detected capacity change from 0 to 211168 Dec 12 17:47:34.013945 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 17:47:33.963030 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 17:47:33.967072 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:47:33.968814 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 17:47:33.971773 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 17:47:33.987875 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 17:47:33.990470 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 17:47:33.994870 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 17:47:33.997119 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:47:34.018186 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 17:47:34.019910 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 17:47:34.024874 kernel: loop1: detected capacity change from 0 to 119840 Dec 12 17:47:34.027521 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:47:34.055269 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Dec 12 17:47:34.055286 systemd-tmpfiles[1210]: ACLs are not supported, ignoring. Dec 12 17:47:34.059440 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:47:34.062016 kernel: loop2: detected capacity change from 0 to 100632 Dec 12 17:47:34.067839 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. 
Dec 12 17:47:34.101753 kernel: loop3: detected capacity change from 0 to 211168 Dec 12 17:47:34.107731 kernel: loop4: detected capacity change from 0 to 119840 Dec 12 17:47:34.113736 kernel: loop5: detected capacity change from 0 to 100632 Dec 12 17:47:34.118605 (sd-merge)[1216]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 12 17:47:34.119032 (sd-merge)[1216]: Merged extensions into '/usr'. Dec 12 17:47:34.122676 systemd[1]: Reload requested from client PID 1194 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 17:47:34.122820 systemd[1]: Reloading... Dec 12 17:47:34.195768 zram_generator::config[1241]: No configuration found. Dec 12 17:47:34.254470 ldconfig[1189]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 17:47:34.333262 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 17:47:34.333678 systemd[1]: Reloading finished in 210 ms. Dec 12 17:47:34.364756 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 17:47:34.366193 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 17:47:34.369066 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 17:47:34.384879 systemd[1]: Starting ensure-sysext.service... Dec 12 17:47:34.386562 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:47:34.388835 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:47:34.395327 systemd[1]: Reload requested from client PID 1280 ('systemctl') (unit ensure-sysext.service)... Dec 12 17:47:34.395344 systemd[1]: Reloading... Dec 12 17:47:34.400479 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 17:47:34.400833 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 17:47:34.401137 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 17:47:34.401416 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 12 17:47:34.402136 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 12 17:47:34.402447 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. Dec 12 17:47:34.402570 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. Dec 12 17:47:34.405319 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:47:34.405424 systemd-tmpfiles[1281]: Skipping /boot Dec 12 17:47:34.411122 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:47:34.411229 systemd-tmpfiles[1281]: Skipping /boot Dec 12 17:47:34.413455 systemd-udevd[1282]: Using default interface naming scheme 'v255'. Dec 12 17:47:34.447735 zram_generator::config[1309]: No configuration found. Dec 12 17:47:34.619420 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 12 17:47:34.619470 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:47:34.620877 systemd[1]: Reloading finished in 225 ms. Dec 12 17:47:34.633109 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Dec 12 17:47:34.638908 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:47:34.666105 systemd[1]: Finished ensure-sysext.service. Dec 12 17:47:34.671576 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:47:34.673818 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 17:47:34.674965 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:47:34.686133 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:47:34.689021 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:47:34.691257 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:47:34.693946 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:47:34.695100 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:47:34.698013 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 17:47:34.699175 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:47:34.700576 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 17:47:34.705014 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:47:34.707971 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:47:34.713988 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 12 17:47:34.716131 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 17:47:34.719997 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:47:34.725609 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:47:34.725859 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:47:34.727507 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:47:34.727680 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:47:34.729230 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:47:34.729380 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:47:34.732410 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:47:34.733124 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:47:34.734646 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 17:47:34.736172 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 17:47:34.737414 augenrules[1427]: No rules Dec 12 17:47:34.738369 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:47:34.738541 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:47:34.746688 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Dec 12 17:47:34.750383 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:47:34.750552 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:47:34.752072 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 17:47:34.754556 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 17:47:34.755601 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 17:47:34.767082 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 17:47:34.768921 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:47:34.773548 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 17:47:34.795515 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 17:47:34.856267 systemd-networkd[1409]: lo: Link UP Dec 12 17:47:34.856276 systemd-networkd[1409]: lo: Gained carrier Dec 12 17:47:34.857089 systemd-networkd[1409]: Enumeration completed Dec 12 17:47:34.857193 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:47:34.857489 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:47:34.857499 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:47:34.857958 systemd-networkd[1409]: eth0: Link UP Dec 12 17:47:34.858063 systemd-networkd[1409]: eth0: Gained carrier Dec 12 17:47:34.858083 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:47:34.858155 systemd-resolved[1411]: Positive Trust Anchors: Dec 12 17:47:34.858168 systemd-resolved[1411]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:47:34.858199 systemd-resolved[1411]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:47:34.859508 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 17:47:34.861725 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 17:47:34.862795 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 17:47:34.864045 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 17:47:34.864179 systemd-resolved[1411]: Defaulting to hostname 'linux'. Dec 12 17:47:34.865464 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:47:34.866564 systemd[1]: Reached target network.target - Network. 
Dec 12 17:47:34.867444 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:47:34.868739 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:47:34.869839 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 17:47:34.870962 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 17:47:34.872357 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 17:47:34.873548 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 17:47:34.874846 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 17:47:34.875781 systemd-networkd[1409]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 17:47:34.876020 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 17:47:34.876052 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:47:34.876483 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Dec 12 17:47:34.876855 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:47:34.878007 systemd-timesyncd[1414]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 12 17:47:34.878129 systemd-timesyncd[1414]: Initial clock synchronization to Fri 2025-12-12 17:47:34.633845 UTC. Dec 12 17:47:34.878523 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 17:47:34.880899 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 17:47:34.883527 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 17:47:34.884909 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 17:47:34.886036 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 17:47:34.888907 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 17:47:34.890084 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 17:47:34.892063 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 17:47:34.893354 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 17:47:34.894880 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:47:34.895806 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:47:34.896676 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:47:34.896725 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:47:34.897685 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 17:47:34.899515 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 17:47:34.901483 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 17:47:34.903445 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 17:47:34.905332 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Dec 12 17:47:34.906362 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 17:47:34.909195 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 17:47:34.910263 jq[1465]: false Dec 12 17:47:34.911070 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 17:47:34.914913 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 17:47:34.917150 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 17:47:34.919868 extend-filesystems[1466]: Found /dev/vda6 Dec 12 17:47:34.921548 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 17:47:34.923192 extend-filesystems[1466]: Found /dev/vda9 Dec 12 17:47:34.924579 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 17:47:34.925040 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 17:47:34.925854 extend-filesystems[1466]: Checking size of /dev/vda9 Dec 12 17:47:34.925974 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 17:47:34.930925 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 17:47:34.933934 extend-filesystems[1466]: Resized partition /dev/vda9 Dec 12 17:47:34.934833 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 17:47:34.936200 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 17:47:34.936357 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 17:47:34.936584 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 17:47:34.936807 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 17:47:34.938868 extend-filesystems[1492]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 17:47:34.943362 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 17:47:34.943551 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 17:47:34.946731 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 12 17:47:34.947230 jq[1487]: true Dec 12 17:47:34.965434 tar[1493]: linux-arm64/LICENSE Dec 12 17:47:34.966195 tar[1493]: linux-arm64/helm Dec 12 17:47:34.966028 (ntainerd)[1498]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 12 17:47:34.969305 jq[1495]: true Dec 12 17:47:34.994733 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 12 17:47:34.998752 update_engine[1482]: I20251212 17:47:34.997575 1482 main.cc:92] Flatcar Update Engine starting Dec 12 17:47:35.001310 dbus-daemon[1463]: [system] SELinux support is enabled Dec 12 17:47:35.002204 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 17:47:35.010142 extend-filesystems[1492]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 17:47:35.010142 extend-filesystems[1492]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 12 17:47:35.010142 extend-filesystems[1492]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Dec 12 17:47:35.006145 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 17:47:35.023123 bash[1521]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:47:35.023204 extend-filesystems[1466]: Resized filesystem in /dev/vda9 Dec 12 17:47:35.030351 update_engine[1482]: I20251212 17:47:35.013836 1482 update_check_scheduler.cc:74] Next update check in 3m44s Dec 12 17:47:35.006171 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 17:47:35.007620 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 17:47:35.007637 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 17:47:35.010068 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 17:47:35.010260 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 17:47:35.014824 systemd-logind[1476]: Watching system buttons on /dev/input/event0 (Power Button) Dec 12 17:47:35.017954 systemd-logind[1476]: New seat seat0. Dec 12 17:47:35.019863 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 17:47:35.022156 systemd[1]: Started update-engine.service - Update Engine. Dec 12 17:47:35.023433 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 17:47:35.027009 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 12 17:47:35.029362 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Dec 12 17:47:35.078134 locksmithd[1526]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 17:47:35.147082 containerd[1498]: time="2025-12-12T17:47:35Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 17:47:35.148872 containerd[1498]: time="2025-12-12T17:47:35.147734984Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 17:47:35.162184 containerd[1498]: time="2025-12-12T17:47:35.162142341Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.129µs" Dec 12 17:47:35.162237 containerd[1498]: time="2025-12-12T17:47:35.162225833Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 17:47:35.162255 containerd[1498]: time="2025-12-12T17:47:35.162248170Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 17:47:35.162566 containerd[1498]: time="2025-12-12T17:47:35.162533705Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 17:47:35.162589 containerd[1498]: time="2025-12-12T17:47:35.162572019Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 17:47:35.162622 containerd[1498]: time="2025-12-12T17:47:35.162599358Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:47:35.162766 containerd[1498]: time="2025-12-12T17:47:35.162742416Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:47:35.162790 containerd[1498]: time="2025-12-12T17:47:35.162767429Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:47:35.163240 containerd[1498]: time="2025-12-12T17:47:35.163159956Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:47:35.163291 containerd[1498]: time="2025-12-12T17:47:35.163238756Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:47:35.163315 containerd[1498]: time="2025-12-12T17:47:35.163294134Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:47:35.163315 containerd[1498]: time="2025-12-12T17:47:35.163304178Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 17:47:35.163512 containerd[1498]: time="2025-12-12T17:47:35.163437735Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 17:47:35.163946 containerd[1498]: time="2025-12-12T17:47:35.163917593Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:47:35.164049 containerd[1498]: time="2025-12-12T17:47:35.163975452Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:47:35.164073 containerd[1498]: time="2025-12-12T17:47:35.164047040Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 17:47:35.164147 containerd[1498]: time="2025-12-12T17:47:35.164085005Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 17:47:35.165914 containerd[1498]: time="2025-12-12T17:47:35.165881237Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 17:47:35.166009 containerd[1498]: time="2025-12-12T17:47:35.165989510Z" level=info msg="metadata content store policy set" policy=shared Dec 12 17:47:35.169225 containerd[1498]: time="2025-12-12T17:47:35.169196311Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 17:47:35.169264 containerd[1498]: time="2025-12-12T17:47:35.169249284Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 17:47:35.169287 containerd[1498]: time="2025-12-12T17:47:35.169267083Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 17:47:35.169287 containerd[1498]: time="2025-12-12T17:47:35.169279570Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 17:47:35.169332 containerd[1498]: time="2025-12-12T17:47:35.169290661Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 17:47:35.169332 containerd[1498]: time="2025-12-12T17:47:35.169300162Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 17:47:35.169332 containerd[1498]: time="2025-12-12T17:47:35.169311564Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 17:47:35.169332 containerd[1498]: time="2025-12-12T17:47:35.169322461Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 17:47:35.169398 containerd[1498]: time="2025-12-12T17:47:35.169332621Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 17:47:35.169398 containerd[1498]: time="2025-12-12T17:47:35.169349102Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 17:47:35.169398 containerd[1498]: time="2025-12-12T17:47:35.169359185Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 17:47:35.169398 containerd[1498]: time="2025-12-12T17:47:35.169371827Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 17:47:35.169505 containerd[1498]: time="2025-12-12T17:47:35.169484870Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 17:47:35.169530 containerd[1498]: time="2025-12-12T17:47:35.169516746Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 17:47:35.169547 containerd[1498]: time="2025-12-12T17:47:35.169532258Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 
17:47:35.169547 containerd[1498]: time="2025-12-12T17:47:35.169542923Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 17:47:35.169582 containerd[1498]: time="2025-12-12T17:47:35.169553199Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 17:47:35.169582 containerd[1498]: time="2025-12-12T17:47:35.169563321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 17:47:35.169582 containerd[1498]: time="2025-12-12T17:47:35.169574567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 17:47:35.169631 containerd[1498]: time="2025-12-12T17:47:35.169587131Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 17:47:35.169631 containerd[1498]: time="2025-12-12T17:47:35.169598261Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 17:47:35.169631 containerd[1498]: time="2025-12-12T17:47:35.169607917Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 17:47:35.169631 containerd[1498]: time="2025-12-12T17:47:35.169618155Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 17:47:35.169816 containerd[1498]: time="2025-12-12T17:47:35.169799450Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 17:47:35.169842 containerd[1498]: time="2025-12-12T17:47:35.169819266Z" level=info msg="Start snapshots syncer" Dec 12 17:47:35.169842 containerd[1498]: time="2025-12-12T17:47:35.169836290Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 17:47:35.170104 containerd[1498]: time="2025-12-12T17:47:35.170069550Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 17:47:35.170188 containerd[1498]: time="2025-12-12T17:47:35.170120312Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 17:47:35.170188 containerd[1498]: time="2025-12-12T17:47:35.170161031Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 17:47:35.170274 containerd[1498]: time="2025-12-12T17:47:35.170253985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 17:47:35.170299 containerd[1498]: time="2025-12-12T17:47:35.170285591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 17:47:35.170299 containerd[1498]: time="2025-12-12T17:47:35.170296604Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 17:47:35.170348 containerd[1498]: time="2025-12-12T17:47:35.170306299Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 17:47:35.170348 containerd[1498]: time="2025-12-12T17:47:35.170324991Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 17:47:35.170348 containerd[1498]: time="2025-12-12T17:47:35.170336857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 17:47:35.170348 containerd[1498]: time="2025-12-12T17:47:35.170347444Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 17:47:35.170409 containerd[1498]: time="2025-12-12T17:47:35.170368967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 17:47:35.170409 containerd[1498]: 
time="2025-12-12T17:47:35.170379515Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 17:47:35.170409 containerd[1498]: time="2025-12-12T17:47:35.170389985Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:47:35.170457 containerd[1498]: time="2025-12-12T17:47:35.170419613Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:47:35.170457 containerd[1498]: time="2025-12-12T17:47:35.170432449Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:47:35.170457 containerd[1498]: time="2025-12-12T17:47:35.170440050Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:47:35.170457 containerd[1498]: time="2025-12-12T17:47:35.170448892Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:47:35.170457 containerd[1498]: time="2025-12-12T17:47:35.170456997Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:47:35.170540 containerd[1498]: time="2025-12-12T17:47:35.170466963Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:47:35.170540 containerd[1498]: time="2025-12-12T17:47:35.170477239Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:47:35.170571 containerd[1498]: time="2025-12-12T17:47:35.170550378Z" level=info msg="runtime interface created" Dec 12 17:47:35.170571 containerd[1498]: time="2025-12-12T17:47:35.170555768Z" level=info msg="created NRI interface" Dec 12 17:47:35.170571 containerd[1498]: time="2025-12-12T17:47:35.170563796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 17:47:35.170615 containerd[1498]: time="2025-12-12T17:47:35.170574033Z" level=info msg="Connect containerd service" Dec 12 17:47:35.170615 containerd[1498]: time="2025-12-12T17:47:35.170593384Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:47:35.173713 containerd[1498]: time="2025-12-12T17:47:35.171240500Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:47:35.246322 containerd[1498]: time="2025-12-12T17:47:35.246282314Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 17:47:35.246405 containerd[1498]: time="2025-12-12T17:47:35.246345253Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 12 17:47:35.246405 containerd[1498]: time="2025-12-12T17:47:35.246373058Z" level=info msg="Start subscribing containerd event" Dec 12 17:47:35.246437 containerd[1498]: time="2025-12-12T17:47:35.246412575Z" level=info msg="Start recovering state" Dec 12 17:47:35.246501 containerd[1498]: time="2025-12-12T17:47:35.246483929Z" level=info msg="Start event monitor" Dec 12 17:47:35.246533 containerd[1498]: time="2025-12-12T17:47:35.246502427Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:47:35.246533 containerd[1498]: time="2025-12-12T17:47:35.246510765Z" level=info msg="Start streaming server" Dec 12 17:47:35.246533 containerd[1498]: time="2025-12-12T17:47:35.246518094Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:47:35.246533 containerd[1498]: time="2025-12-12T17:47:35.246526122Z" level=info msg="runtime interface starting up..." Dec 12 17:47:35.246533 containerd[1498]: time="2025-12-12T17:47:35.246532908Z" level=info msg="starting plugins..." Dec 12 17:47:35.246606 containerd[1498]: time="2025-12-12T17:47:35.246546830Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:47:35.247943 containerd[1498]: time="2025-12-12T17:47:35.246645369Z" level=info msg="containerd successfully booted in 0.099956s" Dec 12 17:47:35.246745 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 17:47:35.285596 tar[1493]: linux-arm64/README.md Dec 12 17:47:35.302820 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 17:47:35.386251 sshd_keygen[1488]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:47:35.404691 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:47:35.407117 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:47:35.427072 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 17:47:35.427304 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:47:35.429805 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 17:47:35.448582 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:47:35.453818 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 17:47:35.456058 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 12 17:47:35.457292 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 17:47:35.906835 systemd-networkd[1409]: eth0: Gained IPv6LL Dec 12 17:47:35.912276 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 17:47:35.913952 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:47:35.916265 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 12 17:47:35.918620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:47:35.930513 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:47:35.960933 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 17:47:35.963264 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 12 17:47:35.963489 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 12 17:47:35.965427 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 17:47:36.497176 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
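The "failed to load cni during init" error earlier in containerd's startup is expected on a first boot: the CRI plugin looks for a network configuration under /etc/cni/net.d (the confDir in the config dump above), and nothing exists there until a pod network add-on installs one, which is why the message asks to check the CRI plugin status before setting up pod networking. A minimal stand-alone sketch of that check, standard library only (the directory comes from the log, the file extensions from common CNI conventions; the helper is illustrative, not containerd's own code):

    // cnicheck.go - illustrative: reports whether any CNI network config
    // (*.conf, *.conflist, *.json) is present under /etc/cni/net.d.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/cni/net.d" // confDir from the CRI plugin config above
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Println("cni config load failed:", err)
            return
        }
        var found []string
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                found = append(found, e.Name())
            }
        }
        if len(found) == 0 {
            // The condition behind "no network config found in /etc/cni/net.d".
            fmt.Println("no network config found in", confDir)
            return
        }
        fmt.Println("found CNI config:", found)
    }

The "Start cni network conf syncer for default" entry above is the component that later picks such a file up once it appears, without a containerd restart.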
Dec 12 17:47:36.498700 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 17:47:36.500528 systemd[1]: Startup finished in 2.092s (kernel) + 4.704s (initrd) + 3.236s (userspace) = 10.032s. Dec 12 17:47:36.501073 (kubelet)[1596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:47:36.842192 kubelet[1596]: E1212 17:47:36.842068 1596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:47:36.844473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:47:36.844604 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:47:36.844939 systemd[1]: kubelet.service: Consumed 746ms CPU time, 257.9M memory peak. Dec 12 17:47:41.744055 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:47:41.745101 systemd[1]: Started sshd@0-10.0.0.131:22-10.0.0.1:43582.service - OpenSSH per-connection server daemon (10.0.0.1:43582). Dec 12 17:47:41.822554 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 43582 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:47:41.824576 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:47:41.835328 systemd-logind[1476]: New session 1 of user core. Dec 12 17:47:41.836302 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 17:47:41.837304 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 17:47:41.863506 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:47:41.865688 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 17:47:41.892896 (systemd)[1614]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:47:41.895247 systemd-logind[1476]: New session c1 of user core. Dec 12 17:47:41.993664 systemd[1614]: Queued start job for default target default.target. Dec 12 17:47:42.016049 systemd[1614]: Created slice app.slice - User Application Slice. Dec 12 17:47:42.016079 systemd[1614]: Reached target paths.target - Paths. Dec 12 17:47:42.016114 systemd[1614]: Reached target timers.target - Timers. Dec 12 17:47:42.017283 systemd[1614]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:47:42.026918 systemd[1614]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 17:47:42.026977 systemd[1614]: Reached target sockets.target - Sockets. Dec 12 17:47:42.027013 systemd[1614]: Reached target basic.target - Basic System. Dec 12 17:47:42.027040 systemd[1614]: Reached target default.target - Main User Target. Dec 12 17:47:42.027063 systemd[1614]: Startup finished in 125ms. Dec 12 17:47:42.027110 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 17:47:42.028276 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 17:47:42.094727 systemd[1]: Started sshd@1-10.0.0.131:22-10.0.0.1:43594.service - OpenSSH per-connection server daemon (10.0.0.1:43594). 
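The kubelet exit at 17:47:36 above is likewise expected at this point in the boot: kubelet.service starts with multi-user.target, but /var/lib/kubelet/config.yaml is only written later during node bootstrap (e.g. by kubeadm), so the first attempt fails and systemd schedules restarts. A minimal sketch of the failing step, assuming nothing beyond the path reported in the error (standard library only; not the kubelet's actual config loader):

    // kubeletcfg.go - illustrative: reproduces the "failed to load Kubelet
    // config file" error seen above when /var/lib/kubelet/config.yaml is absent.
    package main

    import (
        "fmt"
        "os"
    )

    func loadKubeletConfig(path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("failed to load Kubelet config file %q, error: %w", path, err)
        }
        return data, nil
    }

    func main() {
        if _, err := loadKubeletConfig("/var/lib/kubelet/config.yaml"); err != nil {
            fmt.Println("command failed:", err) // "... no such file or directory"
            os.Exit(1)                          // systemd records status=1/FAILURE, as above
        }
    }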
Dec 12 17:47:42.138752 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 43594 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:47:42.139933 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:47:42.144705 systemd-logind[1476]: New session 2 of user core. Dec 12 17:47:42.154905 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 17:47:42.204134 sshd[1628]: Connection closed by 10.0.0.1 port 43594 Dec 12 17:47:42.204541 sshd-session[1625]: pam_unix(sshd:session): session closed for user core Dec 12 17:47:42.218547 systemd[1]: sshd@1-10.0.0.131:22-10.0.0.1:43594.service: Deactivated successfully. Dec 12 17:47:42.220984 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 17:47:42.221609 systemd-logind[1476]: Session 2 logged out. Waiting for processes to exit. Dec 12 17:47:42.223594 systemd[1]: Started sshd@2-10.0.0.131:22-10.0.0.1:43598.service - OpenSSH per-connection server daemon (10.0.0.1:43598). Dec 12 17:47:42.224214 systemd-logind[1476]: Removed session 2. Dec 12 17:47:42.265874 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 43598 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:47:42.267114 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:47:42.271519 systemd-logind[1476]: New session 3 of user core. Dec 12 17:47:42.283871 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 17:47:42.330479 sshd[1637]: Connection closed by 10.0.0.1 port 43598 Dec 12 17:47:42.330674 sshd-session[1634]: pam_unix(sshd:session): session closed for user core Dec 12 17:47:42.343759 systemd[1]: sshd@2-10.0.0.131:22-10.0.0.1:43598.service: Deactivated successfully. Dec 12 17:47:42.345060 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 17:47:42.346883 systemd-logind[1476]: Session 3 logged out. Waiting for processes to exit. Dec 12 17:47:42.348843 systemd[1]: Started sshd@3-10.0.0.131:22-10.0.0.1:43604.service - OpenSSH per-connection server daemon (10.0.0.1:43604). Dec 12 17:47:42.349807 systemd-logind[1476]: Removed session 3. Dec 12 17:47:42.399864 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 43604 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:47:42.401281 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:47:42.406000 systemd-logind[1476]: New session 4 of user core. Dec 12 17:47:42.415880 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 17:47:42.467506 sshd[1646]: Connection closed by 10.0.0.1 port 43604 Dec 12 17:47:42.467846 sshd-session[1643]: pam_unix(sshd:session): session closed for user core Dec 12 17:47:42.481545 systemd[1]: sshd@3-10.0.0.131:22-10.0.0.1:43604.service: Deactivated successfully. Dec 12 17:47:42.484069 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 17:47:42.484822 systemd-logind[1476]: Session 4 logged out. Waiting for processes to exit. Dec 12 17:47:42.486927 systemd[1]: Started sshd@4-10.0.0.131:22-10.0.0.1:43618.service - OpenSSH per-connection server daemon (10.0.0.1:43618). Dec 12 17:47:42.487546 systemd-logind[1476]: Removed session 4. 
Dec 12 17:47:42.542312 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 43618 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:47:42.543968 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:47:42.547924 systemd-logind[1476]: New session 5 of user core. Dec 12 17:47:42.558914 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 17:47:42.618283 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 17:47:42.618577 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:47:42.636650 sudo[1657]: pam_unix(sudo:session): session closed for user root Dec 12 17:47:42.638213 sshd[1656]: Connection closed by 10.0.0.1 port 43618 Dec 12 17:47:42.638815 sshd-session[1652]: pam_unix(sshd:session): session closed for user core Dec 12 17:47:42.647893 systemd[1]: sshd@4-10.0.0.131:22-10.0.0.1:43618.service: Deactivated successfully. Dec 12 17:47:42.649447 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 17:47:42.651327 systemd-logind[1476]: Session 5 logged out. Waiting for processes to exit. Dec 12 17:47:42.653962 systemd[1]: Started sshd@5-10.0.0.131:22-10.0.0.1:43628.service - OpenSSH per-connection server daemon (10.0.0.1:43628). Dec 12 17:47:42.654381 systemd-logind[1476]: Removed session 5. Dec 12 17:47:42.707793 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 43628 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:47:42.709254 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:47:42.713314 systemd-logind[1476]: New session 6 of user core. Dec 12 17:47:42.723888 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 17:47:42.774361 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 17:47:42.774656 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:47:42.848254 sudo[1668]: pam_unix(sudo:session): session closed for user root Dec 12 17:47:42.853144 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 17:47:42.853404 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:47:42.862768 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:47:42.898362 augenrules[1690]: No rules Dec 12 17:47:42.899866 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:47:42.900812 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:47:42.902608 sudo[1667]: pam_unix(sudo:session): session closed for user root Dec 12 17:47:42.904522 sshd[1666]: Connection closed by 10.0.0.1 port 43628 Dec 12 17:47:42.904375 sshd-session[1663]: pam_unix(sshd:session): session closed for user core Dec 12 17:47:42.912333 systemd[1]: sshd@5-10.0.0.131:22-10.0.0.1:43628.service: Deactivated successfully. Dec 12 17:47:42.914093 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 17:47:42.916170 systemd-logind[1476]: Session 6 logged out. Waiting for processes to exit. Dec 12 17:47:42.918842 systemd[1]: Started sshd@6-10.0.0.131:22-10.0.0.1:43636.service - OpenSSH per-connection server daemon (10.0.0.1:43636). Dec 12 17:47:42.919606 systemd-logind[1476]: Removed session 6. 
Dec 12 17:47:42.986269 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 43636 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:47:42.987681 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:47:42.991965 systemd-logind[1476]: New session 7 of user core. Dec 12 17:47:43.008942 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 17:47:43.058349 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 17:47:43.058937 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:47:43.328970 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 17:47:43.346093 (dockerd)[1724]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 17:47:43.550945 dockerd[1724]: time="2025-12-12T17:47:43.550880659Z" level=info msg="Starting up" Dec 12 17:47:43.551699 dockerd[1724]: time="2025-12-12T17:47:43.551677853Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 17:47:43.562210 dockerd[1724]: time="2025-12-12T17:47:43.562173575Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 17:47:43.592349 dockerd[1724]: time="2025-12-12T17:47:43.592059229Z" level=info msg="Loading containers: start." Dec 12 17:47:43.599725 kernel: Initializing XFRM netlink socket Dec 12 17:47:43.806579 systemd-networkd[1409]: docker0: Link UP Dec 12 17:47:43.811503 dockerd[1724]: time="2025-12-12T17:47:43.811460874Z" level=info msg="Loading containers: done." Dec 12 17:47:43.823228 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1242426596-merged.mount: Deactivated successfully. Dec 12 17:47:43.828939 dockerd[1724]: time="2025-12-12T17:47:43.828585643Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 17:47:43.828939 dockerd[1724]: time="2025-12-12T17:47:43.828672523Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 17:47:43.828939 dockerd[1724]: time="2025-12-12T17:47:43.828784180Z" level=info msg="Initializing buildkit" Dec 12 17:47:43.855305 dockerd[1724]: time="2025-12-12T17:47:43.855205457Z" level=info msg="Completed buildkit initialization" Dec 12 17:47:43.861601 dockerd[1724]: time="2025-12-12T17:47:43.861552809Z" level=info msg="Daemon has completed initialization" Dec 12 17:47:43.861821 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 17:47:43.861910 dockerd[1724]: time="2025-12-12T17:47:43.861774461Z" level=info msg="API listen on /run/docker.sock" Dec 12 17:47:44.400747 containerd[1498]: time="2025-12-12T17:47:44.400625070Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Dec 12 17:47:45.092529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3140955483.mount: Deactivated successfully. 
Dec 12 17:47:46.121052 containerd[1498]: time="2025-12-12T17:47:46.121004262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:47:46.121834 containerd[1498]: time="2025-12-12T17:47:46.121805356Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387283" Dec 12 17:47:46.122385 containerd[1498]: time="2025-12-12T17:47:46.122342239Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:47:46.124726 containerd[1498]: time="2025-12-12T17:47:46.124683718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:47:46.126515 containerd[1498]: time="2025-12-12T17:47:46.126471394Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.725802392s" Dec 12 17:47:46.126555 containerd[1498]: time="2025-12-12T17:47:46.126515402Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Dec 12 17:47:46.127716 containerd[1498]: time="2025-12-12T17:47:46.127678059Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Dec 12 17:47:47.095037 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 17:47:47.097203 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:47:47.254159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:47:47.268034 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:47:47.308640 kubelet[2011]: E1212 17:47:47.308578 2011 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:47:47.311437 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:47:47.311553 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:47:47.312516 systemd[1]: kubelet.service: Consumed 143ms CPU time, 108M memory peak. 
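For scale, the kube-apiserver pull above reports an image size of 27,383,880 bytes fetched in 1.725802392s, i.e. roughly 15 MiB/s from registry.k8s.io. A small sketch of that arithmetic, using only the two values reported in the log:

    // pullrate.go - effective throughput of the kube-apiserver image pull above.
    package main

    import "fmt"

    func main() {
        const sizeBytes = 27383880.0 // size "27383880" from the log
        const seconds = 1.725802392  // "in 1.725802392s" from the log
        mib := sizeBytes / (1 << 20)
        fmt.Printf("%.1f MiB in %.2f s = %.1f MiB/s\n", mib, seconds, mib/seconds)
    }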
Dec 12 17:47:47.431831 containerd[1498]: time="2025-12-12T17:47:47.431505122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:47:47.432135 containerd[1498]: time="2025-12-12T17:47:47.431985313Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553083" Dec 12 17:47:47.433202 containerd[1498]: time="2025-12-12T17:47:47.433138542Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:47:47.435577 containerd[1498]: time="2025-12-12T17:47:47.435541643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:47:47.436946 containerd[1498]: time="2025-12-12T17:47:47.436787897Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.309063745s" Dec 12 17:47:47.436946 containerd[1498]: time="2025-12-12T17:47:47.436826339Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Dec 12 17:47:47.437302 containerd[1498]: time="2025-12-12T17:47:47.437227737Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Dec 12 17:47:48.612433 containerd[1498]: time="2025-12-12T17:47:48.612007346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:47:48.613350 containerd[1498]: time="2025-12-12T17:47:48.613326654Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298069" Dec 12 17:47:48.614116 containerd[1498]: time="2025-12-12T17:47:48.614086665Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:47:48.617307 containerd[1498]: time="2025-12-12T17:47:48.617270771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:47:48.619183 containerd[1498]: time="2025-12-12T17:47:48.619146232Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.181886723s" Dec 12 17:47:48.619183 containerd[1498]: time="2025-12-12T17:47:48.619179850Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Dec 12 17:47:48.619656 
containerd[1498]: time="2025-12-12T17:47:48.619563854Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Dec 12 17:47:49.556479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3820972445.mount: Deactivated successfully. Dec 12 17:47:49.790688 containerd[1498]: time="2025-12-12T17:47:49.790631936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:47:49.791172 containerd[1498]: time="2025-12-12T17:47:49.791140332Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258675" Dec 12 17:47:49.792075 containerd[1498]: time="2025-12-12T17:47:49.792017546Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:47:49.793949 containerd[1498]: time="2025-12-12T17:47:49.793896862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:47:49.794733 containerd[1498]: time="2025-12-12T17:47:49.794350756Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.174682499s" Dec 12 17:47:49.794733 containerd[1498]: time="2025-12-12T17:47:49.794379301Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Dec 12 17:47:49.794896 containerd[1498]: time="2025-12-12T17:47:49.794759902Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Dec 12 17:47:50.405650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1759087837.mount: Deactivated successfully. 
Dec 12 17:47:51.268664 containerd[1498]: time="2025-12-12T17:47:51.268601365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:47:51.269250 containerd[1498]: time="2025-12-12T17:47:51.269218653Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Dec 12 17:47:51.270269 containerd[1498]: time="2025-12-12T17:47:51.270239642Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:47:51.272679 containerd[1498]: time="2025-12-12T17:47:51.272647337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:47:51.274442 containerd[1498]: time="2025-12-12T17:47:51.274401157Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.479611666s" Dec 12 17:47:51.274442 containerd[1498]: time="2025-12-12T17:47:51.274438661Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Dec 12 17:47:51.274857 containerd[1498]: time="2025-12-12T17:47:51.274828731Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 17:47:51.695153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1817295497.mount: Deactivated successfully. 
Dec 12 17:47:51.699279 containerd[1498]: time="2025-12-12T17:47:51.699236187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:47:51.699978 containerd[1498]: time="2025-12-12T17:47:51.699953075Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Dec 12 17:47:51.700761 containerd[1498]: time="2025-12-12T17:47:51.700729150Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:47:51.702811 containerd[1498]: time="2025-12-12T17:47:51.702782088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:47:51.703679 containerd[1498]: time="2025-12-12T17:47:51.703367930Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 428.511778ms" Dec 12 17:47:51.703679 containerd[1498]: time="2025-12-12T17:47:51.703391684Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 12 17:47:51.703855 containerd[1498]: time="2025-12-12T17:47:51.703786855Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Dec 12 17:47:52.297597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3857713029.mount: Deactivated successfully. 
Dec 12 17:47:53.942256 containerd[1498]: time="2025-12-12T17:47:53.942191261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:47:53.943071 containerd[1498]: time="2025-12-12T17:47:53.943025992Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013653" Dec 12 17:47:53.943737 containerd[1498]: time="2025-12-12T17:47:53.943689278Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:47:53.947132 containerd[1498]: time="2025-12-12T17:47:53.947103994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:47:53.948226 containerd[1498]: time="2025-12-12T17:47:53.948188914Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.244370686s" Dec 12 17:47:53.948226 containerd[1498]: time="2025-12-12T17:47:53.948222182Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Dec 12 17:47:57.562003 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 17:47:57.563922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:47:57.724280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:47:57.737065 (kubelet)[2176]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:47:57.772470 kubelet[2176]: E1212 17:47:57.772411 2176 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:47:57.774994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:47:57.775119 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:47:57.775605 systemd[1]: kubelet.service: Consumed 136ms CPU time, 107.2M memory peak. Dec 12 17:48:00.260121 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:48:00.260256 systemd[1]: kubelet.service: Consumed 136ms CPU time, 107.2M memory peak. Dec 12 17:48:00.262695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:48:00.287314 systemd[1]: Reload requested from client PID 2191 ('systemctl') (unit session-7.scope)... Dec 12 17:48:00.287330 systemd[1]: Reloading... Dec 12 17:48:00.357752 zram_generator::config[2234]: No configuration found. Dec 12 17:48:00.663371 systemd[1]: Reloading finished in 375 ms. Dec 12 17:48:00.729195 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 17:48:00.729270 systemd[1]: kubelet.service: Failed with result 'signal'. 
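By 17:47:54 the seven images pulled above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause and etcd) are stored in containerd's "k8s.io" namespace, the namespace registered with NRI earlier in the log. A minimal sketch of listing them with the containerd Go client follows; the module path and calls are the upstream 1.x client API and are an assumption here (containerd 2.x hosts an equivalent client under github.com/containerd/containerd/v2/client), not something shown in the log:

    // listimages.go - illustrative: lists images in containerd's k8s.io namespace
    // over /run/containerd/containerd.sock (assumes the containerd Go client module).
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace seen in the log.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        images, err := client.ListImages(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, img := range images {
            fmt.Println(img.Name())
        }
    }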
Dec 12 17:48:00.729528 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:48:00.729571 systemd[1]: kubelet.service: Consumed 94ms CPU time, 95M memory peak. Dec 12 17:48:00.731044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:48:00.848105 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:48:00.859002 (kubelet)[2279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:48:00.889166 kubelet[2279]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:48:00.889166 kubelet[2279]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:48:00.889166 kubelet[2279]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:48:00.889480 kubelet[2279]: I1212 17:48:00.889206 2279 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:48:01.344090 kubelet[2279]: I1212 17:48:01.344033 2279 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 17:48:01.344090 kubelet[2279]: I1212 17:48:01.344062 2279 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:48:01.344297 kubelet[2279]: I1212 17:48:01.344280 2279 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:48:01.369188 kubelet[2279]: E1212 17:48:01.369142 2279 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 17:48:01.369659 kubelet[2279]: I1212 17:48:01.369642 2279 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:48:01.376290 kubelet[2279]: I1212 17:48:01.376242 2279 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:48:01.380735 kubelet[2279]: I1212 17:48:01.380473 2279 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 17:48:01.381732 kubelet[2279]: I1212 17:48:01.381649 2279 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:48:01.381872 kubelet[2279]: I1212 17:48:01.381700 2279 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:48:01.381964 kubelet[2279]: I1212 17:48:01.381942 2279 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:48:01.381964 kubelet[2279]: I1212 17:48:01.381952 2279 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 17:48:01.382729 kubelet[2279]: I1212 17:48:01.382667 2279 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:48:01.386964 kubelet[2279]: I1212 17:48:01.386920 2279 kubelet.go:480] "Attempting to sync node with API server" Dec 12 17:48:01.386964 kubelet[2279]: I1212 17:48:01.386961 2279 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:48:01.387062 kubelet[2279]: I1212 17:48:01.386997 2279 kubelet.go:386] "Adding apiserver pod source" Dec 12 17:48:01.388010 kubelet[2279]: I1212 17:48:01.387976 2279 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:48:01.390208 kubelet[2279]: I1212 17:48:01.389610 2279 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 17:48:01.390208 kubelet[2279]: E1212 17:48:01.390161 2279 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 17:48:01.390208 kubelet[2279]: E1212 17:48:01.390157 2279 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 17:48:01.390754 kubelet[2279]: I1212 17:48:01.390725 2279 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:48:01.390884 kubelet[2279]: W1212 17:48:01.390869 2279 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 17:48:01.393602 kubelet[2279]: I1212 17:48:01.393583 2279 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:48:01.393733 kubelet[2279]: I1212 17:48:01.393703 2279 server.go:1289] "Started kubelet" Dec 12 17:48:01.394964 kubelet[2279]: I1212 17:48:01.394801 2279 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:48:01.395769 kubelet[2279]: I1212 17:48:01.395353 2279 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:48:01.395769 kubelet[2279]: I1212 17:48:01.395678 2279 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:48:01.396489 kubelet[2279]: I1212 17:48:01.396456 2279 server.go:317] "Adding debug handlers to kubelet server" Dec 12 17:48:01.398779 kubelet[2279]: I1212 17:48:01.398321 2279 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:48:01.398779 kubelet[2279]: I1212 17:48:01.398459 2279 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:48:01.398779 kubelet[2279]: I1212 17:48:01.398538 2279 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:48:01.400337 kubelet[2279]: E1212 17:48:01.399250 2279 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.131:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.131:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188088feb082e46a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 17:48:01.393673322 +0000 UTC m=+0.531578444,LastTimestamp:2025-12-12 17:48:01.393673322 +0000 UTC m=+0.531578444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 12 17:48:01.400337 kubelet[2279]: E1212 17:48:01.400327 2279 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:48:01.401015 kubelet[2279]: I1212 17:48:01.400681 2279 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:48:01.401015 kubelet[2279]: I1212 17:48:01.400705 2279 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:48:01.401015 kubelet[2279]: E1212 17:48:01.400872 2279 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:48:01.401369 kubelet[2279]: E1212 17:48:01.401330 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="200ms" Dec 12 17:48:01.401862 kubelet[2279]: I1212 17:48:01.401837 2279 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:48:01.401932 kubelet[2279]: I1212 17:48:01.401919 2279 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:48:01.402746 kubelet[2279]: I1212 17:48:01.402617 2279 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:48:01.402838 kubelet[2279]: E1212 17:48:01.402810 2279 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:48:01.412931 kubelet[2279]: I1212 17:48:01.412903 2279 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:48:01.412931 kubelet[2279]: I1212 17:48:01.412924 2279 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:48:01.412931 kubelet[2279]: I1212 17:48:01.412944 2279 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:48:01.416489 kubelet[2279]: I1212 17:48:01.416425 2279 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 12 17:48:01.417546 kubelet[2279]: I1212 17:48:01.417516 2279 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 17:48:01.417546 kubelet[2279]: I1212 17:48:01.417536 2279 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 17:48:01.417619 kubelet[2279]: I1212 17:48:01.417554 2279 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 17:48:01.417619 kubelet[2279]: I1212 17:48:01.417562 2279 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 17:48:01.417619 kubelet[2279]: E1212 17:48:01.417597 2279 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:48:01.501089 kubelet[2279]: E1212 17:48:01.501036 2279 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:48:01.518259 kubelet[2279]: E1212 17:48:01.518220 2279 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 17:48:01.551796 kubelet[2279]: I1212 17:48:01.551761 2279 policy_none.go:49] "None policy: Start" Dec 12 17:48:01.551796 kubelet[2279]: I1212 17:48:01.551797 2279 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:48:01.551874 kubelet[2279]: I1212 17:48:01.551810 2279 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:48:01.554497 kubelet[2279]: E1212 17:48:01.554455 2279 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 17:48:01.557939 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 17:48:01.574984 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 17:48:01.578291 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 17:48:01.594968 kubelet[2279]: E1212 17:48:01.594853 2279 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:48:01.595942 kubelet[2279]: I1212 17:48:01.595905 2279 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:48:01.596019 kubelet[2279]: I1212 17:48:01.595929 2279 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:48:01.596434 kubelet[2279]: I1212 17:48:01.596370 2279 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:48:01.597500 kubelet[2279]: E1212 17:48:01.597473 2279 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:48:01.597556 kubelet[2279]: E1212 17:48:01.597519 2279 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 12 17:48:01.602190 kubelet[2279]: E1212 17:48:01.602160 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="400ms" Dec 12 17:48:01.697400 kubelet[2279]: I1212 17:48:01.697353 2279 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:48:01.697835 kubelet[2279]: E1212 17:48:01.697805 2279 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Dec 12 17:48:01.728677 systemd[1]: Created slice kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice - libcontainer container kubepods-burstable-pod6e6cfcfb327385445a9bb0d2bc2fd5d4.slice. Dec 12 17:48:01.751074 kubelet[2279]: E1212 17:48:01.751035 2279 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:48:01.754237 systemd[1]: Created slice kubepods-burstable-pod1aecf8770408cf5981bd4f194ff87394.slice - libcontainer container kubepods-burstable-pod1aecf8770408cf5981bd4f194ff87394.slice. Dec 12 17:48:01.774010 kubelet[2279]: E1212 17:48:01.773970 2279 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:48:01.776503 systemd[1]: Created slice kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice - libcontainer container kubepods-burstable-pod66e26b992bcd7ea6fb75e339cf7a3f7d.slice. 
Dec 12 17:48:01.778431 kubelet[2279]: E1212 17:48:01.778389 2279 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:48:01.803849 kubelet[2279]: I1212 17:48:01.803799 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:48:01.803849 kubelet[2279]: I1212 17:48:01.803842 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:48:01.803998 kubelet[2279]: I1212 17:48:01.803864 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:48:01.803998 kubelet[2279]: I1212 17:48:01.803884 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1aecf8770408cf5981bd4f194ff87394-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1aecf8770408cf5981bd4f194ff87394\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:48:01.803998 kubelet[2279]: I1212 17:48:01.803912 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1aecf8770408cf5981bd4f194ff87394-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1aecf8770408cf5981bd4f194ff87394\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:48:01.803998 kubelet[2279]: I1212 17:48:01.803926 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1aecf8770408cf5981bd4f194ff87394-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1aecf8770408cf5981bd4f194ff87394\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:48:01.803998 kubelet[2279]: I1212 17:48:01.803942 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:48:01.804100 kubelet[2279]: I1212 17:48:01.803958 2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:48:01.804100 kubelet[2279]: I1212 17:48:01.803974 2279 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:48:01.900655 kubelet[2279]: I1212 17:48:01.900104 2279 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:48:01.900655 kubelet[2279]: E1212 17:48:01.900418 2279 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Dec 12 17:48:02.003299 kubelet[2279]: E1212 17:48:02.003234 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="800ms" Dec 12 17:48:02.053384 containerd[1498]: time="2025-12-12T17:48:02.053326755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,}" Dec 12 17:48:02.074026 containerd[1498]: time="2025-12-12T17:48:02.073973581Z" level=info msg="connecting to shim 0cba3c16fb36e1b1653b95e8fbea73fc6a94e8fad9dcdc355e414efd8fd9da28" address="unix:///run/containerd/s/e407e91e6b9a6efa32806aae1adb82ced0c415ee89e1db03e5adfa47d5337f91" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:48:02.075982 containerd[1498]: time="2025-12-12T17:48:02.075944825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1aecf8770408cf5981bd4f194ff87394,Namespace:kube-system,Attempt:0,}" Dec 12 17:48:02.082502 containerd[1498]: time="2025-12-12T17:48:02.080268596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,}" Dec 12 17:48:02.116947 systemd[1]: Started cri-containerd-0cba3c16fb36e1b1653b95e8fbea73fc6a94e8fad9dcdc355e414efd8fd9da28.scope - libcontainer container 0cba3c16fb36e1b1653b95e8fbea73fc6a94e8fad9dcdc355e414efd8fd9da28. Dec 12 17:48:02.125795 containerd[1498]: time="2025-12-12T17:48:02.125738022Z" level=info msg="connecting to shim c10fba1c6b3b9ea55fcd6c9725ad4ae7ea8c3acb459856baf243ffbca6ed1b46" address="unix:///run/containerd/s/5c4b62bdf9b867ac5cd3dd8b43e41f4110133fc95d0d82366f2f2072c6c21ac6" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:48:02.126934 containerd[1498]: time="2025-12-12T17:48:02.126900897Z" level=info msg="connecting to shim d87f85fcc3a1eaed247ad566b0f2dc5c32b8241b6e749cfacf6e1c84e65a7580" address="unix:///run/containerd/s/52a231ac47ff8860f77cce8b8c7ac670b08d17a9f1eeb31c137c91ed613016a4" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:48:02.157880 systemd[1]: Started cri-containerd-d87f85fcc3a1eaed247ad566b0f2dc5c32b8241b6e749cfacf6e1c84e65a7580.scope - libcontainer container d87f85fcc3a1eaed247ad566b0f2dc5c32b8241b6e749cfacf6e1c84e65a7580. Dec 12 17:48:02.162269 systemd[1]: Started cri-containerd-c10fba1c6b3b9ea55fcd6c9725ad4ae7ea8c3acb459856baf243ffbca6ed1b46.scope - libcontainer container c10fba1c6b3b9ea55fcd6c9725ad4ae7ea8c3acb459856baf243ffbca6ed1b46. 
Dec 12 17:48:02.170090 containerd[1498]: time="2025-12-12T17:48:02.170030304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6e6cfcfb327385445a9bb0d2bc2fd5d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cba3c16fb36e1b1653b95e8fbea73fc6a94e8fad9dcdc355e414efd8fd9da28\"" Dec 12 17:48:02.177593 containerd[1498]: time="2025-12-12T17:48:02.177409740Z" level=info msg="CreateContainer within sandbox \"0cba3c16fb36e1b1653b95e8fbea73fc6a94e8fad9dcdc355e414efd8fd9da28\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 17:48:02.190541 containerd[1498]: time="2025-12-12T17:48:02.190504393Z" level=info msg="Container b71d41d70df7092fc17af32b0fb5fe851c21f93b8234893d22054ec82af7736d: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:48:02.200185 containerd[1498]: time="2025-12-12T17:48:02.199423111Z" level=info msg="CreateContainer within sandbox \"0cba3c16fb36e1b1653b95e8fbea73fc6a94e8fad9dcdc355e414efd8fd9da28\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b71d41d70df7092fc17af32b0fb5fe851c21f93b8234893d22054ec82af7736d\"" Dec 12 17:48:02.201461 containerd[1498]: time="2025-12-12T17:48:02.201416577Z" level=info msg="StartContainer for \"b71d41d70df7092fc17af32b0fb5fe851c21f93b8234893d22054ec82af7736d\"" Dec 12 17:48:02.203869 containerd[1498]: time="2025-12-12T17:48:02.203830174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1aecf8770408cf5981bd4f194ff87394,Namespace:kube-system,Attempt:0,} returns sandbox id \"c10fba1c6b3b9ea55fcd6c9725ad4ae7ea8c3acb459856baf243ffbca6ed1b46\"" Dec 12 17:48:02.204005 containerd[1498]: time="2025-12-12T17:48:02.203980569Z" level=info msg="connecting to shim b71d41d70df7092fc17af32b0fb5fe851c21f93b8234893d22054ec82af7736d" address="unix:///run/containerd/s/e407e91e6b9a6efa32806aae1adb82ced0c415ee89e1db03e5adfa47d5337f91" protocol=ttrpc version=3 Dec 12 17:48:02.207893 containerd[1498]: time="2025-12-12T17:48:02.207862907Z" level=info msg="CreateContainer within sandbox \"c10fba1c6b3b9ea55fcd6c9725ad4ae7ea8c3acb459856baf243ffbca6ed1b46\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:48:02.210431 containerd[1498]: time="2025-12-12T17:48:02.210388931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:66e26b992bcd7ea6fb75e339cf7a3f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d87f85fcc3a1eaed247ad566b0f2dc5c32b8241b6e749cfacf6e1c84e65a7580\"" Dec 12 17:48:02.217215 containerd[1498]: time="2025-12-12T17:48:02.217184811Z" level=info msg="CreateContainer within sandbox \"d87f85fcc3a1eaed247ad566b0f2dc5c32b8241b6e749cfacf6e1c84e65a7580\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:48:02.217432 containerd[1498]: time="2025-12-12T17:48:02.217394237Z" level=info msg="Container 5cc841ec69b7083cdf54c86c15158989c8998bb0b0534355c82476bd91cd6b24: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:48:02.224372 containerd[1498]: time="2025-12-12T17:48:02.224336276Z" level=info msg="Container 3e87552a8e781b594dc8569ac135ee540dd6382555888f1ccbe397f581bdc53d: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:48:02.224886 systemd[1]: Started cri-containerd-b71d41d70df7092fc17af32b0fb5fe851c21f93b8234893d22054ec82af7736d.scope - libcontainer container b71d41d70df7092fc17af32b0fb5fe851c21f93b8234893d22054ec82af7736d. 
Dec 12 17:48:02.228100 containerd[1498]: time="2025-12-12T17:48:02.228040482Z" level=info msg="CreateContainer within sandbox \"c10fba1c6b3b9ea55fcd6c9725ad4ae7ea8c3acb459856baf243ffbca6ed1b46\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5cc841ec69b7083cdf54c86c15158989c8998bb0b0534355c82476bd91cd6b24\"" Dec 12 17:48:02.229414 containerd[1498]: time="2025-12-12T17:48:02.229362425Z" level=info msg="StartContainer for \"5cc841ec69b7083cdf54c86c15158989c8998bb0b0534355c82476bd91cd6b24\"" Dec 12 17:48:02.229824 containerd[1498]: time="2025-12-12T17:48:02.229793427Z" level=info msg="CreateContainer within sandbox \"d87f85fcc3a1eaed247ad566b0f2dc5c32b8241b6e749cfacf6e1c84e65a7580\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3e87552a8e781b594dc8569ac135ee540dd6382555888f1ccbe397f581bdc53d\"" Dec 12 17:48:02.230521 containerd[1498]: time="2025-12-12T17:48:02.230495484Z" level=info msg="connecting to shim 5cc841ec69b7083cdf54c86c15158989c8998bb0b0534355c82476bd91cd6b24" address="unix:///run/containerd/s/5c4b62bdf9b867ac5cd3dd8b43e41f4110133fc95d0d82366f2f2072c6c21ac6" protocol=ttrpc version=3 Dec 12 17:48:02.231744 containerd[1498]: time="2025-12-12T17:48:02.231296380Z" level=info msg="StartContainer for \"3e87552a8e781b594dc8569ac135ee540dd6382555888f1ccbe397f581bdc53d\"" Dec 12 17:48:02.234057 containerd[1498]: time="2025-12-12T17:48:02.234019160Z" level=info msg="connecting to shim 3e87552a8e781b594dc8569ac135ee540dd6382555888f1ccbe397f581bdc53d" address="unix:///run/containerd/s/52a231ac47ff8860f77cce8b8c7ac670b08d17a9f1eeb31c137c91ed613016a4" protocol=ttrpc version=3 Dec 12 17:48:02.249090 systemd[1]: Started cri-containerd-5cc841ec69b7083cdf54c86c15158989c8998bb0b0534355c82476bd91cd6b24.scope - libcontainer container 5cc841ec69b7083cdf54c86c15158989c8998bb0b0534355c82476bd91cd6b24. Dec 12 17:48:02.257897 systemd[1]: Started cri-containerd-3e87552a8e781b594dc8569ac135ee540dd6382555888f1ccbe397f581bdc53d.scope - libcontainer container 3e87552a8e781b594dc8569ac135ee540dd6382555888f1ccbe397f581bdc53d. 
Dec 12 17:48:02.289146 containerd[1498]: time="2025-12-12T17:48:02.289031866Z" level=info msg="StartContainer for \"b71d41d70df7092fc17af32b0fb5fe851c21f93b8234893d22054ec82af7736d\" returns successfully" Dec 12 17:48:02.303852 kubelet[2279]: I1212 17:48:02.303558 2279 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:48:02.303947 kubelet[2279]: E1212 17:48:02.303921 2279 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Dec 12 17:48:02.305754 containerd[1498]: time="2025-12-12T17:48:02.305691280Z" level=info msg="StartContainer for \"5cc841ec69b7083cdf54c86c15158989c8998bb0b0534355c82476bd91cd6b24\" returns successfully" Dec 12 17:48:02.318740 containerd[1498]: time="2025-12-12T17:48:02.317644081Z" level=info msg="StartContainer for \"3e87552a8e781b594dc8569ac135ee540dd6382555888f1ccbe397f581bdc53d\" returns successfully" Dec 12 17:48:02.361328 kubelet[2279]: E1212 17:48:02.361257 2279 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 17:48:02.431743 kubelet[2279]: E1212 17:48:02.431379 2279 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:48:02.432844 kubelet[2279]: E1212 17:48:02.432820 2279 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:48:02.438523 kubelet[2279]: E1212 17:48:02.438495 2279 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:48:03.105631 kubelet[2279]: I1212 17:48:03.105602 2279 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:48:03.439469 kubelet[2279]: E1212 17:48:03.439353 2279 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:48:03.439575 kubelet[2279]: E1212 17:48:03.439518 2279 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:48:03.863116 kubelet[2279]: E1212 17:48:03.863062 2279 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 12 17:48:03.930639 kubelet[2279]: I1212 17:48:03.930600 2279 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:48:04.001273 kubelet[2279]: I1212 17:48:04.001232 2279 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:48:04.011787 kubelet[2279]: E1212 17:48:04.011756 2279 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 12 17:48:04.011787 kubelet[2279]: I1212 17:48:04.011788 2279 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-localhost" Dec 12 17:48:04.013672 kubelet[2279]: E1212 17:48:04.013646 2279 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 12 17:48:04.013672 kubelet[2279]: I1212 17:48:04.013671 2279 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:48:04.017315 kubelet[2279]: E1212 17:48:04.017287 2279 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:48:04.390451 kubelet[2279]: I1212 17:48:04.390410 2279 apiserver.go:52] "Watching apiserver" Dec 12 17:48:04.401059 kubelet[2279]: I1212 17:48:04.400992 2279 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:48:05.795279 systemd[1]: Reload requested from client PID 2562 ('systemctl') (unit session-7.scope)... Dec 12 17:48:05.795295 systemd[1]: Reloading... Dec 12 17:48:05.854758 zram_generator::config[2604]: No configuration found. Dec 12 17:48:06.019620 systemd[1]: Reloading finished in 224 ms. Dec 12 17:48:06.045903 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:48:06.065594 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:48:06.065876 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:48:06.065934 systemd[1]: kubelet.service: Consumed 892ms CPU time, 128.5M memory peak. Dec 12 17:48:06.067572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:48:06.206186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:48:06.210654 (kubelet)[2647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:48:06.254240 kubelet[2647]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:48:06.254240 kubelet[2647]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:48:06.254240 kubelet[2647]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 12 17:48:06.254560 kubelet[2647]: I1212 17:48:06.254263 2647 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:48:06.259933 kubelet[2647]: I1212 17:48:06.259869 2647 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Dec 12 17:48:06.260506 kubelet[2647]: I1212 17:48:06.260052 2647 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:48:06.260746 kubelet[2647]: I1212 17:48:06.260726 2647 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:48:06.262258 kubelet[2647]: I1212 17:48:06.262233 2647 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 17:48:06.265501 kubelet[2647]: I1212 17:48:06.265477 2647 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:48:06.268878 kubelet[2647]: I1212 17:48:06.268850 2647 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:48:06.271766 kubelet[2647]: I1212 17:48:06.271743 2647 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 17:48:06.272029 kubelet[2647]: I1212 17:48:06.271995 2647 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:48:06.272197 kubelet[2647]: I1212 17:48:06.272031 2647 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:48:06.272275 kubelet[2647]: I1212 17:48:06.272207 2647 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:48:06.272275 kubelet[2647]: I1212 17:48:06.272217 2647 container_manager_linux.go:303] "Creating device plugin manager" Dec 12 17:48:06.272275 kubelet[2647]: I1212 17:48:06.272267 2647 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:48:06.272435 kubelet[2647]: I1212 
17:48:06.272422 2647 kubelet.go:480] "Attempting to sync node with API server" Dec 12 17:48:06.272465 kubelet[2647]: I1212 17:48:06.272439 2647 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:48:06.272465 kubelet[2647]: I1212 17:48:06.272463 2647 kubelet.go:386] "Adding apiserver pod source" Dec 12 17:48:06.272509 kubelet[2647]: I1212 17:48:06.272477 2647 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:48:06.273515 kubelet[2647]: I1212 17:48:06.273408 2647 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 17:48:06.279014 kubelet[2647]: I1212 17:48:06.278986 2647 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:48:06.282141 kubelet[2647]: I1212 17:48:06.282109 2647 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:48:06.282216 kubelet[2647]: I1212 17:48:06.282159 2647 server.go:1289] "Started kubelet" Dec 12 17:48:06.284004 kubelet[2647]: I1212 17:48:06.283966 2647 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:48:06.285233 kubelet[2647]: I1212 17:48:06.285185 2647 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:48:06.285473 kubelet[2647]: I1212 17:48:06.285456 2647 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:48:06.286900 kubelet[2647]: I1212 17:48:06.286703 2647 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:48:06.287975 kubelet[2647]: I1212 17:48:06.287949 2647 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:48:06.289176 kubelet[2647]: I1212 17:48:06.289156 2647 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:48:06.289366 kubelet[2647]: E1212 17:48:06.289348 2647 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:48:06.290634 kubelet[2647]: I1212 17:48:06.290290 2647 server.go:317] "Adding debug handlers to kubelet server" Dec 12 17:48:06.290963 kubelet[2647]: I1212 17:48:06.290942 2647 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:48:06.291151 kubelet[2647]: I1212 17:48:06.291138 2647 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:48:06.295630 kubelet[2647]: I1212 17:48:06.295591 2647 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:48:06.299801 kubelet[2647]: I1212 17:48:06.298163 2647 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:48:06.299801 kubelet[2647]: I1212 17:48:06.298186 2647 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:48:06.299801 kubelet[2647]: E1212 17:48:06.299270 2647 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:48:06.305893 kubelet[2647]: I1212 17:48:06.305852 2647 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Dec 12 17:48:06.306842 kubelet[2647]: I1212 17:48:06.306816 2647 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 12 17:48:06.306875 kubelet[2647]: I1212 17:48:06.306844 2647 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 12 17:48:06.306875 kubelet[2647]: I1212 17:48:06.306868 2647 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 17:48:06.306929 kubelet[2647]: I1212 17:48:06.306876 2647 kubelet.go:2436] "Starting kubelet main sync loop" Dec 12 17:48:06.306950 kubelet[2647]: E1212 17:48:06.306927 2647 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:48:06.339761 kubelet[2647]: I1212 17:48:06.339732 2647 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:48:06.339761 kubelet[2647]: I1212 17:48:06.339753 2647 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:48:06.339908 kubelet[2647]: I1212 17:48:06.339772 2647 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:48:06.339933 kubelet[2647]: I1212 17:48:06.339912 2647 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 17:48:06.339964 kubelet[2647]: I1212 17:48:06.339924 2647 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 17:48:06.339964 kubelet[2647]: I1212 17:48:06.339940 2647 policy_none.go:49] "None policy: Start" Dec 12 17:48:06.339964 kubelet[2647]: I1212 17:48:06.339949 2647 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:48:06.339964 kubelet[2647]: I1212 17:48:06.339958 2647 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:48:06.340058 kubelet[2647]: I1212 17:48:06.340041 2647 state_mem.go:75] "Updated machine memory state" Dec 12 17:48:06.344176 kubelet[2647]: E1212 17:48:06.343595 2647 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:48:06.344176 kubelet[2647]: I1212 17:48:06.343789 2647 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:48:06.344176 kubelet[2647]: I1212 17:48:06.343802 2647 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:48:06.344176 kubelet[2647]: I1212 17:48:06.343961 2647 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:48:06.345208 kubelet[2647]: E1212 17:48:06.345180 2647 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:48:06.407906 kubelet[2647]: I1212 17:48:06.407867 2647 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:48:06.407906 kubelet[2647]: I1212 17:48:06.407887 2647 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:48:06.408152 kubelet[2647]: I1212 17:48:06.408135 2647 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:48:06.446627 kubelet[2647]: I1212 17:48:06.446586 2647 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:48:06.452750 kubelet[2647]: I1212 17:48:06.452693 2647 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 12 17:48:06.452875 kubelet[2647]: I1212 17:48:06.452803 2647 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:48:06.493193 kubelet[2647]: I1212 17:48:06.493062 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1aecf8770408cf5981bd4f194ff87394-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1aecf8770408cf5981bd4f194ff87394\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:48:06.493193 kubelet[2647]: I1212 17:48:06.493105 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1aecf8770408cf5981bd4f194ff87394-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1aecf8770408cf5981bd4f194ff87394\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:48:06.493193 kubelet[2647]: I1212 17:48:06.493136 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:48:06.493193 kubelet[2647]: I1212 17:48:06.493151 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:48:06.493193 kubelet[2647]: I1212 17:48:06.493174 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:48:06.493420 kubelet[2647]: I1212 17:48:06.493226 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:48:06.493420 kubelet[2647]: I1212 17:48:06.493262 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/1aecf8770408cf5981bd4f194ff87394-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1aecf8770408cf5981bd4f194ff87394\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:48:06.493420 kubelet[2647]: I1212 17:48:06.493280 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/66e26b992bcd7ea6fb75e339cf7a3f7d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"66e26b992bcd7ea6fb75e339cf7a3f7d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:48:06.493420 kubelet[2647]: I1212 17:48:06.493298 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e6cfcfb327385445a9bb0d2bc2fd5d4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6e6cfcfb327385445a9bb0d2bc2fd5d4\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:48:07.273523 kubelet[2647]: I1212 17:48:07.273439 2647 apiserver.go:52] "Watching apiserver" Dec 12 17:48:07.291217 kubelet[2647]: I1212 17:48:07.291172 2647 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:48:07.329141 kubelet[2647]: I1212 17:48:07.329108 2647 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:48:07.329141 kubelet[2647]: I1212 17:48:07.329136 2647 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:48:07.335740 kubelet[2647]: E1212 17:48:07.333897 2647 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 12 17:48:07.335740 kubelet[2647]: E1212 17:48:07.334235 2647 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 12 17:48:07.377336 kubelet[2647]: I1212 17:48:07.376516 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.376487067 podStartE2EDuration="1.376487067s" podCreationTimestamp="2025-12-12 17:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:48:07.376442905 +0000 UTC m=+1.162025119" watchObservedRunningTime="2025-12-12 17:48:07.376487067 +0000 UTC m=+1.162069281" Dec 12 17:48:07.396815 kubelet[2647]: I1212 17:48:07.396760 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.396744661 podStartE2EDuration="1.396744661s" podCreationTimestamp="2025-12-12 17:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:48:07.396697659 +0000 UTC m=+1.182279873" watchObservedRunningTime="2025-12-12 17:48:07.396744661 +0000 UTC m=+1.182326875" Dec 12 17:48:07.413106 kubelet[2647]: I1212 17:48:07.413042 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.4130251 podStartE2EDuration="1.4130251s" podCreationTimestamp="2025-12-12 17:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-12 17:48:07.405442608 +0000 UTC m=+1.191024822" watchObservedRunningTime="2025-12-12 17:48:07.4130251 +0000 UTC m=+1.198607274" Dec 12 17:48:12.399049 kubelet[2647]: I1212 17:48:12.399005 2647 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 17:48:12.399592 kubelet[2647]: I1212 17:48:12.399440 2647 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 17:48:12.399622 containerd[1498]: time="2025-12-12T17:48:12.399282074Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 17:48:12.445154 systemd[1]: Created slice kubepods-besteffort-pod760c3706_9c3f_4bcd_8517_d08a6fc58514.slice - libcontainer container kubepods-besteffort-pod760c3706_9c3f_4bcd_8517_d08a6fc58514.slice. Dec 12 17:48:12.535697 kubelet[2647]: I1212 17:48:12.535633 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/760c3706-9c3f-4bcd-8517-d08a6fc58514-xtables-lock\") pod \"kube-proxy-ls9sv\" (UID: \"760c3706-9c3f-4bcd-8517-d08a6fc58514\") " pod="kube-system/kube-proxy-ls9sv" Dec 12 17:48:12.535697 kubelet[2647]: I1212 17:48:12.535687 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/760c3706-9c3f-4bcd-8517-d08a6fc58514-lib-modules\") pod \"kube-proxy-ls9sv\" (UID: \"760c3706-9c3f-4bcd-8517-d08a6fc58514\") " pod="kube-system/kube-proxy-ls9sv" Dec 12 17:48:12.535898 kubelet[2647]: I1212 17:48:12.535743 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/760c3706-9c3f-4bcd-8517-d08a6fc58514-kube-proxy\") pod \"kube-proxy-ls9sv\" (UID: \"760c3706-9c3f-4bcd-8517-d08a6fc58514\") " pod="kube-system/kube-proxy-ls9sv" Dec 12 17:48:12.535898 kubelet[2647]: I1212 17:48:12.535790 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpp7j\" (UniqueName: \"kubernetes.io/projected/760c3706-9c3f-4bcd-8517-d08a6fc58514-kube-api-access-jpp7j\") pod \"kube-proxy-ls9sv\" (UID: \"760c3706-9c3f-4bcd-8517-d08a6fc58514\") " pod="kube-system/kube-proxy-ls9sv" Dec 12 17:48:12.643340 kubelet[2647]: E1212 17:48:12.643293 2647 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 12 17:48:12.643340 kubelet[2647]: E1212 17:48:12.643325 2647 projected.go:194] Error preparing data for projected volume kube-api-access-jpp7j for pod kube-system/kube-proxy-ls9sv: configmap "kube-root-ca.crt" not found Dec 12 17:48:12.643489 kubelet[2647]: E1212 17:48:12.643386 2647 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/760c3706-9c3f-4bcd-8517-d08a6fc58514-kube-api-access-jpp7j podName:760c3706-9c3f-4bcd-8517-d08a6fc58514 nodeName:}" failed. No retries permitted until 2025-12-12 17:48:13.143365914 +0000 UTC m=+6.928948128 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jpp7j" (UniqueName: "kubernetes.io/projected/760c3706-9c3f-4bcd-8517-d08a6fc58514-kube-api-access-jpp7j") pod "kube-proxy-ls9sv" (UID: "760c3706-9c3f-4bcd-8517-d08a6fc58514") : configmap "kube-root-ca.crt" not found Dec 12 17:48:13.365003 containerd[1498]: time="2025-12-12T17:48:13.364760994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ls9sv,Uid:760c3706-9c3f-4bcd-8517-d08a6fc58514,Namespace:kube-system,Attempt:0,}" Dec 12 17:48:13.381900 containerd[1498]: time="2025-12-12T17:48:13.381856474Z" level=info msg="connecting to shim d126f8dd4c20e68a92f3d07641e9a98aeea24537dedfcf7b6b8f4cd68f9ea313" address="unix:///run/containerd/s/4d35c39c5e79dfc402f5d15a29f25c2e309f7690c04b511a1e2401ce1d3e79b6" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:48:13.415911 systemd[1]: Started cri-containerd-d126f8dd4c20e68a92f3d07641e9a98aeea24537dedfcf7b6b8f4cd68f9ea313.scope - libcontainer container d126f8dd4c20e68a92f3d07641e9a98aeea24537dedfcf7b6b8f4cd68f9ea313. Dec 12 17:48:13.446490 containerd[1498]: time="2025-12-12T17:48:13.446450498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ls9sv,Uid:760c3706-9c3f-4bcd-8517-d08a6fc58514,Namespace:kube-system,Attempt:0,} returns sandbox id \"d126f8dd4c20e68a92f3d07641e9a98aeea24537dedfcf7b6b8f4cd68f9ea313\"" Dec 12 17:48:13.451277 containerd[1498]: time="2025-12-12T17:48:13.450866293Z" level=info msg="CreateContainer within sandbox \"d126f8dd4c20e68a92f3d07641e9a98aeea24537dedfcf7b6b8f4cd68f9ea313\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 17:48:13.461967 containerd[1498]: time="2025-12-12T17:48:13.461913441Z" level=info msg="Container 704b1f9fc7943ddd81cbf3eb22803c315d771759d22e1483a42945ae87233d2e: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:48:13.469064 containerd[1498]: time="2025-12-12T17:48:13.469016810Z" level=info msg="CreateContainer within sandbox \"d126f8dd4c20e68a92f3d07641e9a98aeea24537dedfcf7b6b8f4cd68f9ea313\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"704b1f9fc7943ddd81cbf3eb22803c315d771759d22e1483a42945ae87233d2e\"" Dec 12 17:48:13.469802 containerd[1498]: time="2025-12-12T17:48:13.469763996Z" level=info msg="StartContainer for \"704b1f9fc7943ddd81cbf3eb22803c315d771759d22e1483a42945ae87233d2e\"" Dec 12 17:48:13.473564 containerd[1498]: time="2025-12-12T17:48:13.473513967Z" level=info msg="connecting to shim 704b1f9fc7943ddd81cbf3eb22803c315d771759d22e1483a42945ae87233d2e" address="unix:///run/containerd/s/4d35c39c5e79dfc402f5d15a29f25c2e309f7690c04b511a1e2401ce1d3e79b6" protocol=ttrpc version=3 Dec 12 17:48:13.497913 systemd[1]: Started cri-containerd-704b1f9fc7943ddd81cbf3eb22803c315d771759d22e1483a42945ae87233d2e.scope - libcontainer container 704b1f9fc7943ddd81cbf3eb22803c315d771759d22e1483a42945ae87233d2e. Dec 12 17:48:13.561494 systemd[1]: Created slice kubepods-besteffort-podae4d98fc_08f6_410f_8548_7d527024a8dd.slice - libcontainer container kubepods-besteffort-podae4d98fc_08f6_410f_8548_7d527024a8dd.slice. 
Dec 12 17:48:13.591513 containerd[1498]: time="2025-12-12T17:48:13.591478743Z" level=info msg="StartContainer for \"704b1f9fc7943ddd81cbf3eb22803c315d771759d22e1483a42945ae87233d2e\" returns successfully" Dec 12 17:48:13.645997 kubelet[2647]: I1212 17:48:13.645888 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ae4d98fc-08f6-410f-8548-7d527024a8dd-var-lib-calico\") pod \"tigera-operator-7dcd859c48-7zkjj\" (UID: \"ae4d98fc-08f6-410f-8548-7d527024a8dd\") " pod="tigera-operator/tigera-operator-7dcd859c48-7zkjj" Dec 12 17:48:13.645997 kubelet[2647]: I1212 17:48:13.645933 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mtvg\" (UniqueName: \"kubernetes.io/projected/ae4d98fc-08f6-410f-8548-7d527024a8dd-kube-api-access-5mtvg\") pod \"tigera-operator-7dcd859c48-7zkjj\" (UID: \"ae4d98fc-08f6-410f-8548-7d527024a8dd\") " pod="tigera-operator/tigera-operator-7dcd859c48-7zkjj" Dec 12 17:48:13.867615 containerd[1498]: time="2025-12-12T17:48:13.867568104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-7zkjj,Uid:ae4d98fc-08f6-410f-8548-7d527024a8dd,Namespace:tigera-operator,Attempt:0,}" Dec 12 17:48:13.883879 containerd[1498]: time="2025-12-12T17:48:13.883835714Z" level=info msg="connecting to shim 0f42b4a37d3cfe145867d98dc3f530edef7b68de0f4a58c14393b4eadf881c73" address="unix:///run/containerd/s/bd920e7f28df18ae488e5c55d8683e097b64e0bf9c131c11b1f82e97d44644a5" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:48:13.908104 systemd[1]: Started cri-containerd-0f42b4a37d3cfe145867d98dc3f530edef7b68de0f4a58c14393b4eadf881c73.scope - libcontainer container 0f42b4a37d3cfe145867d98dc3f530edef7b68de0f4a58c14393b4eadf881c73. Dec 12 17:48:13.946626 containerd[1498]: time="2025-12-12T17:48:13.946587435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-7zkjj,Uid:ae4d98fc-08f6-410f-8548-7d527024a8dd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0f42b4a37d3cfe145867d98dc3f530edef7b68de0f4a58c14393b4eadf881c73\"" Dec 12 17:48:13.948361 containerd[1498]: time="2025-12-12T17:48:13.948328896Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 17:48:15.210985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2081013964.mount: Deactivated successfully. 
Dec 12 17:48:15.511797 containerd[1498]: time="2025-12-12T17:48:15.511099730Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:48:15.512466 containerd[1498]: time="2025-12-12T17:48:15.512410052Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Dec 12 17:48:15.513433 containerd[1498]: time="2025-12-12T17:48:15.513398363Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:48:15.515604 containerd[1498]: time="2025-12-12T17:48:15.515573471Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:48:15.516911 containerd[1498]: time="2025-12-12T17:48:15.516283213Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.567863634s" Dec 12 17:48:15.516911 containerd[1498]: time="2025-12-12T17:48:15.516322655Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 12 17:48:15.520559 containerd[1498]: time="2025-12-12T17:48:15.520520707Z" level=info msg="CreateContainer within sandbox \"0f42b4a37d3cfe145867d98dc3f530edef7b68de0f4a58c14393b4eadf881c73\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 17:48:15.528193 containerd[1498]: time="2025-12-12T17:48:15.528149067Z" level=info msg="Container 79f888508f6d5fdc90f70da47e89c42bcd99bfb933530516a9f7dd8eb8506e8c: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:48:15.538607 containerd[1498]: time="2025-12-12T17:48:15.538550514Z" level=info msg="CreateContainer within sandbox \"0f42b4a37d3cfe145867d98dc3f530edef7b68de0f4a58c14393b4eadf881c73\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"79f888508f6d5fdc90f70da47e89c42bcd99bfb933530516a9f7dd8eb8506e8c\"" Dec 12 17:48:15.540078 containerd[1498]: time="2025-12-12T17:48:15.540039161Z" level=info msg="StartContainer for \"79f888508f6d5fdc90f70da47e89c42bcd99bfb933530516a9f7dd8eb8506e8c\"" Dec 12 17:48:15.541353 containerd[1498]: time="2025-12-12T17:48:15.541322082Z" level=info msg="connecting to shim 79f888508f6d5fdc90f70da47e89c42bcd99bfb933530516a9f7dd8eb8506e8c" address="unix:///run/containerd/s/bd920e7f28df18ae488e5c55d8683e097b64e0bf9c131c11b1f82e97d44644a5" protocol=ttrpc version=3 Dec 12 17:48:15.565946 systemd[1]: Started cri-containerd-79f888508f6d5fdc90f70da47e89c42bcd99bfb933530516a9f7dd8eb8506e8c.scope - libcontainer container 79f888508f6d5fdc90f70da47e89c42bcd99bfb933530516a9f7dd8eb8506e8c. 
Dec 12 17:48:15.597571 containerd[1498]: time="2025-12-12T17:48:15.597528691Z" level=info msg="StartContainer for \"79f888508f6d5fdc90f70da47e89c42bcd99bfb933530516a9f7dd8eb8506e8c\" returns successfully" Dec 12 17:48:16.359366 kubelet[2647]: I1212 17:48:16.359290 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ls9sv" podStartSLOduration=4.359271366 podStartE2EDuration="4.359271366s" podCreationTimestamp="2025-12-12 17:48:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:48:14.355558598 +0000 UTC m=+8.141140772" watchObservedRunningTime="2025-12-12 17:48:16.359271366 +0000 UTC m=+10.144853580" Dec 12 17:48:16.359836 kubelet[2647]: I1212 17:48:16.359389 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-7zkjj" podStartSLOduration=1.789957163 podStartE2EDuration="3.359384929s" podCreationTimestamp="2025-12-12 17:48:13 +0000 UTC" firstStartedPulling="2025-12-12 17:48:13.947937122 +0000 UTC m=+7.733519296" lastFinishedPulling="2025-12-12 17:48:15.517364848 +0000 UTC m=+9.302947062" observedRunningTime="2025-12-12 17:48:16.359374049 +0000 UTC m=+10.144956263" watchObservedRunningTime="2025-12-12 17:48:16.359384929 +0000 UTC m=+10.144967143" Dec 12 17:48:19.152824 sudo[1703]: pam_unix(sudo:session): session closed for user root Dec 12 17:48:19.154750 sshd[1702]: Connection closed by 10.0.0.1 port 43636 Dec 12 17:48:19.154962 sshd-session[1699]: pam_unix(sshd:session): session closed for user core Dec 12 17:48:19.159053 systemd[1]: sshd@6-10.0.0.131:22-10.0.0.1:43636.service: Deactivated successfully. Dec 12 17:48:19.161081 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 17:48:19.161264 systemd[1]: session-7.scope: Consumed 8.241s CPU time, 221.3M memory peak. Dec 12 17:48:19.163017 systemd-logind[1476]: Session 7 logged out. Waiting for processes to exit. Dec 12 17:48:19.164845 systemd-logind[1476]: Removed session 7. Dec 12 17:48:20.468799 update_engine[1482]: I20251212 17:48:20.468739 1482 update_attempter.cc:509] Updating boot flags... Dec 12 17:48:27.478466 systemd[1]: Created slice kubepods-besteffort-podf099607a_bfb5_4d3f_8ddd_959d866cf257.slice - libcontainer container kubepods-besteffort-podf099607a_bfb5_4d3f_8ddd_959d866cf257.slice. 
Dec 12 17:48:27.533674 kubelet[2647]: I1212 17:48:27.533603 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f099607a-bfb5-4d3f-8ddd-959d866cf257-typha-certs\") pod \"calico-typha-57d9688f55-vm2tn\" (UID: \"f099607a-bfb5-4d3f-8ddd-959d866cf257\") " pod="calico-system/calico-typha-57d9688f55-vm2tn" Dec 12 17:48:27.533674 kubelet[2647]: I1212 17:48:27.533652 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f099607a-bfb5-4d3f-8ddd-959d866cf257-tigera-ca-bundle\") pod \"calico-typha-57d9688f55-vm2tn\" (UID: \"f099607a-bfb5-4d3f-8ddd-959d866cf257\") " pod="calico-system/calico-typha-57d9688f55-vm2tn" Dec 12 17:48:27.533674 kubelet[2647]: I1212 17:48:27.533673 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g2pp\" (UniqueName: \"kubernetes.io/projected/f099607a-bfb5-4d3f-8ddd-959d866cf257-kube-api-access-6g2pp\") pod \"calico-typha-57d9688f55-vm2tn\" (UID: \"f099607a-bfb5-4d3f-8ddd-959d866cf257\") " pod="calico-system/calico-typha-57d9688f55-vm2tn" Dec 12 17:48:27.707736 systemd[1]: Created slice kubepods-besteffort-pod7ca05128_e867_46fe_b16a_ddd83f27e928.slice - libcontainer container kubepods-besteffort-pod7ca05128_e867_46fe_b16a_ddd83f27e928.slice. Dec 12 17:48:27.736120 kubelet[2647]: I1212 17:48:27.735684 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7ca05128-e867-46fe-b16a-ddd83f27e928-cni-bin-dir\") pod \"calico-node-dhjrk\" (UID: \"7ca05128-e867-46fe-b16a-ddd83f27e928\") " pod="calico-system/calico-node-dhjrk" Dec 12 17:48:27.736120 kubelet[2647]: I1212 17:48:27.735735 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7ca05128-e867-46fe-b16a-ddd83f27e928-cni-log-dir\") pod \"calico-node-dhjrk\" (UID: \"7ca05128-e867-46fe-b16a-ddd83f27e928\") " pod="calico-system/calico-node-dhjrk" Dec 12 17:48:27.736120 kubelet[2647]: I1212 17:48:27.735751 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7ca05128-e867-46fe-b16a-ddd83f27e928-cni-net-dir\") pod \"calico-node-dhjrk\" (UID: \"7ca05128-e867-46fe-b16a-ddd83f27e928\") " pod="calico-system/calico-node-dhjrk" Dec 12 17:48:27.736120 kubelet[2647]: I1212 17:48:27.735770 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7ca05128-e867-46fe-b16a-ddd83f27e928-var-lib-calico\") pod \"calico-node-dhjrk\" (UID: \"7ca05128-e867-46fe-b16a-ddd83f27e928\") " pod="calico-system/calico-node-dhjrk" Dec 12 17:48:27.736120 kubelet[2647]: I1212 17:48:27.735818 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7ca05128-e867-46fe-b16a-ddd83f27e928-var-run-calico\") pod \"calico-node-dhjrk\" (UID: \"7ca05128-e867-46fe-b16a-ddd83f27e928\") " pod="calico-system/calico-node-dhjrk" Dec 12 17:48:27.736321 kubelet[2647]: I1212 17:48:27.735837 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/7ca05128-e867-46fe-b16a-ddd83f27e928-policysync\") pod \"calico-node-dhjrk\" (UID: \"7ca05128-e867-46fe-b16a-ddd83f27e928\") " pod="calico-system/calico-node-dhjrk" Dec 12 17:48:27.736321 kubelet[2647]: I1212 17:48:27.735852 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ca05128-e867-46fe-b16a-ddd83f27e928-tigera-ca-bundle\") pod \"calico-node-dhjrk\" (UID: \"7ca05128-e867-46fe-b16a-ddd83f27e928\") " pod="calico-system/calico-node-dhjrk" Dec 12 17:48:27.736321 kubelet[2647]: I1212 17:48:27.735866 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvttw\" (UniqueName: \"kubernetes.io/projected/7ca05128-e867-46fe-b16a-ddd83f27e928-kube-api-access-tvttw\") pod \"calico-node-dhjrk\" (UID: \"7ca05128-e867-46fe-b16a-ddd83f27e928\") " pod="calico-system/calico-node-dhjrk" Dec 12 17:48:27.736321 kubelet[2647]: I1212 17:48:27.735884 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ca05128-e867-46fe-b16a-ddd83f27e928-xtables-lock\") pod \"calico-node-dhjrk\" (UID: \"7ca05128-e867-46fe-b16a-ddd83f27e928\") " pod="calico-system/calico-node-dhjrk" Dec 12 17:48:27.736321 kubelet[2647]: I1212 17:48:27.735904 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ca05128-e867-46fe-b16a-ddd83f27e928-lib-modules\") pod \"calico-node-dhjrk\" (UID: \"7ca05128-e867-46fe-b16a-ddd83f27e928\") " pod="calico-system/calico-node-dhjrk" Dec 12 17:48:27.736418 kubelet[2647]: I1212 17:48:27.735918 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7ca05128-e867-46fe-b16a-ddd83f27e928-node-certs\") pod \"calico-node-dhjrk\" (UID: \"7ca05128-e867-46fe-b16a-ddd83f27e928\") " pod="calico-system/calico-node-dhjrk" Dec 12 17:48:27.736418 kubelet[2647]: I1212 17:48:27.735934 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7ca05128-e867-46fe-b16a-ddd83f27e928-flexvol-driver-host\") pod \"calico-node-dhjrk\" (UID: \"7ca05128-e867-46fe-b16a-ddd83f27e928\") " pod="calico-system/calico-node-dhjrk" Dec 12 17:48:27.783099 containerd[1498]: time="2025-12-12T17:48:27.783048844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57d9688f55-vm2tn,Uid:f099607a-bfb5-4d3f-8ddd-959d866cf257,Namespace:calico-system,Attempt:0,}" Dec 12 17:48:27.815151 containerd[1498]: time="2025-12-12T17:48:27.815093884Z" level=info msg="connecting to shim f0c51c5c61a548ba927effbe3895d19ace95d099ca1da935f5a36675a40eab12" address="unix:///run/containerd/s/3453a583db53f029b6a20a3c12f555657695ac43bfdb73132aefa166cd431034" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:48:27.878009 systemd[1]: Started cri-containerd-f0c51c5c61a548ba927effbe3895d19ace95d099ca1da935f5a36675a40eab12.scope - libcontainer container f0c51c5c61a548ba927effbe3895d19ace95d099ca1da935f5a36675a40eab12. 
Dec 12 17:48:27.904448 kubelet[2647]: E1212 17:48:27.904400 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pjknm" podUID="d3e06765-0219-4a57-abc4-db29c031f701" Dec 12 17:48:27.922421 kubelet[2647]: E1212 17:48:27.922065 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.922421 kubelet[2647]: W1212 17:48:27.922418 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.924184 kubelet[2647]: E1212 17:48:27.924154 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.924592 kubelet[2647]: E1212 17:48:27.924556 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.930524 kubelet[2647]: W1212 17:48:27.924572 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.930524 kubelet[2647]: E1212 17:48:27.930523 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.930825 kubelet[2647]: E1212 17:48:27.930807 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.930825 kubelet[2647]: W1212 17:48:27.930821 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.930898 kubelet[2647]: E1212 17:48:27.930834 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.931514 kubelet[2647]: E1212 17:48:27.931481 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.931514 kubelet[2647]: W1212 17:48:27.931497 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.931514 kubelet[2647]: E1212 17:48:27.931511 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:48:27.932295 kubelet[2647]: E1212 17:48:27.932271 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.932295 kubelet[2647]: W1212 17:48:27.932289 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.932387 kubelet[2647]: E1212 17:48:27.932303 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.940086 kubelet[2647]: E1212 17:48:27.940055 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.940086 kubelet[2647]: W1212 17:48:27.940080 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.940189 kubelet[2647]: E1212 17:48:27.940097 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.940869 kubelet[2647]: E1212 17:48:27.940826 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.940869 kubelet[2647]: W1212 17:48:27.940845 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.940869 kubelet[2647]: E1212 17:48:27.940858 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.941050 kubelet[2647]: E1212 17:48:27.941020 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.941050 kubelet[2647]: W1212 17:48:27.941032 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.941050 kubelet[2647]: E1212 17:48:27.941041 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.941497 kubelet[2647]: E1212 17:48:27.941478 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.941497 kubelet[2647]: W1212 17:48:27.941491 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.941574 kubelet[2647]: E1212 17:48:27.941504 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:48:27.941691 kubelet[2647]: E1212 17:48:27.941635 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.941691 kubelet[2647]: W1212 17:48:27.941651 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.941691 kubelet[2647]: E1212 17:48:27.941660 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.941885 kubelet[2647]: E1212 17:48:27.941792 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.941885 kubelet[2647]: W1212 17:48:27.941800 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.941885 kubelet[2647]: E1212 17:48:27.941808 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.942633 kubelet[2647]: E1212 17:48:27.942616 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.942665 kubelet[2647]: W1212 17:48:27.942634 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.942665 kubelet[2647]: E1212 17:48:27.942647 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.942911 kubelet[2647]: E1212 17:48:27.942899 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.942956 kubelet[2647]: W1212 17:48:27.942911 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.942956 kubelet[2647]: E1212 17:48:27.942921 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.943061 kubelet[2647]: E1212 17:48:27.943051 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.943061 kubelet[2647]: W1212 17:48:27.943061 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.943119 kubelet[2647]: E1212 17:48:27.943076 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:48:27.943265 kubelet[2647]: E1212 17:48:27.943255 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.943265 kubelet[2647]: W1212 17:48:27.943265 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.943323 kubelet[2647]: E1212 17:48:27.943274 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.943535 kubelet[2647]: E1212 17:48:27.943525 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.943535 kubelet[2647]: W1212 17:48:27.943534 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.943584 kubelet[2647]: E1212 17:48:27.943542 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.943732 kubelet[2647]: E1212 17:48:27.943720 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.943732 kubelet[2647]: W1212 17:48:27.943731 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.943792 kubelet[2647]: E1212 17:48:27.943740 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.943959 kubelet[2647]: E1212 17:48:27.943947 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.943959 kubelet[2647]: W1212 17:48:27.943958 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.944139 kubelet[2647]: E1212 17:48:27.943967 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.944139 kubelet[2647]: E1212 17:48:27.944111 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.944139 kubelet[2647]: W1212 17:48:27.944119 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.944139 kubelet[2647]: E1212 17:48:27.944128 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:48:27.944337 kubelet[2647]: E1212 17:48:27.944247 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.944337 kubelet[2647]: W1212 17:48:27.944254 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.944337 kubelet[2647]: E1212 17:48:27.944262 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.944670 kubelet[2647]: E1212 17:48:27.944655 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.944670 kubelet[2647]: W1212 17:48:27.944670 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.944791 kubelet[2647]: E1212 17:48:27.944681 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.944850 kubelet[2647]: I1212 17:48:27.944825 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d3e06765-0219-4a57-abc4-db29c031f701-registration-dir\") pod \"csi-node-driver-pjknm\" (UID: \"d3e06765-0219-4a57-abc4-db29c031f701\") " pod="calico-system/csi-node-driver-pjknm" Dec 12 17:48:27.945447 kubelet[2647]: E1212 17:48:27.945433 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.945447 kubelet[2647]: W1212 17:48:27.945445 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.945611 kubelet[2647]: E1212 17:48:27.945455 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.945726 kubelet[2647]: E1212 17:48:27.945698 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.945726 kubelet[2647]: W1212 17:48:27.945720 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.945791 kubelet[2647]: E1212 17:48:27.945729 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:48:27.945915 kubelet[2647]: E1212 17:48:27.945887 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.945915 kubelet[2647]: W1212 17:48:27.945900 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.945915 kubelet[2647]: E1212 17:48:27.945909 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.946111 kubelet[2647]: I1212 17:48:27.945938 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d3e06765-0219-4a57-abc4-db29c031f701-socket-dir\") pod \"csi-node-driver-pjknm\" (UID: \"d3e06765-0219-4a57-abc4-db29c031f701\") " pod="calico-system/csi-node-driver-pjknm" Dec 12 17:48:27.946111 kubelet[2647]: E1212 17:48:27.946104 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.946155 kubelet[2647]: W1212 17:48:27.946113 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.946155 kubelet[2647]: E1212 17:48:27.946123 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.946197 kubelet[2647]: I1212 17:48:27.946160 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d3e06765-0219-4a57-abc4-db29c031f701-varrun\") pod \"csi-node-driver-pjknm\" (UID: \"d3e06765-0219-4a57-abc4-db29c031f701\") " pod="calico-system/csi-node-driver-pjknm" Dec 12 17:48:27.946396 containerd[1498]: time="2025-12-12T17:48:27.946358454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57d9688f55-vm2tn,Uid:f099607a-bfb5-4d3f-8ddd-959d866cf257,Namespace:calico-system,Attempt:0,} returns sandbox id \"f0c51c5c61a548ba927effbe3895d19ace95d099ca1da935f5a36675a40eab12\"" Dec 12 17:48:27.946688 kubelet[2647]: E1212 17:48:27.946672 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.946688 kubelet[2647]: W1212 17:48:27.946687 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.946796 kubelet[2647]: E1212 17:48:27.946699 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:48:27.947166 kubelet[2647]: I1212 17:48:27.947150 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xb4t\" (UniqueName: \"kubernetes.io/projected/d3e06765-0219-4a57-abc4-db29c031f701-kube-api-access-2xb4t\") pod \"csi-node-driver-pjknm\" (UID: \"d3e06765-0219-4a57-abc4-db29c031f701\") " pod="calico-system/csi-node-driver-pjknm" Dec 12 17:48:27.947511 kubelet[2647]: E1212 17:48:27.947492 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.947511 kubelet[2647]: W1212 17:48:27.947509 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.947574 kubelet[2647]: E1212 17:48:27.947521 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.947650 kubelet[2647]: I1212 17:48:27.947631 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d3e06765-0219-4a57-abc4-db29c031f701-kubelet-dir\") pod \"csi-node-driver-pjknm\" (UID: \"d3e06765-0219-4a57-abc4-db29c031f701\") " pod="calico-system/csi-node-driver-pjknm" Dec 12 17:48:27.948208 kubelet[2647]: E1212 17:48:27.948193 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.948208 kubelet[2647]: W1212 17:48:27.948207 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.948282 kubelet[2647]: E1212 17:48:27.948217 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.948871 kubelet[2647]: E1212 17:48:27.948844 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.948871 kubelet[2647]: W1212 17:48:27.948859 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.948871 kubelet[2647]: E1212 17:48:27.948870 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.949061 kubelet[2647]: E1212 17:48:27.949045 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.949061 kubelet[2647]: W1212 17:48:27.949058 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.949137 kubelet[2647]: E1212 17:48:27.949075 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:48:27.950145 kubelet[2647]: E1212 17:48:27.950127 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.950145 kubelet[2647]: W1212 17:48:27.950144 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.950231 kubelet[2647]: E1212 17:48:27.950158 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.950695 kubelet[2647]: E1212 17:48:27.950681 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.950695 kubelet[2647]: W1212 17:48:27.950694 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.950791 kubelet[2647]: E1212 17:48:27.950705 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.951523 kubelet[2647]: E1212 17:48:27.951488 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.951523 kubelet[2647]: W1212 17:48:27.951508 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.951669 kubelet[2647]: E1212 17:48:27.951541 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.952020 kubelet[2647]: E1212 17:48:27.952005 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.952020 kubelet[2647]: W1212 17:48:27.952019 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.952106 kubelet[2647]: E1212 17:48:27.952030 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:27.952419 kubelet[2647]: E1212 17:48:27.952404 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:27.952419 kubelet[2647]: W1212 17:48:27.952418 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:27.952484 kubelet[2647]: E1212 17:48:27.952431 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:48:27.953909 containerd[1498]: time="2025-12-12T17:48:27.953881345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 17:48:28.011608 containerd[1498]: time="2025-12-12T17:48:28.011507503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dhjrk,Uid:7ca05128-e867-46fe-b16a-ddd83f27e928,Namespace:calico-system,Attempt:0,}" Dec 12 17:48:28.032941 containerd[1498]: time="2025-12-12T17:48:28.032898901Z" level=info msg="connecting to shim ae420d9e1e94b74ff73efa4ea92f878b42d233c5b2b375a4a290ca0b4cb69448" address="unix:///run/containerd/s/52a9deabbeae5b791726adcc32ea1fe5e709d9d715be91a37cb2343dc0b81804" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:48:28.048727 kubelet[2647]: E1212 17:48:28.048597 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.048727 kubelet[2647]: W1212 17:48:28.048621 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.048727 kubelet[2647]: E1212 17:48:28.048640 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.049112 kubelet[2647]: E1212 17:48:28.049039 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.049112 kubelet[2647]: W1212 17:48:28.049051 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.049112 kubelet[2647]: E1212 17:48:28.049062 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.049305 kubelet[2647]: E1212 17:48:28.049286 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.049340 kubelet[2647]: W1212 17:48:28.049305 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.049340 kubelet[2647]: E1212 17:48:28.049319 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.049531 kubelet[2647]: E1212 17:48:28.049518 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.049531 kubelet[2647]: W1212 17:48:28.049528 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.049612 kubelet[2647]: E1212 17:48:28.049538 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:48:28.049679 kubelet[2647]: E1212 17:48:28.049669 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.049679 kubelet[2647]: W1212 17:48:28.049678 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.049746 kubelet[2647]: E1212 17:48:28.049687 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.049914 kubelet[2647]: E1212 17:48:28.049897 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.049957 kubelet[2647]: W1212 17:48:28.049915 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.049957 kubelet[2647]: E1212 17:48:28.049928 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.050140 kubelet[2647]: E1212 17:48:28.050129 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.050140 kubelet[2647]: W1212 17:48:28.050141 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.050198 kubelet[2647]: E1212 17:48:28.050150 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.050346 kubelet[2647]: E1212 17:48:28.050334 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.050387 kubelet[2647]: W1212 17:48:28.050347 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.050387 kubelet[2647]: E1212 17:48:28.050359 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.050721 kubelet[2647]: E1212 17:48:28.050694 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.050766 kubelet[2647]: W1212 17:48:28.050727 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.050766 kubelet[2647]: E1212 17:48:28.050740 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:48:28.050899 kubelet[2647]: E1212 17:48:28.050887 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.050937 kubelet[2647]: W1212 17:48:28.050899 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.050937 kubelet[2647]: E1212 17:48:28.050909 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.051104 kubelet[2647]: E1212 17:48:28.051093 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.051104 kubelet[2647]: W1212 17:48:28.051104 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.051195 kubelet[2647]: E1212 17:48:28.051113 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.051303 kubelet[2647]: E1212 17:48:28.051292 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.051339 kubelet[2647]: W1212 17:48:28.051304 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.051339 kubelet[2647]: E1212 17:48:28.051315 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.051495 kubelet[2647]: E1212 17:48:28.051485 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.051525 kubelet[2647]: W1212 17:48:28.051495 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.051525 kubelet[2647]: E1212 17:48:28.051504 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.051640 kubelet[2647]: E1212 17:48:28.051631 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.051640 kubelet[2647]: W1212 17:48:28.051640 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.051690 kubelet[2647]: E1212 17:48:28.051650 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:48:28.051825 kubelet[2647]: E1212 17:48:28.051813 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.051825 kubelet[2647]: W1212 17:48:28.051825 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.051869 kubelet[2647]: E1212 17:48:28.051834 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.051970 kubelet[2647]: E1212 17:48:28.051961 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.051970 kubelet[2647]: W1212 17:48:28.051970 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.052021 kubelet[2647]: E1212 17:48:28.051979 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.052103 kubelet[2647]: E1212 17:48:28.052090 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.052103 kubelet[2647]: W1212 17:48:28.052100 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.052103 kubelet[2647]: E1212 17:48:28.052108 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.052375 kubelet[2647]: E1212 17:48:28.052361 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.052434 kubelet[2647]: W1212 17:48:28.052422 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.052483 kubelet[2647]: E1212 17:48:28.052473 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.052681 kubelet[2647]: E1212 17:48:28.052669 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.052788 kubelet[2647]: W1212 17:48:28.052760 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.052788 kubelet[2647]: E1212 17:48:28.052777 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:48:28.053060 kubelet[2647]: E1212 17:48:28.053012 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.053060 kubelet[2647]: W1212 17:48:28.053023 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.053060 kubelet[2647]: E1212 17:48:28.053034 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.053426 kubelet[2647]: E1212 17:48:28.053300 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.053426 kubelet[2647]: W1212 17:48:28.053313 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.053426 kubelet[2647]: E1212 17:48:28.053324 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.053587 kubelet[2647]: E1212 17:48:28.053574 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.053637 kubelet[2647]: W1212 17:48:28.053626 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.053685 kubelet[2647]: E1212 17:48:28.053674 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.054030 kubelet[2647]: E1212 17:48:28.053921 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.054030 kubelet[2647]: W1212 17:48:28.053933 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.054030 kubelet[2647]: E1212 17:48:28.053943 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.054202 kubelet[2647]: E1212 17:48:28.054179 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.054202 kubelet[2647]: W1212 17:48:28.054193 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.054202 kubelet[2647]: E1212 17:48:28.054204 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:48:28.054364 kubelet[2647]: E1212 17:48:28.054349 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.054364 kubelet[2647]: W1212 17:48:28.054359 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.054417 kubelet[2647]: E1212 17:48:28.054368 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.071205 kubelet[2647]: E1212 17:48:28.071137 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:28.071205 kubelet[2647]: W1212 17:48:28.071155 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:28.071205 kubelet[2647]: E1212 17:48:28.071173 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:28.071893 systemd[1]: Started cri-containerd-ae420d9e1e94b74ff73efa4ea92f878b42d233c5b2b375a4a290ca0b4cb69448.scope - libcontainer container ae420d9e1e94b74ff73efa4ea92f878b42d233c5b2b375a4a290ca0b4cb69448. Dec 12 17:48:28.099848 containerd[1498]: time="2025-12-12T17:48:28.099812338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dhjrk,Uid:7ca05128-e867-46fe-b16a-ddd83f27e928,Namespace:calico-system,Attempt:0,} returns sandbox id \"ae420d9e1e94b74ff73efa4ea92f878b42d233c5b2b375a4a290ca0b4cb69448\"" Dec 12 17:48:28.935031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount189350652.mount: Deactivated successfully. 
Dec 12 17:48:29.309075 kubelet[2647]: E1212 17:48:29.307973 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pjknm" podUID="d3e06765-0219-4a57-abc4-db29c031f701" Dec 12 17:48:29.512423 containerd[1498]: time="2025-12-12T17:48:29.512373807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:48:29.513292 containerd[1498]: time="2025-12-12T17:48:29.513082658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Dec 12 17:48:29.514141 containerd[1498]: time="2025-12-12T17:48:29.514107075Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:48:29.516178 containerd[1498]: time="2025-12-12T17:48:29.516141307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:48:29.516901 containerd[1498]: time="2025-12-12T17:48:29.516874639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.562956373s" Dec 12 17:48:29.517035 containerd[1498]: time="2025-12-12T17:48:29.517017841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 12 17:48:29.531074 containerd[1498]: time="2025-12-12T17:48:29.531034306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 17:48:29.544244 containerd[1498]: time="2025-12-12T17:48:29.544201436Z" level=info msg="CreateContainer within sandbox \"f0c51c5c61a548ba927effbe3895d19ace95d099ca1da935f5a36675a40eab12\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 17:48:29.556003 containerd[1498]: time="2025-12-12T17:48:29.555119651Z" level=info msg="Container b75a97b276854fb3c438c090b895a09fff4278ac621658a6d9799594baaaccad: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:48:29.561118 containerd[1498]: time="2025-12-12T17:48:29.561021345Z" level=info msg="CreateContainer within sandbox \"f0c51c5c61a548ba927effbe3895d19ace95d099ca1da935f5a36675a40eab12\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b75a97b276854fb3c438c090b895a09fff4278ac621658a6d9799594baaaccad\"" Dec 12 17:48:29.562033 containerd[1498]: time="2025-12-12T17:48:29.561999121Z" level=info msg="StartContainer for \"b75a97b276854fb3c438c090b895a09fff4278ac621658a6d9799594baaaccad\"" Dec 12 17:48:29.563108 containerd[1498]: time="2025-12-12T17:48:29.563066178Z" level=info msg="connecting to shim b75a97b276854fb3c438c090b895a09fff4278ac621658a6d9799594baaaccad" address="unix:///run/containerd/s/3453a583db53f029b6a20a3c12f555657695ac43bfdb73132aefa166cd431034" protocol=ttrpc version=3 Dec 12 17:48:29.583904 systemd[1]: Started 
cri-containerd-b75a97b276854fb3c438c090b895a09fff4278ac621658a6d9799594baaaccad.scope - libcontainer container b75a97b276854fb3c438c090b895a09fff4278ac621658a6d9799594baaaccad. Dec 12 17:48:29.619549 containerd[1498]: time="2025-12-12T17:48:29.619502721Z" level=info msg="StartContainer for \"b75a97b276854fb3c438c090b895a09fff4278ac621658a6d9799594baaaccad\" returns successfully" Dec 12 17:48:30.427746 containerd[1498]: time="2025-12-12T17:48:30.427683167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:48:30.428341 containerd[1498]: time="2025-12-12T17:48:30.428296417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Dec 12 17:48:30.429033 containerd[1498]: time="2025-12-12T17:48:30.429011188Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:48:30.430961 containerd[1498]: time="2025-12-12T17:48:30.430920217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:48:30.431722 containerd[1498]: time="2025-12-12T17:48:30.431673109Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 900.600163ms" Dec 12 17:48:30.431765 containerd[1498]: time="2025-12-12T17:48:30.431738230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 12 17:48:30.436579 containerd[1498]: time="2025-12-12T17:48:30.436150577Z" level=info msg="CreateContainer within sandbox \"ae420d9e1e94b74ff73efa4ea92f878b42d233c5b2b375a4a290ca0b4cb69448\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 17:48:30.444547 containerd[1498]: time="2025-12-12T17:48:30.444352503Z" level=info msg="Container 30c8696f09ab55b4cecfe01ae3138738e35c6a3daa9ca7fbe0515f3f5ce18c23: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:48:30.453177 containerd[1498]: time="2025-12-12T17:48:30.453140078Z" level=info msg="CreateContainer within sandbox \"ae420d9e1e94b74ff73efa4ea92f878b42d233c5b2b375a4a290ca0b4cb69448\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"30c8696f09ab55b4cecfe01ae3138738e35c6a3daa9ca7fbe0515f3f5ce18c23\"" Dec 12 17:48:30.453750 containerd[1498]: time="2025-12-12T17:48:30.453723367Z" level=info msg="StartContainer for \"30c8696f09ab55b4cecfe01ae3138738e35c6a3daa9ca7fbe0515f3f5ce18c23\"" Dec 12 17:48:30.455063 containerd[1498]: time="2025-12-12T17:48:30.455035827Z" level=info msg="connecting to shim 30c8696f09ab55b4cecfe01ae3138738e35c6a3daa9ca7fbe0515f3f5ce18c23" address="unix:///run/containerd/s/52a9deabbeae5b791726adcc32ea1fe5e709d9d715be91a37cb2343dc0b81804" protocol=ttrpc version=3 Dec 12 17:48:30.460460 kubelet[2647]: E1212 17:48:30.460437 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: 
"", error: unexpected end of JSON input Dec 12 17:48:30.460812 kubelet[2647]: W1212 17:48:30.460458 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.460812 kubelet[2647]: E1212 17:48:30.460493 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.460812 kubelet[2647]: E1212 17:48:30.460729 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.460812 kubelet[2647]: W1212 17:48:30.460740 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.460812 kubelet[2647]: E1212 17:48:30.460793 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.460991 kubelet[2647]: E1212 17:48:30.460976 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.460991 kubelet[2647]: W1212 17:48:30.460985 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.461031 kubelet[2647]: E1212 17:48:30.460994 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.461217 kubelet[2647]: E1212 17:48:30.461201 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.461217 kubelet[2647]: W1212 17:48:30.461216 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.461272 kubelet[2647]: E1212 17:48:30.461226 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.461416 kubelet[2647]: E1212 17:48:30.461402 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.461416 kubelet[2647]: W1212 17:48:30.461414 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.461463 kubelet[2647]: E1212 17:48:30.461425 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:48:30.461592 kubelet[2647]: E1212 17:48:30.461580 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.461592 kubelet[2647]: W1212 17:48:30.461592 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.461647 kubelet[2647]: E1212 17:48:30.461601 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.461793 kubelet[2647]: E1212 17:48:30.461778 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.461839 kubelet[2647]: W1212 17:48:30.461792 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.461839 kubelet[2647]: E1212 17:48:30.461821 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.462032 kubelet[2647]: E1212 17:48:30.462016 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.462032 kubelet[2647]: W1212 17:48:30.462030 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.462090 kubelet[2647]: E1212 17:48:30.462039 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.462263 kubelet[2647]: E1212 17:48:30.462227 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.462296 kubelet[2647]: W1212 17:48:30.462264 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.462296 kubelet[2647]: E1212 17:48:30.462279 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.462451 kubelet[2647]: E1212 17:48:30.462438 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.462481 kubelet[2647]: W1212 17:48:30.462451 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.462481 kubelet[2647]: E1212 17:48:30.462460 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:48:30.462594 kubelet[2647]: E1212 17:48:30.462582 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.462622 kubelet[2647]: W1212 17:48:30.462593 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.462622 kubelet[2647]: E1212 17:48:30.462616 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.462846 kubelet[2647]: E1212 17:48:30.462832 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.462846 kubelet[2647]: W1212 17:48:30.462844 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.462924 kubelet[2647]: E1212 17:48:30.462855 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.463041 kubelet[2647]: E1212 17:48:30.463028 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.463072 kubelet[2647]: W1212 17:48:30.463051 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.463072 kubelet[2647]: E1212 17:48:30.463061 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.463242 kubelet[2647]: E1212 17:48:30.463231 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.463271 kubelet[2647]: W1212 17:48:30.463242 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.463271 kubelet[2647]: E1212 17:48:30.463252 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.463441 kubelet[2647]: E1212 17:48:30.463427 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.463477 kubelet[2647]: W1212 17:48:30.463439 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.463477 kubelet[2647]: E1212 17:48:30.463453 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:48:30.469780 kubelet[2647]: E1212 17:48:30.469761 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.469780 kubelet[2647]: W1212 17:48:30.469777 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.469886 kubelet[2647]: E1212 17:48:30.469791 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.470104 kubelet[2647]: E1212 17:48:30.469961 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.470104 kubelet[2647]: W1212 17:48:30.469980 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.470104 kubelet[2647]: E1212 17:48:30.469989 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.470396 kubelet[2647]: E1212 17:48:30.470160 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.470396 kubelet[2647]: W1212 17:48:30.470169 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.470396 kubelet[2647]: E1212 17:48:30.470177 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.470810 kubelet[2647]: E1212 17:48:30.470669 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.470810 kubelet[2647]: W1212 17:48:30.470684 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.470810 kubelet[2647]: E1212 17:48:30.470698 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.471196 kubelet[2647]: E1212 17:48:30.471057 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.471196 kubelet[2647]: W1212 17:48:30.471073 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.471196 kubelet[2647]: E1212 17:48:30.471087 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:48:30.471458 kubelet[2647]: E1212 17:48:30.471443 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.471530 kubelet[2647]: W1212 17:48:30.471517 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.471705 kubelet[2647]: E1212 17:48:30.471572 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.471930 kubelet[2647]: E1212 17:48:30.471914 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.472230 kubelet[2647]: W1212 17:48:30.472077 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.472230 kubelet[2647]: E1212 17:48:30.472106 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.472381 kubelet[2647]: E1212 17:48:30.472367 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.472439 kubelet[2647]: W1212 17:48:30.472427 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.472493 kubelet[2647]: E1212 17:48:30.472483 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.472836 kubelet[2647]: E1212 17:48:30.472728 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.472836 kubelet[2647]: W1212 17:48:30.472741 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.472836 kubelet[2647]: E1212 17:48:30.472752 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.473061 kubelet[2647]: E1212 17:48:30.473046 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.473260 kubelet[2647]: W1212 17:48:30.473120 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.473260 kubelet[2647]: E1212 17:48:30.473136 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:48:30.473616 kubelet[2647]: E1212 17:48:30.473476 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.473616 kubelet[2647]: W1212 17:48:30.473506 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.473616 kubelet[2647]: E1212 17:48:30.473519 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.473817 kubelet[2647]: E1212 17:48:30.473802 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.474026 kubelet[2647]: W1212 17:48:30.473896 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.474026 kubelet[2647]: E1212 17:48:30.473914 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.474270 kubelet[2647]: E1212 17:48:30.474258 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.474349 kubelet[2647]: W1212 17:48:30.474324 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.474501 kubelet[2647]: E1212 17:48:30.474402 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.474624 kubelet[2647]: E1212 17:48:30.474609 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.474854 kubelet[2647]: W1212 17:48:30.474731 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.474854 kubelet[2647]: E1212 17:48:30.474757 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.475072 kubelet[2647]: E1212 17:48:30.475053 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.475155 kubelet[2647]: W1212 17:48:30.475142 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.475218 kubelet[2647]: E1212 17:48:30.475207 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:48:30.475545 kubelet[2647]: E1212 17:48:30.475528 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.475852 kubelet[2647]: W1212 17:48:30.475616 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.475852 kubelet[2647]: E1212 17:48:30.475647 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.475935 kubelet[2647]: E1212 17:48:30.475867 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.475935 kubelet[2647]: W1212 17:48:30.475880 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.475935 kubelet[2647]: E1212 17:48:30.475891 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.476138 kubelet[2647]: E1212 17:48:30.476029 2647 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:48:30.476138 kubelet[2647]: W1212 17:48:30.476040 2647 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:48:30.476138 kubelet[2647]: E1212 17:48:30.476049 2647 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:48:30.477866 systemd[1]: Started cri-containerd-30c8696f09ab55b4cecfe01ae3138738e35c6a3daa9ca7fbe0515f3f5ce18c23.scope - libcontainer container 30c8696f09ab55b4cecfe01ae3138738e35c6a3daa9ca7fbe0515f3f5ce18c23. Dec 12 17:48:30.556775 containerd[1498]: time="2025-12-12T17:48:30.556622465Z" level=info msg="StartContainer for \"30c8696f09ab55b4cecfe01ae3138738e35c6a3daa9ca7fbe0515f3f5ce18c23\" returns successfully" Dec 12 17:48:30.564091 systemd[1]: cri-containerd-30c8696f09ab55b4cecfe01ae3138738e35c6a3daa9ca7fbe0515f3f5ce18c23.scope: Deactivated successfully. Dec 12 17:48:30.592512 containerd[1498]: time="2025-12-12T17:48:30.592363653Z" level=info msg="received container exit event container_id:\"30c8696f09ab55b4cecfe01ae3138738e35c6a3daa9ca7fbe0515f3f5ce18c23\" id:\"30c8696f09ab55b4cecfe01ae3138738e35c6a3daa9ca7fbe0515f3f5ce18c23\" pid:3352 exited_at:{seconds:1765561710 nanos:586770847}" Dec 12 17:48:30.634139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30c8696f09ab55b4cecfe01ae3138738e35c6a3daa9ca7fbe0515f3f5ce18c23-rootfs.mount: Deactivated successfully. 
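
The kubelet burst above is FlexVolume dynamic probing: on each pass over the plugin directory, kubelet execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and tries to parse its stdout as JSON. The binary is missing, so stdout stays empty, and unmarshalling an empty string is what produces the repeated "unexpected end of JSON input". Below is a minimal Go sketch of that failure mode; probeDriver and the driverStatus fields are illustrative names, not kubelet's actual code.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the shape of JSON a FlexVolume driver is expected to
// print on stdout for the "init" call; the field names are illustrative.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// probeDriver imitates the dynamic-probe step: exec "<driver> init" and parse
// whatever lands on stdout as JSON.
func probeDriver(path string) error {
	out, _ := exec.Command(path, "init").Output() // binary missing => out stays empty
	var st driverStatus
	// Unmarshalling the empty output is exactly what yields
	// "unexpected end of JSON input" in the entries above.
	if err := json.Unmarshal(out, &st); err != nil {
		return fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %v", out, err)
	}
	return nil
}

func main() {
	if err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"); err != nil {
		fmt.Println(err)
	}
}
```

Removing the stale nodeagent~uds directory from the plugin path, or installing a driver there that answers init with a JSON status, would presumably quiet this loop.
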
Dec 12 17:48:31.307524 kubelet[2647]: E1212 17:48:31.307423 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pjknm" podUID="d3e06765-0219-4a57-abc4-db29c031f701" Dec 12 17:48:31.386539 kubelet[2647]: I1212 17:48:31.386513 2647 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:48:31.387396 containerd[1498]: time="2025-12-12T17:48:31.387278087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 17:48:31.401529 kubelet[2647]: I1212 17:48:31.401340 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-57d9688f55-vm2tn" podStartSLOduration=2.82175849 podStartE2EDuration="4.401325533s" podCreationTimestamp="2025-12-12 17:48:27 +0000 UTC" firstStartedPulling="2025-12-12 17:48:27.95129726 +0000 UTC m=+21.736879474" lastFinishedPulling="2025-12-12 17:48:29.530864303 +0000 UTC m=+23.316446517" observedRunningTime="2025-12-12 17:48:30.393234119 +0000 UTC m=+24.178816333" watchObservedRunningTime="2025-12-12 17:48:31.401325533 +0000 UTC m=+25.186907747" Dec 12 17:48:33.307949 kubelet[2647]: E1212 17:48:33.307881 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pjknm" podUID="d3e06765-0219-4a57-abc4-db29c031f701" Dec 12 17:48:33.974843 containerd[1498]: time="2025-12-12T17:48:33.974784740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:48:33.975948 containerd[1498]: time="2025-12-12T17:48:33.975787074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Dec 12 17:48:33.976761 containerd[1498]: time="2025-12-12T17:48:33.976702526Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:48:33.980802 containerd[1498]: time="2025-12-12T17:48:33.980763461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:48:33.981503 containerd[1498]: time="2025-12-12T17:48:33.981471151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.594153984s" Dec 12 17:48:33.981594 containerd[1498]: time="2025-12-12T17:48:33.981580192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 12 17:48:33.988180 containerd[1498]: time="2025-12-12T17:48:33.988142322Z" level=info msg="CreateContainer within sandbox \"ae420d9e1e94b74ff73efa4ea92f878b42d233c5b2b375a4a290ca0b4cb69448\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 
12 17:48:34.000454 containerd[1498]: time="2025-12-12T17:48:34.000408848Z" level=info msg="Container 1612227366b11e694f6a0804ffeb4ddd5d0428eb2b28635b332228dc8075d983: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:48:34.009663 containerd[1498]: time="2025-12-12T17:48:34.009609610Z" level=info msg="CreateContainer within sandbox \"ae420d9e1e94b74ff73efa4ea92f878b42d233c5b2b375a4a290ca0b4cb69448\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1612227366b11e694f6a0804ffeb4ddd5d0428eb2b28635b332228dc8075d983\"" Dec 12 17:48:34.010426 containerd[1498]: time="2025-12-12T17:48:34.010383140Z" level=info msg="StartContainer for \"1612227366b11e694f6a0804ffeb4ddd5d0428eb2b28635b332228dc8075d983\"" Dec 12 17:48:34.011969 containerd[1498]: time="2025-12-12T17:48:34.011942840Z" level=info msg="connecting to shim 1612227366b11e694f6a0804ffeb4ddd5d0428eb2b28635b332228dc8075d983" address="unix:///run/containerd/s/52a9deabbeae5b791726adcc32ea1fe5e709d9d715be91a37cb2343dc0b81804" protocol=ttrpc version=3 Dec 12 17:48:34.034905 systemd[1]: Started cri-containerd-1612227366b11e694f6a0804ffeb4ddd5d0428eb2b28635b332228dc8075d983.scope - libcontainer container 1612227366b11e694f6a0804ffeb4ddd5d0428eb2b28635b332228dc8075d983. Dec 12 17:48:34.095918 containerd[1498]: time="2025-12-12T17:48:34.095842898Z" level=info msg="StartContainer for \"1612227366b11e694f6a0804ffeb4ddd5d0428eb2b28635b332228dc8075d983\" returns successfully" Dec 12 17:48:34.705025 systemd[1]: cri-containerd-1612227366b11e694f6a0804ffeb4ddd5d0428eb2b28635b332228dc8075d983.scope: Deactivated successfully. Dec 12 17:48:34.705349 systemd[1]: cri-containerd-1612227366b11e694f6a0804ffeb4ddd5d0428eb2b28635b332228dc8075d983.scope: Consumed 448ms CPU time, 176.5M memory peak, 2.4M read from disk, 165.9M written to disk. Dec 12 17:48:34.708891 containerd[1498]: time="2025-12-12T17:48:34.708696919Z" level=info msg="received container exit event container_id:\"1612227366b11e694f6a0804ffeb4ddd5d0428eb2b28635b332228dc8075d983\" id:\"1612227366b11e694f6a0804ffeb4ddd5d0428eb2b28635b332228dc8075d983\" pid:3413 exited_at:{seconds:1765561714 nanos:708337194}" Dec 12 17:48:34.728284 kubelet[2647]: I1212 17:48:34.728251 2647 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 17:48:34.729679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1612227366b11e694f6a0804ffeb4ddd5d0428eb2b28635b332228dc8075d983-rootfs.mount: Deactivated successfully. Dec 12 17:48:34.792035 systemd[1]: Created slice kubepods-burstable-pod1734759f_91db_4792_a5b1_49dc654f13fd.slice - libcontainer container kubepods-burstable-pod1734759f_91db_4792_a5b1_49dc654f13fd.slice. Dec 12 17:48:34.802765 systemd[1]: Created slice kubepods-burstable-pode923402f_6f38_4464_85a3_ba4ab295e18c.slice - libcontainer container kubepods-burstable-pode923402f_6f38_4464_85a3_ba4ab295e18c.slice. 
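
A side note on the "Created slice kubepods-burstable-pod….slice" units that start appearing here: under the systemd cgroup driver, each pod gets a slice named after its QoS class and UID, with the UID's dashes turned into underscores because systemd reserves "-" as its hierarchy separator. A small sketch of that mapping, assuming the convention holds as it appears in this log (podSliceName is an illustrative helper, not kubelet code):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName sketches the naming behind the "Created slice ..." entries:
// QoS class plus pod UID, with the UID's dashes replaced by underscores.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Reproduces the two coredns slices created just above.
	fmt.Println(podSliceName("burstable", "1734759f-91db-4792-a5b1-49dc654f13fd"))
	fmt.Println(podSliceName("burstable", "e923402f-6f38-4464-85a3-ba4ab295e18c"))
}
```
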
Dec 12 17:48:34.806889 kubelet[2647]: I1212 17:48:34.806432 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1734759f-91db-4792-a5b1-49dc654f13fd-config-volume\") pod \"coredns-674b8bbfcf-c6mwg\" (UID: \"1734759f-91db-4792-a5b1-49dc654f13fd\") " pod="kube-system/coredns-674b8bbfcf-c6mwg" Dec 12 17:48:34.807155 kubelet[2647]: I1212 17:48:34.807098 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crn88\" (UniqueName: \"kubernetes.io/projected/e923402f-6f38-4464-85a3-ba4ab295e18c-kube-api-access-crn88\") pod \"coredns-674b8bbfcf-5mf4q\" (UID: \"e923402f-6f38-4464-85a3-ba4ab295e18c\") " pod="kube-system/coredns-674b8bbfcf-5mf4q" Dec 12 17:48:34.807206 kubelet[2647]: I1212 17:48:34.807168 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e923402f-6f38-4464-85a3-ba4ab295e18c-config-volume\") pod \"coredns-674b8bbfcf-5mf4q\" (UID: \"e923402f-6f38-4464-85a3-ba4ab295e18c\") " pod="kube-system/coredns-674b8bbfcf-5mf4q" Dec 12 17:48:34.807206 kubelet[2647]: I1212 17:48:34.807194 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zl96\" (UniqueName: \"kubernetes.io/projected/1734759f-91db-4792-a5b1-49dc654f13fd-kube-api-access-8zl96\") pod \"coredns-674b8bbfcf-c6mwg\" (UID: \"1734759f-91db-4792-a5b1-49dc654f13fd\") " pod="kube-system/coredns-674b8bbfcf-c6mwg" Dec 12 17:48:34.813501 systemd[1]: Created slice kubepods-besteffort-pod6b58ec45_43db_4087_8da6_607a024a3bf6.slice - libcontainer container kubepods-besteffort-pod6b58ec45_43db_4087_8da6_607a024a3bf6.slice. Dec 12 17:48:34.820700 systemd[1]: Created slice kubepods-besteffort-podbbff352b_faf1_4f78_9dff_cf0cfd8ed7b8.slice - libcontainer container kubepods-besteffort-podbbff352b_faf1_4f78_9dff_cf0cfd8ed7b8.slice. Dec 12 17:48:34.828288 systemd[1]: Created slice kubepods-besteffort-podb9790108_193f_4a45_8176_db9ea1c52e1d.slice - libcontainer container kubepods-besteffort-podb9790108_193f_4a45_8176_db9ea1c52e1d.slice. Dec 12 17:48:34.835841 systemd[1]: Created slice kubepods-besteffort-pod1212be5b_3ca2_46ca_81c7_0e9f1f64b2a7.slice - libcontainer container kubepods-besteffort-pod1212be5b_3ca2_46ca_81c7_0e9f1f64b2a7.slice. Dec 12 17:48:34.840936 systemd[1]: Created slice kubepods-besteffort-pod9cda2fbf_77ea_4fc5_a7c0_77df9ce96ffe.slice - libcontainer container kubepods-besteffort-pod9cda2fbf_77ea_4fc5_a7c0_77df9ce96ffe.slice. 
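
The UniqueName strings in the VerifyControllerAttachedVolume entries follow a visible pattern: volume plugin name, then the pod UID, then the volume name. A tiny sketch reproducing the identifiers seen above (uniqueVolumeName is an illustrative helper; the format is inferred from these log lines, not taken from kubelet source):

```go
package main

import "fmt"

// uniqueVolumeName composes "<plugin>/<pod UID>-<volume name>", the identifier
// format visible in the reconciler entries above.
func uniqueVolumeName(plugin, podUID, volume string) string {
	return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
}

func main() {
	fmt.Println(uniqueVolumeName("kubernetes.io/configmap", "1734759f-91db-4792-a5b1-49dc654f13fd", "config-volume"))
	fmt.Println(uniqueVolumeName("kubernetes.io/projected", "e923402f-6f38-4464-85a3-ba4ab295e18c", "kube-api-access-crn88"))
}
```
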
Dec 12 17:48:34.908430 kubelet[2647]: I1212 17:48:34.908318 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btk8x\" (UniqueName: \"kubernetes.io/projected/b9790108-193f-4a45-8176-db9ea1c52e1d-kube-api-access-btk8x\") pod \"whisker-85745fbb58-7h7pj\" (UID: \"b9790108-193f-4a45-8176-db9ea1c52e1d\") " pod="calico-system/whisker-85745fbb58-7h7pj" Dec 12 17:48:34.908430 kubelet[2647]: I1212 17:48:34.908374 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsvqt\" (UniqueName: \"kubernetes.io/projected/9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe-kube-api-access-qsvqt\") pod \"goldmane-666569f655-nrs7m\" (UID: \"9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe\") " pod="calico-system/goldmane-666569f655-nrs7m" Dec 12 17:48:34.908658 kubelet[2647]: I1212 17:48:34.908631 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7-calico-apiserver-certs\") pod \"calico-apiserver-84f498ff75-4h6kh\" (UID: \"1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7\") " pod="calico-apiserver/calico-apiserver-84f498ff75-4h6kh" Dec 12 17:48:34.908866 kubelet[2647]: I1212 17:48:34.908844 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe-goldmane-ca-bundle\") pod \"goldmane-666569f655-nrs7m\" (UID: \"9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe\") " pod="calico-system/goldmane-666569f655-nrs7m" Dec 12 17:48:34.909066 kubelet[2647]: I1212 17:48:34.909026 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b58ec45-43db-4087-8da6-607a024a3bf6-tigera-ca-bundle\") pod \"calico-kube-controllers-5dcc76bd57-jj9xs\" (UID: \"6b58ec45-43db-4087-8da6-607a024a3bf6\") " pod="calico-system/calico-kube-controllers-5dcc76bd57-jj9xs" Dec 12 17:48:34.909194 kubelet[2647]: I1212 17:48:34.909156 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhh7j\" (UniqueName: \"kubernetes.io/projected/1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7-kube-api-access-bhh7j\") pod \"calico-apiserver-84f498ff75-4h6kh\" (UID: \"1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7\") " pod="calico-apiserver/calico-apiserver-84f498ff75-4h6kh" Dec 12 17:48:34.909332 kubelet[2647]: I1212 17:48:34.909289 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe-config\") pod \"goldmane-666569f655-nrs7m\" (UID: \"9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe\") " pod="calico-system/goldmane-666569f655-nrs7m" Dec 12 17:48:34.909496 kubelet[2647]: I1212 17:48:34.909422 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bbff352b-faf1-4f78-9dff-cf0cfd8ed7b8-calico-apiserver-certs\") pod \"calico-apiserver-84f498ff75-dpfb7\" (UID: \"bbff352b-faf1-4f78-9dff-cf0cfd8ed7b8\") " pod="calico-apiserver/calico-apiserver-84f498ff75-dpfb7" Dec 12 17:48:34.909622 kubelet[2647]: I1212 17:48:34.909486 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5hxbt\" (UniqueName: \"kubernetes.io/projected/bbff352b-faf1-4f78-9dff-cf0cfd8ed7b8-kube-api-access-5hxbt\") pod \"calico-apiserver-84f498ff75-dpfb7\" (UID: \"bbff352b-faf1-4f78-9dff-cf0cfd8ed7b8\") " pod="calico-apiserver/calico-apiserver-84f498ff75-dpfb7" Dec 12 17:48:34.909622 kubelet[2647]: I1212 17:48:34.909590 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8v9d\" (UniqueName: \"kubernetes.io/projected/6b58ec45-43db-4087-8da6-607a024a3bf6-kube-api-access-s8v9d\") pod \"calico-kube-controllers-5dcc76bd57-jj9xs\" (UID: \"6b58ec45-43db-4087-8da6-607a024a3bf6\") " pod="calico-system/calico-kube-controllers-5dcc76bd57-jj9xs" Dec 12 17:48:34.910611 kubelet[2647]: I1212 17:48:34.909744 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b9790108-193f-4a45-8176-db9ea1c52e1d-whisker-backend-key-pair\") pod \"whisker-85745fbb58-7h7pj\" (UID: \"b9790108-193f-4a45-8176-db9ea1c52e1d\") " pod="calico-system/whisker-85745fbb58-7h7pj" Dec 12 17:48:34.910611 kubelet[2647]: I1212 17:48:34.909795 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe-goldmane-key-pair\") pod \"goldmane-666569f655-nrs7m\" (UID: \"9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe\") " pod="calico-system/goldmane-666569f655-nrs7m" Dec 12 17:48:34.910611 kubelet[2647]: I1212 17:48:34.909829 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9790108-193f-4a45-8176-db9ea1c52e1d-whisker-ca-bundle\") pod \"whisker-85745fbb58-7h7pj\" (UID: \"b9790108-193f-4a45-8176-db9ea1c52e1d\") " pod="calico-system/whisker-85745fbb58-7h7pj" Dec 12 17:48:35.097690 containerd[1498]: time="2025-12-12T17:48:35.097559607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c6mwg,Uid:1734759f-91db-4792-a5b1-49dc654f13fd,Namespace:kube-system,Attempt:0,}" Dec 12 17:48:35.108550 containerd[1498]: time="2025-12-12T17:48:35.108472224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5mf4q,Uid:e923402f-6f38-4464-85a3-ba4ab295e18c,Namespace:kube-system,Attempt:0,}" Dec 12 17:48:35.120034 containerd[1498]: time="2025-12-12T17:48:35.119983649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dcc76bd57-jj9xs,Uid:6b58ec45-43db-4087-8da6-607a024a3bf6,Namespace:calico-system,Attempt:0,}" Dec 12 17:48:35.125595 containerd[1498]: time="2025-12-12T17:48:35.125555320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f498ff75-dpfb7,Uid:bbff352b-faf1-4f78-9dff-cf0cfd8ed7b8,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:48:35.132990 containerd[1498]: time="2025-12-12T17:48:35.132928773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85745fbb58-7h7pj,Uid:b9790108-193f-4a45-8176-db9ea1c52e1d,Namespace:calico-system,Attempt:0,}" Dec 12 17:48:35.140322 containerd[1498]: time="2025-12-12T17:48:35.140264625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f498ff75-4h6kh,Uid:1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:48:35.144175 containerd[1498]: time="2025-12-12T17:48:35.144080353Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-nrs7m,Uid:9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe,Namespace:calico-system,Attempt:0,}" Dec 12 17:48:35.240104 containerd[1498]: time="2025-12-12T17:48:35.240048603Z" level=error msg="Failed to destroy network for sandbox \"79a68b3b51679d5f9a04725d54bb264fc893d03147a6518d904ac238c572006c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.243996 containerd[1498]: time="2025-12-12T17:48:35.243645609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f498ff75-dpfb7,Uid:bbff352b-faf1-4f78-9dff-cf0cfd8ed7b8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"79a68b3b51679d5f9a04725d54bb264fc893d03147a6518d904ac238c572006c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.246208 kubelet[2647]: E1212 17:48:35.246145 2647 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79a68b3b51679d5f9a04725d54bb264fc893d03147a6518d904ac238c572006c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.251900 kubelet[2647]: E1212 17:48:35.251595 2647 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79a68b3b51679d5f9a04725d54bb264fc893d03147a6518d904ac238c572006c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84f498ff75-dpfb7" Dec 12 17:48:35.251992 containerd[1498]: time="2025-12-12T17:48:35.251719271Z" level=error msg="Failed to destroy network for sandbox \"4cfbcecf0b37e57d87f9016bcee708d16d6a16c11c4bd3fb5b28a4ec9ad3d525\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.252112 kubelet[2647]: E1212 17:48:35.252079 2647 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79a68b3b51679d5f9a04725d54bb264fc893d03147a6518d904ac238c572006c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84f498ff75-dpfb7" Dec 12 17:48:35.252320 kubelet[2647]: E1212 17:48:35.252245 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84f498ff75-dpfb7_calico-apiserver(bbff352b-faf1-4f78-9dff-cf0cfd8ed7b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84f498ff75-dpfb7_calico-apiserver(bbff352b-faf1-4f78-9dff-cf0cfd8ed7b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79a68b3b51679d5f9a04725d54bb264fc893d03147a6518d904ac238c572006c\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84f498ff75-dpfb7" podUID="bbff352b-faf1-4f78-9dff-cf0cfd8ed7b8" Dec 12 17:48:35.253072 containerd[1498]: time="2025-12-12T17:48:35.253021567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nrs7m,Uid:9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cfbcecf0b37e57d87f9016bcee708d16d6a16c11c4bd3fb5b28a4ec9ad3d525\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.253333 kubelet[2647]: E1212 17:48:35.253299 2647 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cfbcecf0b37e57d87f9016bcee708d16d6a16c11c4bd3fb5b28a4ec9ad3d525\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.253380 kubelet[2647]: E1212 17:48:35.253356 2647 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cfbcecf0b37e57d87f9016bcee708d16d6a16c11c4bd3fb5b28a4ec9ad3d525\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nrs7m" Dec 12 17:48:35.253410 kubelet[2647]: E1212 17:48:35.253375 2647 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cfbcecf0b37e57d87f9016bcee708d16d6a16c11c4bd3fb5b28a4ec9ad3d525\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nrs7m" Dec 12 17:48:35.253434 kubelet[2647]: E1212 17:48:35.253415 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-nrs7m_calico-system(9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-nrs7m_calico-system(9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cfbcecf0b37e57d87f9016bcee708d16d6a16c11c4bd3fb5b28a4ec9ad3d525\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-nrs7m" podUID="9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe" Dec 12 17:48:35.261230 containerd[1498]: time="2025-12-12T17:48:35.260901026Z" level=error msg="Failed to destroy network for sandbox \"39907bb4df4444239b6a7d410f5a9b6a9821d34ff1966181646ef230fea8cedc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.262903 containerd[1498]: time="2025-12-12T17:48:35.262862011Z" level=error msg="Failed to 
destroy network for sandbox \"f3c0c0e2111b4316dc490da29082b6807e13e89d020e9c88085dfe11b4145f62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.263052 containerd[1498]: time="2025-12-12T17:48:35.263011533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c6mwg,Uid:1734759f-91db-4792-a5b1-49dc654f13fd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"39907bb4df4444239b6a7d410f5a9b6a9821d34ff1966181646ef230fea8cedc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.263298 kubelet[2647]: E1212 17:48:35.263231 2647 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39907bb4df4444239b6a7d410f5a9b6a9821d34ff1966181646ef230fea8cedc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.263382 kubelet[2647]: E1212 17:48:35.263318 2647 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39907bb4df4444239b6a7d410f5a9b6a9821d34ff1966181646ef230fea8cedc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-c6mwg" Dec 12 17:48:35.263437 kubelet[2647]: E1212 17:48:35.263384 2647 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39907bb4df4444239b6a7d410f5a9b6a9821d34ff1966181646ef230fea8cedc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-c6mwg" Dec 12 17:48:35.263477 kubelet[2647]: E1212 17:48:35.263446 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-c6mwg_kube-system(1734759f-91db-4792-a5b1-49dc654f13fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-c6mwg_kube-system(1734759f-91db-4792-a5b1-49dc654f13fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39907bb4df4444239b6a7d410f5a9b6a9821d34ff1966181646ef230fea8cedc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-c6mwg" podUID="1734759f-91db-4792-a5b1-49dc654f13fd" Dec 12 17:48:35.263865 containerd[1498]: time="2025-12-12T17:48:35.263800423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dcc76bd57-jj9xs,Uid:6b58ec45-43db-4087-8da6-607a024a3bf6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3c0c0e2111b4316dc490da29082b6807e13e89d020e9c88085dfe11b4145f62\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.264976 kubelet[2647]: E1212 17:48:35.264026 2647 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3c0c0e2111b4316dc490da29082b6807e13e89d020e9c88085dfe11b4145f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.264976 kubelet[2647]: E1212 17:48:35.264070 2647 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3c0c0e2111b4316dc490da29082b6807e13e89d020e9c88085dfe11b4145f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dcc76bd57-jj9xs" Dec 12 17:48:35.264976 kubelet[2647]: E1212 17:48:35.264089 2647 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3c0c0e2111b4316dc490da29082b6807e13e89d020e9c88085dfe11b4145f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dcc76bd57-jj9xs" Dec 12 17:48:35.265106 kubelet[2647]: E1212 17:48:35.264128 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5dcc76bd57-jj9xs_calico-system(6b58ec45-43db-4087-8da6-607a024a3bf6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5dcc76bd57-jj9xs_calico-system(6b58ec45-43db-4087-8da6-607a024a3bf6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3c0c0e2111b4316dc490da29082b6807e13e89d020e9c88085dfe11b4145f62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5dcc76bd57-jj9xs" podUID="6b58ec45-43db-4087-8da6-607a024a3bf6" Dec 12 17:48:35.265264 containerd[1498]: time="2025-12-12T17:48:35.265231641Z" level=error msg="Failed to destroy network for sandbox \"c2d5634980e8bbbc6559babd658326e25ae7d3dabcf5b660d7ab586ca8c0b776\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.266336 containerd[1498]: time="2025-12-12T17:48:35.266298255Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85745fbb58-7h7pj,Uid:b9790108-193f-4a45-8176-db9ea1c52e1d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2d5634980e8bbbc6559babd658326e25ae7d3dabcf5b660d7ab586ca8c0b776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.266636 kubelet[2647]: E1212 17:48:35.266607 2647 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"c2d5634980e8bbbc6559babd658326e25ae7d3dabcf5b660d7ab586ca8c0b776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.266940 kubelet[2647]: E1212 17:48:35.266772 2647 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2d5634980e8bbbc6559babd658326e25ae7d3dabcf5b660d7ab586ca8c0b776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85745fbb58-7h7pj" Dec 12 17:48:35.266994 kubelet[2647]: E1212 17:48:35.266939 2647 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2d5634980e8bbbc6559babd658326e25ae7d3dabcf5b660d7ab586ca8c0b776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85745fbb58-7h7pj" Dec 12 17:48:35.267020 kubelet[2647]: E1212 17:48:35.266982 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-85745fbb58-7h7pj_calico-system(b9790108-193f-4a45-8176-db9ea1c52e1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-85745fbb58-7h7pj_calico-system(b9790108-193f-4a45-8176-db9ea1c52e1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2d5634980e8bbbc6559babd658326e25ae7d3dabcf5b660d7ab586ca8c0b776\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-85745fbb58-7h7pj" podUID="b9790108-193f-4a45-8176-db9ea1c52e1d" Dec 12 17:48:35.269515 containerd[1498]: time="2025-12-12T17:48:35.269481135Z" level=error msg="Failed to destroy network for sandbox \"1c00eede4f2e48344c08b85f5efa84c0ec31da3944f26c3d42b478e94d2b458f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.270790 containerd[1498]: time="2025-12-12T17:48:35.270655069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5mf4q,Uid:e923402f-6f38-4464-85a3-ba4ab295e18c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c00eede4f2e48344c08b85f5efa84c0ec31da3944f26c3d42b478e94d2b458f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.270884 kubelet[2647]: E1212 17:48:35.270833 2647 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c00eede4f2e48344c08b85f5efa84c0ec31da3944f26c3d42b478e94d2b458f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.270916 kubelet[2647]: E1212 
17:48:35.270879 2647 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c00eede4f2e48344c08b85f5efa84c0ec31da3944f26c3d42b478e94d2b458f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5mf4q" Dec 12 17:48:35.270916 kubelet[2647]: E1212 17:48:35.270901 2647 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c00eede4f2e48344c08b85f5efa84c0ec31da3944f26c3d42b478e94d2b458f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5mf4q" Dec 12 17:48:35.270970 kubelet[2647]: E1212 17:48:35.270933 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-5mf4q_kube-system(e923402f-6f38-4464-85a3-ba4ab295e18c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-5mf4q_kube-system(e923402f-6f38-4464-85a3-ba4ab295e18c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c00eede4f2e48344c08b85f5efa84c0ec31da3944f26c3d42b478e94d2b458f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5mf4q" podUID="e923402f-6f38-4464-85a3-ba4ab295e18c" Dec 12 17:48:35.274737 containerd[1498]: time="2025-12-12T17:48:35.274659560Z" level=error msg="Failed to destroy network for sandbox \"cf7c518417905d0e2419ee5d1faade258347d491122d148d1b62daa808cac058\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.275646 containerd[1498]: time="2025-12-12T17:48:35.275608612Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f498ff75-4h6kh,Uid:1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf7c518417905d0e2419ee5d1faade258347d491122d148d1b62daa808cac058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.275980 kubelet[2647]: E1212 17:48:35.275851 2647 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf7c518417905d0e2419ee5d1faade258347d491122d148d1b62daa808cac058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.275980 kubelet[2647]: E1212 17:48:35.275926 2647 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf7c518417905d0e2419ee5d1faade258347d491122d148d1b62daa808cac058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84f498ff75-4h6kh" Dec 12 17:48:35.275980 kubelet[2647]: E1212 17:48:35.275946 2647 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf7c518417905d0e2419ee5d1faade258347d491122d148d1b62daa808cac058\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84f498ff75-4h6kh" Dec 12 17:48:35.276213 kubelet[2647]: E1212 17:48:35.276112 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84f498ff75-4h6kh_calico-apiserver(1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84f498ff75-4h6kh_calico-apiserver(1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf7c518417905d0e2419ee5d1faade258347d491122d148d1b62daa808cac058\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84f498ff75-4h6kh" podUID="1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7" Dec 12 17:48:35.319159 systemd[1]: Created slice kubepods-besteffort-podd3e06765_0219_4a57_abc4_db29c031f701.slice - libcontainer container kubepods-besteffort-podd3e06765_0219_4a57_abc4_db29c031f701.slice. Dec 12 17:48:35.321747 containerd[1498]: time="2025-12-12T17:48:35.321697913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pjknm,Uid:d3e06765-0219-4a57-abc4-db29c031f701,Namespace:calico-system,Attempt:0,}" Dec 12 17:48:35.364842 containerd[1498]: time="2025-12-12T17:48:35.364701255Z" level=error msg="Failed to destroy network for sandbox \"5dfa47fe25f9d832496fd89e22b162842c1631958757d588639b306397bc7779\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.366660 containerd[1498]: time="2025-12-12T17:48:35.366628200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pjknm,Uid:d3e06765-0219-4a57-abc4-db29c031f701,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dfa47fe25f9d832496fd89e22b162842c1631958757d588639b306397bc7779\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.366927 kubelet[2647]: E1212 17:48:35.366884 2647 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dfa47fe25f9d832496fd89e22b162842c1631958757d588639b306397bc7779\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:48:35.366973 kubelet[2647]: E1212 17:48:35.366951 2647 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5dfa47fe25f9d832496fd89e22b162842c1631958757d588639b306397bc7779\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pjknm" Dec 12 17:48:35.367012 kubelet[2647]: E1212 17:48:35.366972 2647 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dfa47fe25f9d832496fd89e22b162842c1631958757d588639b306397bc7779\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pjknm" Dec 12 17:48:35.367051 kubelet[2647]: E1212 17:48:35.367023 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pjknm_calico-system(d3e06765-0219-4a57-abc4-db29c031f701)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pjknm_calico-system(d3e06765-0219-4a57-abc4-db29c031f701)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5dfa47fe25f9d832496fd89e22b162842c1631958757d588639b306397bc7779\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pjknm" podUID="d3e06765-0219-4a57-abc4-db29c031f701" Dec 12 17:48:35.406220 containerd[1498]: time="2025-12-12T17:48:35.406172018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 17:48:38.234307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount365567882.mount: Deactivated successfully. 
Dec 12 17:48:38.533400 containerd[1498]: time="2025-12-12T17:48:38.533209268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:48:38.533907 containerd[1498]: time="2025-12-12T17:48:38.533882036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Dec 12 17:48:38.534929 containerd[1498]: time="2025-12-12T17:48:38.534888687Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:48:38.536880 containerd[1498]: time="2025-12-12T17:48:38.536842310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:48:38.537380 containerd[1498]: time="2025-12-12T17:48:38.537347195Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.131130776s" Dec 12 17:48:38.537380 containerd[1498]: time="2025-12-12T17:48:38.537373916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 12 17:48:38.561766 containerd[1498]: time="2025-12-12T17:48:38.561723912Z" level=info msg="CreateContainer within sandbox \"ae420d9e1e94b74ff73efa4ea92f878b42d233c5b2b375a4a290ca0b4cb69448\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 17:48:38.611758 containerd[1498]: time="2025-12-12T17:48:38.610997111Z" level=info msg="Container e228a82d67c2a97b6f9bbf3b5001835d84d028270a5ebf4108cb96df2eeee27a: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:48:38.621024 containerd[1498]: time="2025-12-12T17:48:38.620885504Z" level=info msg="CreateContainer within sandbox \"ae420d9e1e94b74ff73efa4ea92f878b42d233c5b2b375a4a290ca0b4cb69448\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e228a82d67c2a97b6f9bbf3b5001835d84d028270a5ebf4108cb96df2eeee27a\"" Dec 12 17:48:38.621814 containerd[1498]: time="2025-12-12T17:48:38.621784114Z" level=info msg="StartContainer for \"e228a82d67c2a97b6f9bbf3b5001835d84d028270a5ebf4108cb96df2eeee27a\"" Dec 12 17:48:38.623562 containerd[1498]: time="2025-12-12T17:48:38.623526053Z" level=info msg="connecting to shim e228a82d67c2a97b6f9bbf3b5001835d84d028270a5ebf4108cb96df2eeee27a" address="unix:///run/containerd/s/52a9deabbeae5b791726adcc32ea1fe5e709d9d715be91a37cb2343dc0b81804" protocol=ttrpc version=3 Dec 12 17:48:38.643905 systemd[1]: Started cri-containerd-e228a82d67c2a97b6f9bbf3b5001835d84d028270a5ebf4108cb96df2eeee27a.scope - libcontainer container e228a82d67c2a97b6f9bbf3b5001835d84d028270a5ebf4108cb96df2eeee27a. Dec 12 17:48:38.720335 containerd[1498]: time="2025-12-12T17:48:38.719941348Z" level=info msg="StartContainer for \"e228a82d67c2a97b6f9bbf3b5001835d84d028270a5ebf4108cb96df2eeee27a\" returns successfully" Dec 12 17:48:38.837922 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 17:48:38.838021 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Dec 12 17:48:39.044267 kubelet[2647]: I1212 17:48:39.043854 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9790108-193f-4a45-8176-db9ea1c52e1d-whisker-ca-bundle\") pod \"b9790108-193f-4a45-8176-db9ea1c52e1d\" (UID: \"b9790108-193f-4a45-8176-db9ea1c52e1d\") " Dec 12 17:48:39.044267 kubelet[2647]: I1212 17:48:39.043906 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b9790108-193f-4a45-8176-db9ea1c52e1d-whisker-backend-key-pair\") pod \"b9790108-193f-4a45-8176-db9ea1c52e1d\" (UID: \"b9790108-193f-4a45-8176-db9ea1c52e1d\") " Dec 12 17:48:39.044267 kubelet[2647]: I1212 17:48:39.043936 2647 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btk8x\" (UniqueName: \"kubernetes.io/projected/b9790108-193f-4a45-8176-db9ea1c52e1d-kube-api-access-btk8x\") pod \"b9790108-193f-4a45-8176-db9ea1c52e1d\" (UID: \"b9790108-193f-4a45-8176-db9ea1c52e1d\") " Dec 12 17:48:39.070904 kubelet[2647]: I1212 17:48:39.070861 2647 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9790108-193f-4a45-8176-db9ea1c52e1d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b9790108-193f-4a45-8176-db9ea1c52e1d" (UID: "b9790108-193f-4a45-8176-db9ea1c52e1d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:48:39.071327 kubelet[2647]: I1212 17:48:39.071294 2647 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9790108-193f-4a45-8176-db9ea1c52e1d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b9790108-193f-4a45-8176-db9ea1c52e1d" (UID: "b9790108-193f-4a45-8176-db9ea1c52e1d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:48:39.071728 kubelet[2647]: I1212 17:48:39.071685 2647 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9790108-193f-4a45-8176-db9ea1c52e1d-kube-api-access-btk8x" (OuterVolumeSpecName: "kube-api-access-btk8x") pod "b9790108-193f-4a45-8176-db9ea1c52e1d" (UID: "b9790108-193f-4a45-8176-db9ea1c52e1d"). InnerVolumeSpecName "kube-api-access-btk8x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:48:39.145212 kubelet[2647]: I1212 17:48:39.145165 2647 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9790108-193f-4a45-8176-db9ea1c52e1d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 12 17:48:39.145212 kubelet[2647]: I1212 17:48:39.145196 2647 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b9790108-193f-4a45-8176-db9ea1c52e1d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 12 17:48:39.145212 kubelet[2647]: I1212 17:48:39.145207 2647 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-btk8x\" (UniqueName: \"kubernetes.io/projected/b9790108-193f-4a45-8176-db9ea1c52e1d-kube-api-access-btk8x\") on node \"localhost\" DevicePath \"\"" Dec 12 17:48:39.235210 systemd[1]: var-lib-kubelet-pods-b9790108\x2d193f\x2d4a45\x2d8176\x2ddb9ea1c52e1d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbtk8x.mount: Deactivated successfully. 
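
The long var-lib-kubelet-pods-….mount unit names deactivated here are the kubelet volume paths run through systemd's unit-name escaping: the leading "/" is dropped, remaining "/" separators become "-", and bytes outside [A-Za-z0-9:_.] are hex-escaped as \xNN, which is why the UID's dashes show up as \x2d and the "~" in kubernetes.io~projected as \x7e. A simplified sketch of that escaping, ignoring corner cases such as an empty path or a leading dot:

```go
package main

import "fmt"

// escapePath is a simplified sketch of systemd path escaping (as done by
// systemd-escape --path): strip the leading "/", turn remaining "/" into "-",
// and hex-escape every byte outside [A-Za-z0-9:_.] as \xNN.
func escapePath(p string) string {
	if len(p) > 0 && p[0] == '/' {
		p = p[1:]
	}
	out := ""
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			out += "-"
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9',
			c == ':', c == '_', c == '.':
			out += string(c)
		default:
			out += fmt.Sprintf(`\x%02x`, c)
		}
	}
	return out
}

func main() {
	// Reproduces the projected-token mount unit name seen in the log above.
	p := "/var/lib/kubelet/pods/b9790108-193f-4a45-8176-db9ea1c52e1d/volumes/kubernetes.io~projected/kube-api-access-btk8x"
	fmt.Println(escapePath(p) + ".mount")
}
```
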
Dec 12 17:48:39.235316 systemd[1]: var-lib-kubelet-pods-b9790108\x2d193f\x2d4a45\x2d8176\x2ddb9ea1c52e1d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 17:48:39.422798 systemd[1]: Removed slice kubepods-besteffort-podb9790108_193f_4a45_8176_db9ea1c52e1d.slice - libcontainer container kubepods-besteffort-podb9790108_193f_4a45_8176_db9ea1c52e1d.slice. Dec 12 17:48:39.455360 kubelet[2647]: I1212 17:48:39.454943 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dhjrk" podStartSLOduration=2.003677281 podStartE2EDuration="12.440817688s" podCreationTimestamp="2025-12-12 17:48:27 +0000 UTC" firstStartedPulling="2025-12-12 17:48:28.100901676 +0000 UTC m=+21.886483890" lastFinishedPulling="2025-12-12 17:48:38.538042083 +0000 UTC m=+32.323624297" observedRunningTime="2025-12-12 17:48:39.4382713 +0000 UTC m=+33.223853514" watchObservedRunningTime="2025-12-12 17:48:39.440817688 +0000 UTC m=+33.226399902" Dec 12 17:48:39.497014 systemd[1]: Created slice kubepods-besteffort-pod89fc1364_299e_46d0_8995_62a57f819657.slice - libcontainer container kubepods-besteffort-pod89fc1364_299e_46d0_8995_62a57f819657.slice. Dec 12 17:48:39.557807 kubelet[2647]: I1212 17:48:39.557765 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/89fc1364-299e-46d0-8995-62a57f819657-whisker-backend-key-pair\") pod \"whisker-775b66bf7b-w8t5k\" (UID: \"89fc1364-299e-46d0-8995-62a57f819657\") " pod="calico-system/whisker-775b66bf7b-w8t5k" Dec 12 17:48:39.558001 kubelet[2647]: I1212 17:48:39.557987 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89fc1364-299e-46d0-8995-62a57f819657-whisker-ca-bundle\") pod \"whisker-775b66bf7b-w8t5k\" (UID: \"89fc1364-299e-46d0-8995-62a57f819657\") " pod="calico-system/whisker-775b66bf7b-w8t5k" Dec 12 17:48:39.558085 kubelet[2647]: I1212 17:48:39.558073 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2b4h\" (UniqueName: \"kubernetes.io/projected/89fc1364-299e-46d0-8995-62a57f819657-kube-api-access-p2b4h\") pod \"whisker-775b66bf7b-w8t5k\" (UID: \"89fc1364-299e-46d0-8995-62a57f819657\") " pod="calico-system/whisker-775b66bf7b-w8t5k" Dec 12 17:48:39.801541 containerd[1498]: time="2025-12-12T17:48:39.801494609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-775b66bf7b-w8t5k,Uid:89fc1364-299e-46d0-8995-62a57f819657,Namespace:calico-system,Attempt:0,}" Dec 12 17:48:39.958200 systemd-networkd[1409]: cali6ac04fb8407: Link UP Dec 12 17:48:39.958778 systemd-networkd[1409]: cali6ac04fb8407: Gained carrier Dec 12 17:48:39.973552 containerd[1498]: 2025-12-12 17:48:39.821 [INFO][3790] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:48:39.973552 containerd[1498]: 2025-12-12 17:48:39.852 [INFO][3790] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--775b66bf7b--w8t5k-eth0 whisker-775b66bf7b- calico-system 89fc1364-299e-46d0-8995-62a57f819657 869 0 2025-12-12 17:48:39 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:775b66bf7b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} 
{k8s localhost whisker-775b66bf7b-w8t5k eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6ac04fb8407 [] [] }} ContainerID="1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc" Namespace="calico-system" Pod="whisker-775b66bf7b-w8t5k" WorkloadEndpoint="localhost-k8s-whisker--775b66bf7b--w8t5k-" Dec 12 17:48:39.973552 containerd[1498]: 2025-12-12 17:48:39.852 [INFO][3790] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc" Namespace="calico-system" Pod="whisker-775b66bf7b-w8t5k" WorkloadEndpoint="localhost-k8s-whisker--775b66bf7b--w8t5k-eth0" Dec 12 17:48:39.973552 containerd[1498]: 2025-12-12 17:48:39.913 [INFO][3804] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc" HandleID="k8s-pod-network.1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc" Workload="localhost-k8s-whisker--775b66bf7b--w8t5k-eth0" Dec 12 17:48:39.973769 containerd[1498]: 2025-12-12 17:48:39.913 [INFO][3804] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc" HandleID="k8s-pod-network.1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc" Workload="localhost-k8s-whisker--775b66bf7b--w8t5k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a1550), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-775b66bf7b-w8t5k", "timestamp":"2025-12-12 17:48:39.913635601 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:48:39.973769 containerd[1498]: 2025-12-12 17:48:39.913 [INFO][3804] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:48:39.973769 containerd[1498]: 2025-12-12 17:48:39.914 [INFO][3804] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:48:39.973769 containerd[1498]: 2025-12-12 17:48:39.914 [INFO][3804] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:48:39.973769 containerd[1498]: 2025-12-12 17:48:39.924 [INFO][3804] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc" host="localhost" Dec 12 17:48:39.973769 containerd[1498]: 2025-12-12 17:48:39.930 [INFO][3804] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:48:39.973769 containerd[1498]: 2025-12-12 17:48:39.934 [INFO][3804] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:48:39.973769 containerd[1498]: 2025-12-12 17:48:39.935 [INFO][3804] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:48:39.973769 containerd[1498]: 2025-12-12 17:48:39.937 [INFO][3804] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:48:39.973769 containerd[1498]: 2025-12-12 17:48:39.937 [INFO][3804] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc" host="localhost" Dec 12 17:48:39.973971 containerd[1498]: 2025-12-12 17:48:39.939 [INFO][3804] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc Dec 12 17:48:39.973971 containerd[1498]: 2025-12-12 17:48:39.942 [INFO][3804] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc" host="localhost" Dec 12 17:48:39.973971 containerd[1498]: 2025-12-12 17:48:39.948 [INFO][3804] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc" host="localhost" Dec 12 17:48:39.973971 containerd[1498]: 2025-12-12 17:48:39.948 [INFO][3804] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc" host="localhost" Dec 12 17:48:39.973971 containerd[1498]: 2025-12-12 17:48:39.948 [INFO][3804] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:48:39.973971 containerd[1498]: 2025-12-12 17:48:39.948 [INFO][3804] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc" HandleID="k8s-pod-network.1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc" Workload="localhost-k8s-whisker--775b66bf7b--w8t5k-eth0" Dec 12 17:48:39.974112 containerd[1498]: 2025-12-12 17:48:39.950 [INFO][3790] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc" Namespace="calico-system" Pod="whisker-775b66bf7b-w8t5k" WorkloadEndpoint="localhost-k8s-whisker--775b66bf7b--w8t5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--775b66bf7b--w8t5k-eth0", GenerateName:"whisker-775b66bf7b-", Namespace:"calico-system", SelfLink:"", UID:"89fc1364-299e-46d0-8995-62a57f819657", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"775b66bf7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-775b66bf7b-w8t5k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6ac04fb8407", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:48:39.974112 containerd[1498]: 2025-12-12 17:48:39.950 [INFO][3790] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc" Namespace="calico-system" Pod="whisker-775b66bf7b-w8t5k" WorkloadEndpoint="localhost-k8s-whisker--775b66bf7b--w8t5k-eth0" Dec 12 17:48:39.974194 containerd[1498]: 2025-12-12 17:48:39.950 [INFO][3790] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ac04fb8407 ContainerID="1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc" Namespace="calico-system" Pod="whisker-775b66bf7b-w8t5k" WorkloadEndpoint="localhost-k8s-whisker--775b66bf7b--w8t5k-eth0" Dec 12 17:48:39.974194 containerd[1498]: 2025-12-12 17:48:39.959 [INFO][3790] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc" Namespace="calico-system" Pod="whisker-775b66bf7b-w8t5k" WorkloadEndpoint="localhost-k8s-whisker--775b66bf7b--w8t5k-eth0" Dec 12 17:48:39.974233 containerd[1498]: 2025-12-12 17:48:39.960 [INFO][3790] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc" Namespace="calico-system" Pod="whisker-775b66bf7b-w8t5k" WorkloadEndpoint="localhost-k8s-whisker--775b66bf7b--w8t5k-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--775b66bf7b--w8t5k-eth0", GenerateName:"whisker-775b66bf7b-", Namespace:"calico-system", SelfLink:"", UID:"89fc1364-299e-46d0-8995-62a57f819657", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 48, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"775b66bf7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc", Pod:"whisker-775b66bf7b-w8t5k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6ac04fb8407", MAC:"c2:fc:b5:13:8b:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:48:39.974285 containerd[1498]: 2025-12-12 17:48:39.971 [INFO][3790] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc" Namespace="calico-system" Pod="whisker-775b66bf7b-w8t5k" WorkloadEndpoint="localhost-k8s-whisker--775b66bf7b--w8t5k-eth0" Dec 12 17:48:40.017939 containerd[1498]: time="2025-12-12T17:48:40.017890020Z" level=info msg="connecting to shim 1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc" address="unix:///run/containerd/s/0481a01b5f16c73efcc3f19b99b41874010f22991395e2913c7103d6bff2d0cf" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:48:40.056913 systemd[1]: Started cri-containerd-1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc.scope - libcontainer container 1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc. 
Dec 12 17:48:40.068981 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:48:40.102565 containerd[1498]: time="2025-12-12T17:48:40.102515560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-775b66bf7b-w8t5k,Uid:89fc1364-299e-46d0-8995-62a57f819657,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e9ca1cfd957d11d5c818daf4115a5ce3079964e3e892d61a078a26fb39f09fc\"" Dec 12 17:48:40.104272 containerd[1498]: time="2025-12-12T17:48:40.104206058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:48:40.311083 kubelet[2647]: I1212 17:48:40.310988 2647 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9790108-193f-4a45-8176-db9ea1c52e1d" path="/var/lib/kubelet/pods/b9790108-193f-4a45-8176-db9ea1c52e1d/volumes" Dec 12 17:48:40.321783 containerd[1498]: time="2025-12-12T17:48:40.321590650Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:48:40.351213 containerd[1498]: time="2025-12-12T17:48:40.351130404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:48:40.351592 containerd[1498]: time="2025-12-12T17:48:40.351177885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 17:48:40.353612 kubelet[2647]: E1212 17:48:40.353495 2647 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:48:40.355028 kubelet[2647]: E1212 17:48:40.354989 2647 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:48:40.360339 kubelet[2647]: E1212 17:48:40.360285 2647 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e411dfaba746472f812751b5f961c7ad,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p2b4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775b66bf7b-w8t5k_calico-system(89fc1364-299e-46d0-8995-62a57f819657): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:48:40.363374 containerd[1498]: time="2025-12-12T17:48:40.363297214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:48:40.423996 kubelet[2647]: I1212 17:48:40.423648 2647 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:48:40.570808 containerd[1498]: time="2025-12-12T17:48:40.570465937Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:48:40.572559 containerd[1498]: time="2025-12-12T17:48:40.572449718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:48:40.572559 containerd[1498]: time="2025-12-12T17:48:40.572515399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 17:48:40.572877 kubelet[2647]: E1212 17:48:40.572809 2647 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:48:40.572877 kubelet[2647]: E1212 17:48:40.572869 2647 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:48:40.573056 kubelet[2647]: E1212 17:48:40.573007 2647 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2b4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775b66bf7b-w8t5k_calico-system(89fc1364-299e-46d0-8995-62a57f819657): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:48:40.574696 kubelet[2647]: E1212 17:48:40.574619 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-775b66bf7b-w8t5k" podUID="89fc1364-299e-46d0-8995-62a57f819657" Dec 12 17:48:41.425757 kubelet[2647]: E1212 17:48:41.425188 2647 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-775b66bf7b-w8t5k" podUID="89fc1364-299e-46d0-8995-62a57f819657" Dec 12 17:48:42.018908 systemd-networkd[1409]: cali6ac04fb8407: Gained IPv6LL Dec 12 17:48:43.833911 kubelet[2647]: I1212 17:48:43.833848 2647 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:48:44.717466 systemd-networkd[1409]: vxlan.calico: Link UP Dec 12 17:48:44.717476 systemd-networkd[1409]: vxlan.calico: Gained carrier Dec 12 17:48:46.178865 systemd-networkd[1409]: vxlan.calico: Gained IPv6LL Dec 12 17:48:46.280751 systemd[1]: Started sshd@7-10.0.0.131:22-10.0.0.1:39590.service - OpenSSH per-connection server daemon (10.0.0.1:39590). Dec 12 17:48:46.309266 containerd[1498]: time="2025-12-12T17:48:46.309225991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pjknm,Uid:d3e06765-0219-4a57-abc4-db29c031f701,Namespace:calico-system,Attempt:0,}" Dec 12 17:48:46.351864 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 39590 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:48:46.353472 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:48:46.360943 systemd-logind[1476]: New session 8 of user core. Dec 12 17:48:46.365804 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 12 17:48:46.423653 systemd-networkd[1409]: caliafcb8cc8818: Link UP Dec 12 17:48:46.423829 systemd-networkd[1409]: caliafcb8cc8818: Gained carrier Dec 12 17:48:46.439648 containerd[1498]: 2025-12-12 17:48:46.355 [INFO][4193] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--pjknm-eth0 csi-node-driver- calico-system d3e06765-0219-4a57-abc4-db29c031f701 714 0 2025-12-12 17:48:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-pjknm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliafcb8cc8818 [] [] }} ContainerID="58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409" Namespace="calico-system" Pod="csi-node-driver-pjknm" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjknm-" Dec 12 17:48:46.439648 containerd[1498]: 2025-12-12 17:48:46.355 [INFO][4193] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409" Namespace="calico-system" Pod="csi-node-driver-pjknm" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjknm-eth0" Dec 12 17:48:46.439648 containerd[1498]: 2025-12-12 17:48:46.384 [INFO][4209] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409" HandleID="k8s-pod-network.58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409" Workload="localhost-k8s-csi--node--driver--pjknm-eth0" Dec 12 17:48:46.439907 containerd[1498]: 2025-12-12 17:48:46.384 [INFO][4209] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409" HandleID="k8s-pod-network.58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409" Workload="localhost-k8s-csi--node--driver--pjknm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c34c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-pjknm", "timestamp":"2025-12-12 17:48:46.384651548 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:48:46.439907 containerd[1498]: 2025-12-12 17:48:46.384 [INFO][4209] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:48:46.439907 containerd[1498]: 2025-12-12 17:48:46.384 [INFO][4209] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:48:46.439907 containerd[1498]: 2025-12-12 17:48:46.384 [INFO][4209] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:48:46.439907 containerd[1498]: 2025-12-12 17:48:46.395 [INFO][4209] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409" host="localhost" Dec 12 17:48:46.439907 containerd[1498]: 2025-12-12 17:48:46.399 [INFO][4209] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:48:46.439907 containerd[1498]: 2025-12-12 17:48:46.403 [INFO][4209] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:48:46.439907 containerd[1498]: 2025-12-12 17:48:46.405 [INFO][4209] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:48:46.439907 containerd[1498]: 2025-12-12 17:48:46.407 [INFO][4209] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:48:46.439907 containerd[1498]: 2025-12-12 17:48:46.407 [INFO][4209] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409" host="localhost" Dec 12 17:48:46.440384 containerd[1498]: 2025-12-12 17:48:46.408 [INFO][4209] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409 Dec 12 17:48:46.440384 containerd[1498]: 2025-12-12 17:48:46.411 [INFO][4209] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409" host="localhost" Dec 12 17:48:46.440384 containerd[1498]: 2025-12-12 17:48:46.417 [INFO][4209] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409" host="localhost" Dec 12 17:48:46.440384 containerd[1498]: 2025-12-12 17:48:46.417 [INFO][4209] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409" host="localhost" Dec 12 17:48:46.440384 containerd[1498]: 2025-12-12 17:48:46.417 [INFO][4209] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:48:46.440384 containerd[1498]: 2025-12-12 17:48:46.417 [INFO][4209] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409" HandleID="k8s-pod-network.58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409" Workload="localhost-k8s-csi--node--driver--pjknm-eth0" Dec 12 17:48:46.440500 containerd[1498]: 2025-12-12 17:48:46.420 [INFO][4193] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409" Namespace="calico-system" Pod="csi-node-driver-pjknm" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjknm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pjknm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3e06765-0219-4a57-abc4-db29c031f701", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 48, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-pjknm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliafcb8cc8818", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:48:46.440552 containerd[1498]: 2025-12-12 17:48:46.420 [INFO][4193] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409" Namespace="calico-system" Pod="csi-node-driver-pjknm" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjknm-eth0" Dec 12 17:48:46.440552 containerd[1498]: 2025-12-12 17:48:46.420 [INFO][4193] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliafcb8cc8818 ContainerID="58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409" Namespace="calico-system" Pod="csi-node-driver-pjknm" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjknm-eth0" Dec 12 17:48:46.440552 containerd[1498]: 2025-12-12 17:48:46.423 [INFO][4193] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409" Namespace="calico-system" Pod="csi-node-driver-pjknm" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjknm-eth0" Dec 12 17:48:46.440609 containerd[1498]: 2025-12-12 17:48:46.423 [INFO][4193] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409" Namespace="calico-system" Pod="csi-node-driver-pjknm" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--pjknm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pjknm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d3e06765-0219-4a57-abc4-db29c031f701", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 48, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409", Pod:"csi-node-driver-pjknm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliafcb8cc8818", MAC:"4e:22:bf:81:32:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:48:46.441631 containerd[1498]: 2025-12-12 17:48:46.436 [INFO][4193] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409" Namespace="calico-system" Pod="csi-node-driver-pjknm" WorkloadEndpoint="localhost-k8s-csi--node--driver--pjknm-eth0" Dec 12 17:48:46.471501 containerd[1498]: time="2025-12-12T17:48:46.471459127Z" level=info msg="connecting to shim 58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409" address="unix:///run/containerd/s/9eadbbfa516facea232366edaa598dfa4649b0e91a329cc41dff9b3a1b9d4009" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:48:46.500873 systemd[1]: Started cri-containerd-58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409.scope - libcontainer container 58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409. Dec 12 17:48:46.514770 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:48:46.528611 containerd[1498]: time="2025-12-12T17:48:46.528575719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pjknm,Uid:d3e06765-0219-4a57-abc4-db29c031f701,Namespace:calico-system,Attempt:0,} returns sandbox id \"58cc548487206c0e347e8316b29505a99101d6bd674e01532983a1e845ddc409\"" Dec 12 17:48:46.530451 containerd[1498]: time="2025-12-12T17:48:46.530428736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:48:46.542965 sshd[4214]: Connection closed by 10.0.0.1 port 39590 Dec 12 17:48:46.543255 sshd-session[4191]: pam_unix(sshd:session): session closed for user core Dec 12 17:48:46.546510 systemd[1]: sshd@7-10.0.0.131:22-10.0.0.1:39590.service: Deactivated successfully. Dec 12 17:48:46.549129 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 17:48:46.549850 systemd-logind[1476]: Session 8 logged out. 
Waiting for processes to exit. Dec 12 17:48:46.551070 systemd-logind[1476]: Removed session 8. Dec 12 17:48:46.736341 containerd[1498]: time="2025-12-12T17:48:46.736277503Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:48:46.737209 containerd[1498]: time="2025-12-12T17:48:46.737160991Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:48:46.737272 containerd[1498]: time="2025-12-12T17:48:46.737198511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 17:48:46.737415 kubelet[2647]: E1212 17:48:46.737371 2647 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:48:46.737691 kubelet[2647]: E1212 17:48:46.737425 2647 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:48:46.737691 kubelet[2647]: E1212 17:48:46.737548 2647 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2xb4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-pjknm_calico-system(d3e06765-0219-4a57-abc4-db29c031f701): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:48:46.743182 containerd[1498]: time="2025-12-12T17:48:46.743154044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:48:46.951622 containerd[1498]: time="2025-12-12T17:48:46.951402833Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:48:46.952547 containerd[1498]: time="2025-12-12T17:48:46.952438122Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:48:46.952547 containerd[1498]: time="2025-12-12T17:48:46.952517083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 17:48:46.952741 kubelet[2647]: E1212 17:48:46.952688 2647 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:48:46.952896 kubelet[2647]: E1212 17:48:46.952755 2647 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:48:46.952956 kubelet[2647]: E1212 17:48:46.952884 2647 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2xb4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pjknm_calico-system(d3e06765-0219-4a57-abc4-db29c031f701): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:48:46.954104 kubelet[2647]: E1212 17:48:46.954067 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pjknm" podUID="d3e06765-0219-4a57-abc4-db29c031f701" Dec 12 17:48:47.300773 kubelet[2647]: I1212 17:48:47.300733 2647 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:48:47.307628 containerd[1498]: time="2025-12-12T17:48:47.307579441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dcc76bd57-jj9xs,Uid:6b58ec45-43db-4087-8da6-607a024a3bf6,Namespace:calico-system,Attempt:0,}" Dec 12 17:48:47.421118 systemd-networkd[1409]: cali71c3d9f75f4: Link UP Dec 12 17:48:47.421383 systemd-networkd[1409]: cali71c3d9f75f4: Gained carrier Dec 12 
17:48:47.433848 containerd[1498]: 2025-12-12 17:48:47.348 [INFO][4285] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5dcc76bd57--jj9xs-eth0 calico-kube-controllers-5dcc76bd57- calico-system 6b58ec45-43db-4087-8da6-607a024a3bf6 804 0 2025-12-12 17:48:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5dcc76bd57 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5dcc76bd57-jj9xs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali71c3d9f75f4 [] [] }} ContainerID="3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca" Namespace="calico-system" Pod="calico-kube-controllers-5dcc76bd57-jj9xs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcc76bd57--jj9xs-" Dec 12 17:48:47.433848 containerd[1498]: 2025-12-12 17:48:47.348 [INFO][4285] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca" Namespace="calico-system" Pod="calico-kube-controllers-5dcc76bd57-jj9xs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcc76bd57--jj9xs-eth0" Dec 12 17:48:47.433848 containerd[1498]: 2025-12-12 17:48:47.376 [INFO][4301] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca" HandleID="k8s-pod-network.3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca" Workload="localhost-k8s-calico--kube--controllers--5dcc76bd57--jj9xs-eth0" Dec 12 17:48:47.434315 containerd[1498]: 2025-12-12 17:48:47.376 [INFO][4301] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca" HandleID="k8s-pod-network.3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca" Workload="localhost-k8s-calico--kube--controllers--5dcc76bd57--jj9xs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5dcc76bd57-jj9xs", "timestamp":"2025-12-12 17:48:47.376580205 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:48:47.434315 containerd[1498]: 2025-12-12 17:48:47.376 [INFO][4301] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:48:47.434315 containerd[1498]: 2025-12-12 17:48:47.376 [INFO][4301] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:48:47.434315 containerd[1498]: 2025-12-12 17:48:47.377 [INFO][4301] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:48:47.434315 containerd[1498]: 2025-12-12 17:48:47.388 [INFO][4301] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca" host="localhost" Dec 12 17:48:47.434315 containerd[1498]: 2025-12-12 17:48:47.392 [INFO][4301] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:48:47.434315 containerd[1498]: 2025-12-12 17:48:47.396 [INFO][4301] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:48:47.434315 containerd[1498]: 2025-12-12 17:48:47.399 [INFO][4301] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:48:47.434315 containerd[1498]: 2025-12-12 17:48:47.404 [INFO][4301] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:48:47.434315 containerd[1498]: 2025-12-12 17:48:47.404 [INFO][4301] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca" host="localhost" Dec 12 17:48:47.434796 containerd[1498]: 2025-12-12 17:48:47.406 [INFO][4301] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca Dec 12 17:48:47.434796 containerd[1498]: 2025-12-12 17:48:47.411 [INFO][4301] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca" host="localhost" Dec 12 17:48:47.434796 containerd[1498]: 2025-12-12 17:48:47.416 [INFO][4301] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca" host="localhost" Dec 12 17:48:47.434796 containerd[1498]: 2025-12-12 17:48:47.416 [INFO][4301] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca" host="localhost" Dec 12 17:48:47.434796 containerd[1498]: 2025-12-12 17:48:47.416 [INFO][4301] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:48:47.434796 containerd[1498]: 2025-12-12 17:48:47.416 [INFO][4301] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca" HandleID="k8s-pod-network.3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca" Workload="localhost-k8s-calico--kube--controllers--5dcc76bd57--jj9xs-eth0" Dec 12 17:48:47.434989 containerd[1498]: 2025-12-12 17:48:47.419 [INFO][4285] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca" Namespace="calico-system" Pod="calico-kube-controllers-5dcc76bd57-jj9xs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcc76bd57--jj9xs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5dcc76bd57--jj9xs-eth0", GenerateName:"calico-kube-controllers-5dcc76bd57-", Namespace:"calico-system", SelfLink:"", UID:"6b58ec45-43db-4087-8da6-607a024a3bf6", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 48, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dcc76bd57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5dcc76bd57-jj9xs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali71c3d9f75f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:48:47.435066 containerd[1498]: 2025-12-12 17:48:47.419 [INFO][4285] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca" Namespace="calico-system" Pod="calico-kube-controllers-5dcc76bd57-jj9xs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcc76bd57--jj9xs-eth0" Dec 12 17:48:47.435066 containerd[1498]: 2025-12-12 17:48:47.419 [INFO][4285] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71c3d9f75f4 ContainerID="3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca" Namespace="calico-system" Pod="calico-kube-controllers-5dcc76bd57-jj9xs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcc76bd57--jj9xs-eth0" Dec 12 17:48:47.435066 containerd[1498]: 2025-12-12 17:48:47.420 [INFO][4285] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca" Namespace="calico-system" Pod="calico-kube-controllers-5dcc76bd57-jj9xs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcc76bd57--jj9xs-eth0" Dec 12 17:48:47.435156 containerd[1498]: 2025-12-12 17:48:47.420 [INFO][4285] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca" Namespace="calico-system" Pod="calico-kube-controllers-5dcc76bd57-jj9xs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcc76bd57--jj9xs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5dcc76bd57--jj9xs-eth0", GenerateName:"calico-kube-controllers-5dcc76bd57-", Namespace:"calico-system", SelfLink:"", UID:"6b58ec45-43db-4087-8da6-607a024a3bf6", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 48, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dcc76bd57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca", Pod:"calico-kube-controllers-5dcc76bd57-jj9xs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali71c3d9f75f4", MAC:"32:87:58:ac:be:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:48:47.435297 containerd[1498]: 2025-12-12 17:48:47.428 [INFO][4285] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca" Namespace="calico-system" Pod="calico-kube-controllers-5dcc76bd57-jj9xs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcc76bd57--jj9xs-eth0" Dec 12 17:48:47.442434 kubelet[2647]: E1212 17:48:47.442382 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pjknm" podUID="d3e06765-0219-4a57-abc4-db29c031f701" Dec 12 17:48:47.473282 containerd[1498]: time="2025-12-12T17:48:47.473184571Z" level=info msg="connecting to shim 3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca" 
address="unix:///run/containerd/s/2491b2f064bbc058881f64c2dd62f48049bf5e75e9f1aca888154c681d732c57" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:48:47.505091 systemd[1]: Started cri-containerd-3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca.scope - libcontainer container 3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca. Dec 12 17:48:47.524372 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:48:47.567807 containerd[1498]: time="2025-12-12T17:48:47.567663238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dcc76bd57-jj9xs,Uid:6b58ec45-43db-4087-8da6-607a024a3bf6,Namespace:calico-system,Attempt:0,} returns sandbox id \"3575d15c9c582463b0bdc30a881b7988fa29419d881da221e9ca714704fd71ca\"" Dec 12 17:48:47.572153 containerd[1498]: time="2025-12-12T17:48:47.572113717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:48:47.769874 containerd[1498]: time="2025-12-12T17:48:47.769781446Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:48:47.770688 containerd[1498]: time="2025-12-12T17:48:47.770653774Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:48:47.770763 containerd[1498]: time="2025-12-12T17:48:47.770732855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 17:48:47.771208 kubelet[2647]: E1212 17:48:47.770896 2647 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:48:47.771208 kubelet[2647]: E1212 17:48:47.770954 2647 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:48:47.771208 kubelet[2647]: E1212 17:48:47.771130 2647 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8v9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5dcc76bd57-jj9xs_calico-system(6b58ec45-43db-4087-8da6-607a024a3bf6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:48:47.772597 kubelet[2647]: E1212 17:48:47.772533 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc76bd57-jj9xs" podUID="6b58ec45-43db-4087-8da6-607a024a3bf6" Dec 12 17:48:48.290873 systemd-networkd[1409]: caliafcb8cc8818: Gained IPv6LL Dec 12 17:48:48.308474 
containerd[1498]: time="2025-12-12T17:48:48.308238096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f498ff75-dpfb7,Uid:bbff352b-faf1-4f78-9dff-cf0cfd8ed7b8,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:48:48.308474 containerd[1498]: time="2025-12-12T17:48:48.308284256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f498ff75-4h6kh,Uid:1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:48:48.424463 systemd-networkd[1409]: cali655fd1035d0: Link UP Dec 12 17:48:48.425371 systemd-networkd[1409]: cali655fd1035d0: Gained carrier Dec 12 17:48:48.442639 containerd[1498]: 2025-12-12 17:48:48.348 [INFO][4415] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84f498ff75--dpfb7-eth0 calico-apiserver-84f498ff75- calico-apiserver bbff352b-faf1-4f78-9dff-cf0cfd8ed7b8 806 0 2025-12-12 17:48:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84f498ff75 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84f498ff75-dpfb7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali655fd1035d0 [] [] }} ContainerID="09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4" Namespace="calico-apiserver" Pod="calico-apiserver-84f498ff75-dpfb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f498ff75--dpfb7-" Dec 12 17:48:48.442639 containerd[1498]: 2025-12-12 17:48:48.348 [INFO][4415] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4" Namespace="calico-apiserver" Pod="calico-apiserver-84f498ff75-dpfb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f498ff75--dpfb7-eth0" Dec 12 17:48:48.442639 containerd[1498]: 2025-12-12 17:48:48.379 [INFO][4443] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4" HandleID="k8s-pod-network.09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4" Workload="localhost-k8s-calico--apiserver--84f498ff75--dpfb7-eth0" Dec 12 17:48:48.444163 containerd[1498]: 2025-12-12 17:48:48.380 [INFO][4443] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4" HandleID="k8s-pod-network.09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4" Workload="localhost-k8s-calico--apiserver--84f498ff75--dpfb7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400044e230), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84f498ff75-dpfb7", "timestamp":"2025-12-12 17:48:48.379485145 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:48:48.444163 containerd[1498]: 2025-12-12 17:48:48.380 [INFO][4443] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:48:48.444163 containerd[1498]: 2025-12-12 17:48:48.380 [INFO][4443] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:48:48.444163 containerd[1498]: 2025-12-12 17:48:48.380 [INFO][4443] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:48:48.444163 containerd[1498]: 2025-12-12 17:48:48.391 [INFO][4443] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4" host="localhost" Dec 12 17:48:48.444163 containerd[1498]: 2025-12-12 17:48:48.398 [INFO][4443] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:48:48.444163 containerd[1498]: 2025-12-12 17:48:48.402 [INFO][4443] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:48:48.444163 containerd[1498]: 2025-12-12 17:48:48.404 [INFO][4443] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:48:48.444163 containerd[1498]: 2025-12-12 17:48:48.406 [INFO][4443] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:48:48.444163 containerd[1498]: 2025-12-12 17:48:48.406 [INFO][4443] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4" host="localhost" Dec 12 17:48:48.444378 containerd[1498]: 2025-12-12 17:48:48.408 [INFO][4443] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4 Dec 12 17:48:48.444378 containerd[1498]: 2025-12-12 17:48:48.412 [INFO][4443] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4" host="localhost" Dec 12 17:48:48.444378 containerd[1498]: 2025-12-12 17:48:48.417 [INFO][4443] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4" host="localhost" Dec 12 17:48:48.444378 containerd[1498]: 2025-12-12 17:48:48.417 [INFO][4443] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4" host="localhost" Dec 12 17:48:48.444378 containerd[1498]: 2025-12-12 17:48:48.417 [INFO][4443] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
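The "Trying affinity for 192.168.88.128/26" and "Affinity is confirmed and block has been loaded" lines are Calico's block-affinity IPAM at work: this node owns the 64-address block 192.168.88.128/26 and hands out one address per new workload endpoint (.131 went to calico-kube-controllers earlier, .132 is claimed here). A standard-library sketch of that block, purely to make the arithmetic concrete:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The affinity block named in the IPAM entries above.
        block := netip.MustParsePrefix("192.168.88.128/26")
        fmt.Println("addresses in block:", 1<<(32-block.Bits())) // 64

        // Walk the first few addresses; the assignments in the log
        // (.131, .132, ...) advance through this range one endpoint at a time.
        addr := block.Addr()
        for i := 0; i < 8; i++ {
            fmt.Println(addr)
            addr = addr.Next()
        }
    }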
Dec 12 17:48:48.444378 containerd[1498]: 2025-12-12 17:48:48.417 [INFO][4443] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4" HandleID="k8s-pod-network.09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4" Workload="localhost-k8s-calico--apiserver--84f498ff75--dpfb7-eth0" Dec 12 17:48:48.444488 containerd[1498]: 2025-12-12 17:48:48.422 [INFO][4415] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4" Namespace="calico-apiserver" Pod="calico-apiserver-84f498ff75-dpfb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f498ff75--dpfb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84f498ff75--dpfb7-eth0", GenerateName:"calico-apiserver-84f498ff75-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbff352b-faf1-4f78-9dff-cf0cfd8ed7b8", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84f498ff75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84f498ff75-dpfb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali655fd1035d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:48:48.444537 containerd[1498]: 2025-12-12 17:48:48.422 [INFO][4415] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4" Namespace="calico-apiserver" Pod="calico-apiserver-84f498ff75-dpfb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f498ff75--dpfb7-eth0" Dec 12 17:48:48.444537 containerd[1498]: 2025-12-12 17:48:48.422 [INFO][4415] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali655fd1035d0 ContainerID="09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4" Namespace="calico-apiserver" Pod="calico-apiserver-84f498ff75-dpfb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f498ff75--dpfb7-eth0" Dec 12 17:48:48.444537 containerd[1498]: 2025-12-12 17:48:48.425 [INFO][4415] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4" Namespace="calico-apiserver" Pod="calico-apiserver-84f498ff75-dpfb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f498ff75--dpfb7-eth0" Dec 12 17:48:48.444592 containerd[1498]: 2025-12-12 17:48:48.426 [INFO][4415] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4" Namespace="calico-apiserver" Pod="calico-apiserver-84f498ff75-dpfb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f498ff75--dpfb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84f498ff75--dpfb7-eth0", GenerateName:"calico-apiserver-84f498ff75-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbff352b-faf1-4f78-9dff-cf0cfd8ed7b8", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84f498ff75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4", Pod:"calico-apiserver-84f498ff75-dpfb7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali655fd1035d0", MAC:"7a:5d:f9:7d:a1:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:48:48.444654 containerd[1498]: 2025-12-12 17:48:48.438 [INFO][4415] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4" Namespace="calico-apiserver" Pod="calico-apiserver-84f498ff75-dpfb7" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f498ff75--dpfb7-eth0" Dec 12 17:48:48.447246 kubelet[2647]: E1212 17:48:48.447135 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pjknm" podUID="d3e06765-0219-4a57-abc4-db29c031f701" Dec 12 17:48:48.448432 kubelet[2647]: E1212 17:48:48.448150 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc76bd57-jj9xs" podUID="6b58ec45-43db-4087-8da6-607a024a3bf6" Dec 12 17:48:48.477820 containerd[1498]: time="2025-12-12T17:48:48.477775305Z" level=info msg="connecting to shim 09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4" address="unix:///run/containerd/s/e0c8b843d745de7c5f44cf442ea0e656c315efef7d3cb44106a471e6a6439e26" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:48:48.499942 systemd[1]: Started cri-containerd-09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4.scope - libcontainer container 09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4. Dec 12 17:48:48.517550 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:48:48.528317 systemd-networkd[1409]: calid51c5ef7f7e: Link UP Dec 12 17:48:48.529190 systemd-networkd[1409]: calid51c5ef7f7e: Gained carrier Dec 12 17:48:48.547401 containerd[1498]: 2025-12-12 17:48:48.361 [INFO][4421] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--84f498ff75--4h6kh-eth0 calico-apiserver-84f498ff75- calico-apiserver 1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7 809 0 2025-12-12 17:48:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84f498ff75 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-84f498ff75-4h6kh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid51c5ef7f7e [] [] }} ContainerID="fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b" Namespace="calico-apiserver" Pod="calico-apiserver-84f498ff75-4h6kh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f498ff75--4h6kh-" Dec 12 17:48:48.547401 containerd[1498]: 2025-12-12 17:48:48.361 [INFO][4421] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b" Namespace="calico-apiserver" Pod="calico-apiserver-84f498ff75-4h6kh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f498ff75--4h6kh-eth0" Dec 12 17:48:48.547401 containerd[1498]: 2025-12-12 17:48:48.395 [INFO][4450] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b" HandleID="k8s-pod-network.fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b" Workload="localhost-k8s-calico--apiserver--84f498ff75--4h6kh-eth0" Dec 12 17:48:48.548113 containerd[1498]: 2025-12-12 17:48:48.395 [INFO][4450] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b" HandleID="k8s-pod-network.fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b" Workload="localhost-k8s-calico--apiserver--84f498ff75--4h6kh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034b590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-84f498ff75-4h6kh", "timestamp":"2025-12-12 17:48:48.395470361 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:48:48.548113 containerd[1498]: 2025-12-12 17:48:48.395 [INFO][4450] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:48:48.548113 containerd[1498]: 2025-12-12 17:48:48.418 [INFO][4450] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:48:48.548113 containerd[1498]: 2025-12-12 17:48:48.418 [INFO][4450] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:48:48.548113 containerd[1498]: 2025-12-12 17:48:48.492 [INFO][4450] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b" host="localhost" Dec 12 17:48:48.548113 containerd[1498]: 2025-12-12 17:48:48.497 [INFO][4450] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:48:48.548113 containerd[1498]: 2025-12-12 17:48:48.503 [INFO][4450] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:48:48.548113 containerd[1498]: 2025-12-12 17:48:48.505 [INFO][4450] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:48:48.548113 containerd[1498]: 2025-12-12 17:48:48.508 [INFO][4450] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:48:48.548113 containerd[1498]: 2025-12-12 17:48:48.508 [INFO][4450] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b" host="localhost" Dec 12 17:48:48.548339 containerd[1498]: 2025-12-12 17:48:48.510 [INFO][4450] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b Dec 12 17:48:48.548339 containerd[1498]: 2025-12-12 17:48:48.515 [INFO][4450] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b" host="localhost" Dec 12 17:48:48.548339 containerd[1498]: 2025-12-12 17:48:48.522 [INFO][4450] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b" host="localhost" Dec 12 17:48:48.548339 containerd[1498]: 2025-12-12 17:48:48.522 [INFO][4450] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b" host="localhost" Dec 12 17:48:48.548339 containerd[1498]: 2025-12-12 17:48:48.522 [INFO][4450] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:48:48.548339 containerd[1498]: 2025-12-12 17:48:48.522 [INFO][4450] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b" HandleID="k8s-pod-network.fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b" Workload="localhost-k8s-calico--apiserver--84f498ff75--4h6kh-eth0" Dec 12 17:48:48.548463 containerd[1498]: 2025-12-12 17:48:48.524 [INFO][4421] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b" Namespace="calico-apiserver" Pod="calico-apiserver-84f498ff75-4h6kh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f498ff75--4h6kh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84f498ff75--4h6kh-eth0", GenerateName:"calico-apiserver-84f498ff75-", Namespace:"calico-apiserver", SelfLink:"", UID:"1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84f498ff75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-84f498ff75-4h6kh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid51c5ef7f7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:48:48.548521 containerd[1498]: 2025-12-12 17:48:48.524 [INFO][4421] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b" Namespace="calico-apiserver" Pod="calico-apiserver-84f498ff75-4h6kh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f498ff75--4h6kh-eth0" Dec 12 17:48:48.548521 containerd[1498]: 2025-12-12 17:48:48.524 [INFO][4421] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid51c5ef7f7e ContainerID="fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b" Namespace="calico-apiserver" Pod="calico-apiserver-84f498ff75-4h6kh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f498ff75--4h6kh-eth0" Dec 12 17:48:48.548521 containerd[1498]: 2025-12-12 17:48:48.528 [INFO][4421] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b" Namespace="calico-apiserver" Pod="calico-apiserver-84f498ff75-4h6kh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f498ff75--4h6kh-eth0" Dec 12 17:48:48.548582 containerd[1498]: 2025-12-12 17:48:48.528 [INFO][4421] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b" Namespace="calico-apiserver" Pod="calico-apiserver-84f498ff75-4h6kh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f498ff75--4h6kh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--84f498ff75--4h6kh-eth0", GenerateName:"calico-apiserver-84f498ff75-", Namespace:"calico-apiserver", SelfLink:"", UID:"1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 48, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84f498ff75", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b", Pod:"calico-apiserver-84f498ff75-4h6kh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid51c5ef7f7e", MAC:"fe:2a:4f:d5:48:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:48:48.548631 containerd[1498]: 2025-12-12 17:48:48.543 [INFO][4421] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b" Namespace="calico-apiserver" Pod="calico-apiserver-84f498ff75-4h6kh" WorkloadEndpoint="localhost-k8s-calico--apiserver--84f498ff75--4h6kh-eth0" Dec 12 17:48:48.561146 containerd[1498]: time="2025-12-12T17:48:48.560988376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f498ff75-dpfb7,Uid:bbff352b-faf1-4f78-9dff-cf0cfd8ed7b8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"09137aca61fbe8620f6feb61620be6f56c746a51ddffc9a5634b47d318a313a4\"" Dec 12 17:48:48.564343 containerd[1498]: time="2025-12-12T17:48:48.563371916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:48:48.573943 containerd[1498]: time="2025-12-12T17:48:48.573899206Z" level=info msg="connecting to shim fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b" address="unix:///run/containerd/s/a94fb813ee35c09b8b1b172d32e93e6934d5c8d0e1c18fc3d5755b74cf2b2699" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:48:48.597886 systemd[1]: Started cri-containerd-fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b.scope - libcontainer container fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b. 
Dec 12 17:48:48.608941 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:48:48.632728 containerd[1498]: time="2025-12-12T17:48:48.632665468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84f498ff75-4h6kh,Uid:1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fcc581f308df138f570a643ce031fecf05c8d1897bf0db01481f8eb7d31f567b\"" Dec 12 17:48:48.674894 systemd-networkd[1409]: cali71c3d9f75f4: Gained IPv6LL Dec 12 17:48:48.752627 containerd[1498]: time="2025-12-12T17:48:48.752583133Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:48:48.753661 containerd[1498]: time="2025-12-12T17:48:48.753599582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:48:48.753747 containerd[1498]: time="2025-12-12T17:48:48.753684502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:48:48.754363 kubelet[2647]: E1212 17:48:48.753947 2647 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:48:48.754363 kubelet[2647]: E1212 17:48:48.753990 2647 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:48:48.754363 kubelet[2647]: E1212 17:48:48.754213 2647 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5hxbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84f498ff75-dpfb7_calico-apiserver(bbff352b-faf1-4f78-9dff-cf0cfd8ed7b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:48:48.754532 containerd[1498]: time="2025-12-12T17:48:48.754482949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:48:48.755648 kubelet[2647]: E1212 17:48:48.755590 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f498ff75-dpfb7" podUID="bbff352b-faf1-4f78-9dff-cf0cfd8ed7b8" Dec 12 17:48:48.990271 containerd[1498]: time="2025-12-12T17:48:48.990213404Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:48:48.991301 containerd[1498]: time="2025-12-12T17:48:48.991262453Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:48:48.991427 containerd[1498]: time="2025-12-12T17:48:48.991359173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:48:48.991555 kubelet[2647]: E1212 17:48:48.991505 2647 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:48:48.991877 kubelet[2647]: E1212 17:48:48.991576 2647 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:48:48.991877 kubelet[2647]: E1212 
17:48:48.991764 2647 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhh7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84f498ff75-4h6kh_calico-apiserver(1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:48:48.993066 kubelet[2647]: E1212 17:48:48.992972 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f498ff75-4h6kh" podUID="1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7" Dec 12 17:48:49.308334 containerd[1498]: time="2025-12-12T17:48:49.308170581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nrs7m,Uid:9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe,Namespace:calico-system,Attempt:0,}" Dec 12 17:48:49.414928 systemd-networkd[1409]: calie890af9a749: Link UP Dec 12 17:48:49.415890 systemd-networkd[1409]: calie890af9a749: Gained carrier Dec 12 17:48:49.434125 containerd[1498]: 2025-12-12 17:48:49.354 [INFO][4569] cni-plugin/plugin.go 340: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--nrs7m-eth0 goldmane-666569f655- calico-system 9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe 810 0 2025-12-12 17:48:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-nrs7m eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie890af9a749 [] [] }} ContainerID="e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a" Namespace="calico-system" Pod="goldmane-666569f655-nrs7m" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nrs7m-" Dec 12 17:48:49.434125 containerd[1498]: 2025-12-12 17:48:49.354 [INFO][4569] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a" Namespace="calico-system" Pod="goldmane-666569f655-nrs7m" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nrs7m-eth0" Dec 12 17:48:49.434125 containerd[1498]: 2025-12-12 17:48:49.376 [INFO][4586] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a" HandleID="k8s-pod-network.e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a" Workload="localhost-k8s-goldmane--666569f655--nrs7m-eth0" Dec 12 17:48:49.434597 containerd[1498]: 2025-12-12 17:48:49.377 [INFO][4586] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a" HandleID="k8s-pod-network.e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a" Workload="localhost-k8s-goldmane--666569f655--nrs7m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004310c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-nrs7m", "timestamp":"2025-12-12 17:48:49.376875635 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:48:49.434597 containerd[1498]: 2025-12-12 17:48:49.377 [INFO][4586] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:48:49.434597 containerd[1498]: 2025-12-12 17:48:49.377 [INFO][4586] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:48:49.434597 containerd[1498]: 2025-12-12 17:48:49.377 [INFO][4586] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:48:49.434597 containerd[1498]: 2025-12-12 17:48:49.386 [INFO][4586] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a" host="localhost" Dec 12 17:48:49.434597 containerd[1498]: 2025-12-12 17:48:49.392 [INFO][4586] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:48:49.434597 containerd[1498]: 2025-12-12 17:48:49.396 [INFO][4586] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:48:49.434597 containerd[1498]: 2025-12-12 17:48:49.398 [INFO][4586] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:48:49.434597 containerd[1498]: 2025-12-12 17:48:49.400 [INFO][4586] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:48:49.434597 containerd[1498]: 2025-12-12 17:48:49.400 [INFO][4586] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a" host="localhost" Dec 12 17:48:49.434828 containerd[1498]: 2025-12-12 17:48:49.401 [INFO][4586] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a Dec 12 17:48:49.434828 containerd[1498]: 2025-12-12 17:48:49.405 [INFO][4586] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a" host="localhost" Dec 12 17:48:49.434828 containerd[1498]: 2025-12-12 17:48:49.410 [INFO][4586] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a" host="localhost" Dec 12 17:48:49.434828 containerd[1498]: 2025-12-12 17:48:49.410 [INFO][4586] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a" host="localhost" Dec 12 17:48:49.434828 containerd[1498]: 2025-12-12 17:48:49.410 [INFO][4586] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
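Each "Gained IPv6LL" message from systemd-networkd in this section (caliafcb8cc8818, cali71c3d9f75f4, and the newer cali* interfaces) just means the veth picked up an IPv6 link-local address. With the kernel's default EUI-64 generation (systemd-networkd can instead be configured for stable-privacy addresses), that address is derived from whatever MAC the interface carries: flip the universal/local bit of the first octet and splice ff:fe into the middle. The sketch below shows the derivation, using one of the MACs dumped in the endpoint objects above purely as sample input; the log does not say which interface's MAC produced the addresses systemd-networkd saw.

    package main

    import (
        "fmt"
        "net"
    )

    // linkLocalFromMAC derives the classic EUI-64 IPv6 link-local address for a
    // 48-bit MAC: fe80:: prefix, first octet with the universal/local bit
    // flipped, and ff:fe spliced between the two MAC halves. Illustrative only.
    func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
        ip := make(net.IP, net.IPv6len)
        ip[0], ip[1] = 0xfe, 0x80
        ip[8] = mac[0] ^ 0x02
        ip[9], ip[10], ip[11] = mac[1], mac[2], 0xff
        ip[12], ip[13], ip[14], ip[15] = 0xfe, mac[3], mac[4], mac[5]
        return ip
    }

    func main() {
        mac, err := net.ParseMAC("fe:2a:4f:d5:48:66") // MAC from the 4h6kh endpoint dump above
        if err != nil {
            panic(err)
        }
        fmt.Println(linkLocalFromMAC(mac)) // fe80::fc2a:4fff:fed5:4866
    }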
Dec 12 17:48:49.434828 containerd[1498]: 2025-12-12 17:48:49.410 [INFO][4586] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a" HandleID="k8s-pod-network.e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a" Workload="localhost-k8s-goldmane--666569f655--nrs7m-eth0" Dec 12 17:48:49.434940 containerd[1498]: 2025-12-12 17:48:49.412 [INFO][4569] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a" Namespace="calico-system" Pod="goldmane-666569f655-nrs7m" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nrs7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--nrs7m-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-nrs7m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie890af9a749", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:48:49.434940 containerd[1498]: 2025-12-12 17:48:49.413 [INFO][4569] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a" Namespace="calico-system" Pod="goldmane-666569f655-nrs7m" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nrs7m-eth0" Dec 12 17:48:49.435008 containerd[1498]: 2025-12-12 17:48:49.413 [INFO][4569] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie890af9a749 ContainerID="e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a" Namespace="calico-system" Pod="goldmane-666569f655-nrs7m" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nrs7m-eth0" Dec 12 17:48:49.435008 containerd[1498]: 2025-12-12 17:48:49.416 [INFO][4569] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a" Namespace="calico-system" Pod="goldmane-666569f655-nrs7m" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nrs7m-eth0" Dec 12 17:48:49.435047 containerd[1498]: 2025-12-12 17:48:49.417 [INFO][4569] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a" Namespace="calico-system" Pod="goldmane-666569f655-nrs7m" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nrs7m-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--nrs7m-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 48, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a", Pod:"goldmane-666569f655-nrs7m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie890af9a749", MAC:"8a:af:5e:15:d0:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:48:49.435094 containerd[1498]: 2025-12-12 17:48:49.430 [INFO][4569] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a" Namespace="calico-system" Pod="goldmane-666569f655-nrs7m" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--nrs7m-eth0" Dec 12 17:48:49.452298 kubelet[2647]: E1212 17:48:49.452205 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc76bd57-jj9xs" podUID="6b58ec45-43db-4087-8da6-607a024a3bf6" Dec 12 17:48:49.452298 kubelet[2647]: E1212 17:48:49.452255 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f498ff75-dpfb7" podUID="bbff352b-faf1-4f78-9dff-cf0cfd8ed7b8" Dec 12 17:48:49.454317 kubelet[2647]: E1212 17:48:49.452854 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f498ff75-4h6kh" podUID="1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7" Dec 12 17:48:49.478303 containerd[1498]: time="2025-12-12T17:48:49.478258002Z" level=info msg="connecting to shim e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a" address="unix:///run/containerd/s/1215026b8d059a9313de6924859045b067945bf27cab5ace95ba7a58540c9c6e" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:48:49.510947 systemd[1]: Started cri-containerd-e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a.scope - libcontainer container e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a. Dec 12 17:48:49.527896 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:48:49.552942 containerd[1498]: time="2025-12-12T17:48:49.552896025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nrs7m,Uid:9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe,Namespace:calico-system,Attempt:0,} returns sandbox id \"e17c9496f8f4d689495b7607c334f2c5aff1da7bc00a8e2f856fd10d1f66bc1a\"" Dec 12 17:48:49.554424 containerd[1498]: time="2025-12-12T17:48:49.554395678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:48:49.699112 systemd-networkd[1409]: cali655fd1035d0: Gained IPv6LL Dec 12 17:48:49.699756 systemd-networkd[1409]: calid51c5ef7f7e: Gained IPv6LL Dec 12 17:48:49.747635 containerd[1498]: time="2025-12-12T17:48:49.747578051Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:48:49.749125 containerd[1498]: time="2025-12-12T17:48:49.749078984Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:48:49.749305 containerd[1498]: time="2025-12-12T17:48:49.749107304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 17:48:49.749371 kubelet[2647]: E1212 17:48:49.749320 2647 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:48:49.749416 kubelet[2647]: E1212 17:48:49.749382 2647 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:48:49.749952 kubelet[2647]: E1212 17:48:49.749530 2647 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qsvqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nrs7m_calico-system(9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:48:49.750800 kubelet[2647]: E1212 17:48:49.750756 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nrs7m" podUID="9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe" Dec 12 17:48:50.309868 containerd[1498]: 
time="2025-12-12T17:48:50.309817372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5mf4q,Uid:e923402f-6f38-4464-85a3-ba4ab295e18c,Namespace:kube-system,Attempt:0,}" Dec 12 17:48:50.309868 containerd[1498]: time="2025-12-12T17:48:50.309817492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c6mwg,Uid:1734759f-91db-4792-a5b1-49dc654f13fd,Namespace:kube-system,Attempt:0,}" Dec 12 17:48:50.424892 systemd-networkd[1409]: cali536cc3d851c: Link UP Dec 12 17:48:50.425544 systemd-networkd[1409]: cali536cc3d851c: Gained carrier Dec 12 17:48:50.443139 containerd[1498]: 2025-12-12 17:48:50.352 [INFO][4659] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--c6mwg-eth0 coredns-674b8bbfcf- kube-system 1734759f-91db-4792-a5b1-49dc654f13fd 799 0 2025-12-12 17:48:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-c6mwg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali536cc3d851c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd" Namespace="kube-system" Pod="coredns-674b8bbfcf-c6mwg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c6mwg-" Dec 12 17:48:50.443139 containerd[1498]: 2025-12-12 17:48:50.352 [INFO][4659] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd" Namespace="kube-system" Pod="coredns-674b8bbfcf-c6mwg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c6mwg-eth0" Dec 12 17:48:50.443139 containerd[1498]: 2025-12-12 17:48:50.381 [INFO][4691] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd" HandleID="k8s-pod-network.08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd" Workload="localhost-k8s-coredns--674b8bbfcf--c6mwg-eth0" Dec 12 17:48:50.443346 containerd[1498]: 2025-12-12 17:48:50.382 [INFO][4691] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd" HandleID="k8s-pod-network.08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd" Workload="localhost-k8s-coredns--674b8bbfcf--c6mwg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c30d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-c6mwg", "timestamp":"2025-12-12 17:48:50.381884961 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:48:50.443346 containerd[1498]: 2025-12-12 17:48:50.382 [INFO][4691] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:48:50.443346 containerd[1498]: 2025-12-12 17:48:50.382 [INFO][4691] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:48:50.443346 containerd[1498]: 2025-12-12 17:48:50.382 [INFO][4691] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:48:50.443346 containerd[1498]: 2025-12-12 17:48:50.393 [INFO][4691] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd" host="localhost" Dec 12 17:48:50.443346 containerd[1498]: 2025-12-12 17:48:50.398 [INFO][4691] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:48:50.443346 containerd[1498]: 2025-12-12 17:48:50.404 [INFO][4691] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:48:50.443346 containerd[1498]: 2025-12-12 17:48:50.405 [INFO][4691] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:48:50.443346 containerd[1498]: 2025-12-12 17:48:50.408 [INFO][4691] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:48:50.443346 containerd[1498]: 2025-12-12 17:48:50.408 [INFO][4691] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd" host="localhost" Dec 12 17:48:50.443691 containerd[1498]: 2025-12-12 17:48:50.409 [INFO][4691] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd Dec 12 17:48:50.443691 containerd[1498]: 2025-12-12 17:48:50.413 [INFO][4691] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd" host="localhost" Dec 12 17:48:50.443691 containerd[1498]: 2025-12-12 17:48:50.420 [INFO][4691] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd" host="localhost" Dec 12 17:48:50.443691 containerd[1498]: 2025-12-12 17:48:50.420 [INFO][4691] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd" host="localhost" Dec 12 17:48:50.443691 containerd[1498]: 2025-12-12 17:48:50.420 [INFO][4691] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:48:50.443691 containerd[1498]: 2025-12-12 17:48:50.420 [INFO][4691] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd" HandleID="k8s-pod-network.08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd" Workload="localhost-k8s-coredns--674b8bbfcf--c6mwg-eth0" Dec 12 17:48:50.443883 containerd[1498]: 2025-12-12 17:48:50.422 [INFO][4659] cni-plugin/k8s.go 418: Populated endpoint ContainerID="08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd" Namespace="kube-system" Pod="coredns-674b8bbfcf-c6mwg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c6mwg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--c6mwg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1734759f-91db-4792-a5b1-49dc654f13fd", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-c6mwg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali536cc3d851c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:48:50.443993 containerd[1498]: 2025-12-12 17:48:50.422 [INFO][4659] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd" Namespace="kube-system" Pod="coredns-674b8bbfcf-c6mwg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c6mwg-eth0" Dec 12 17:48:50.443993 containerd[1498]: 2025-12-12 17:48:50.422 [INFO][4659] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali536cc3d851c ContainerID="08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd" Namespace="kube-system" Pod="coredns-674b8bbfcf-c6mwg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c6mwg-eth0" Dec 12 17:48:50.443993 containerd[1498]: 2025-12-12 17:48:50.425 [INFO][4659] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd" Namespace="kube-system" Pod="coredns-674b8bbfcf-c6mwg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c6mwg-eth0" Dec 12 17:48:50.444064 
containerd[1498]: 2025-12-12 17:48:50.426 [INFO][4659] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd" Namespace="kube-system" Pod="coredns-674b8bbfcf-c6mwg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c6mwg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--c6mwg-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1734759f-91db-4792-a5b1-49dc654f13fd", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd", Pod:"coredns-674b8bbfcf-c6mwg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali536cc3d851c", MAC:"22:2b:a3:89:44:ce", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:48:50.444064 containerd[1498]: 2025-12-12 17:48:50.439 [INFO][4659] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd" Namespace="kube-system" Pod="coredns-674b8bbfcf-c6mwg" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c6mwg-eth0" Dec 12 17:48:50.456067 kubelet[2647]: E1212 17:48:50.455849 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nrs7m" podUID="9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe" Dec 12 17:48:50.456417 kubelet[2647]: E1212 17:48:50.456285 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f498ff75-dpfb7" podUID="bbff352b-faf1-4f78-9dff-cf0cfd8ed7b8" Dec 12 17:48:50.457567 kubelet[2647]: E1212 17:48:50.457486 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f498ff75-4h6kh" podUID="1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7" Dec 12 17:48:50.472598 containerd[1498]: time="2025-12-12T17:48:50.472551822Z" level=info msg="connecting to shim 08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd" address="unix:///run/containerd/s/9706972ff3a08c9619e22b7714950ce3acdbcebffc79454d6528e141d791dd9b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:48:50.505996 systemd[1]: Started cri-containerd-08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd.scope - libcontainer container 08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd. Dec 12 17:48:50.530968 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:48:50.535158 systemd-networkd[1409]: caliad54e92b950: Link UP Dec 12 17:48:50.537904 systemd-networkd[1409]: caliad54e92b950: Gained carrier Dec 12 17:48:50.562820 containerd[1498]: 2025-12-12 17:48:50.366 [INFO][4665] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--5mf4q-eth0 coredns-674b8bbfcf- kube-system e923402f-6f38-4464-85a3-ba4ab295e18c 807 0 2025-12-12 17:48:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-5mf4q eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliad54e92b950 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38" Namespace="kube-system" Pod="coredns-674b8bbfcf-5mf4q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5mf4q-" Dec 12 17:48:50.562820 containerd[1498]: 2025-12-12 17:48:50.366 [INFO][4665] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38" Namespace="kube-system" Pod="coredns-674b8bbfcf-5mf4q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5mf4q-eth0" Dec 12 17:48:50.562820 containerd[1498]: 2025-12-12 17:48:50.392 [INFO][4698] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38" HandleID="k8s-pod-network.ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38" Workload="localhost-k8s-coredns--674b8bbfcf--5mf4q-eth0" Dec 12 17:48:50.562820 containerd[1498]: 2025-12-12 17:48:50.392 [INFO][4698] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38" 
HandleID="k8s-pod-network.ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38" Workload="localhost-k8s-coredns--674b8bbfcf--5mf4q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400043a610), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-5mf4q", "timestamp":"2025-12-12 17:48:50.392130484 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:48:50.562820 containerd[1498]: 2025-12-12 17:48:50.392 [INFO][4698] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:48:50.562820 containerd[1498]: 2025-12-12 17:48:50.420 [INFO][4698] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:48:50.562820 containerd[1498]: 2025-12-12 17:48:50.420 [INFO][4698] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:48:50.562820 containerd[1498]: 2025-12-12 17:48:50.494 [INFO][4698] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38" host="localhost" Dec 12 17:48:50.562820 containerd[1498]: 2025-12-12 17:48:50.499 [INFO][4698] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:48:50.562820 containerd[1498]: 2025-12-12 17:48:50.506 [INFO][4698] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:48:50.562820 containerd[1498]: 2025-12-12 17:48:50.509 [INFO][4698] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:48:50.562820 containerd[1498]: 2025-12-12 17:48:50.512 [INFO][4698] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:48:50.562820 containerd[1498]: 2025-12-12 17:48:50.512 [INFO][4698] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38" host="localhost" Dec 12 17:48:50.562820 containerd[1498]: 2025-12-12 17:48:50.514 [INFO][4698] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38 Dec 12 17:48:50.562820 containerd[1498]: 2025-12-12 17:48:50.521 [INFO][4698] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38" host="localhost" Dec 12 17:48:50.562820 containerd[1498]: 2025-12-12 17:48:50.528 [INFO][4698] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38" host="localhost" Dec 12 17:48:50.562820 containerd[1498]: 2025-12-12 17:48:50.528 [INFO][4698] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38" host="localhost" Dec 12 17:48:50.562820 containerd[1498]: 2025-12-12 17:48:50.528 [INFO][4698] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:48:50.562820 containerd[1498]: 2025-12-12 17:48:50.529 [INFO][4698] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38" HandleID="k8s-pod-network.ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38" Workload="localhost-k8s-coredns--674b8bbfcf--5mf4q-eth0" Dec 12 17:48:50.564288 containerd[1498]: 2025-12-12 17:48:50.532 [INFO][4665] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38" Namespace="kube-system" Pod="coredns-674b8bbfcf-5mf4q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5mf4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--5mf4q-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e923402f-6f38-4464-85a3-ba4ab295e18c", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-5mf4q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad54e92b950", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:48:50.564288 containerd[1498]: 2025-12-12 17:48:50.532 [INFO][4665] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38" Namespace="kube-system" Pod="coredns-674b8bbfcf-5mf4q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5mf4q-eth0" Dec 12 17:48:50.564288 containerd[1498]: 2025-12-12 17:48:50.532 [INFO][4665] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad54e92b950 ContainerID="ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38" Namespace="kube-system" Pod="coredns-674b8bbfcf-5mf4q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5mf4q-eth0" Dec 12 17:48:50.564288 containerd[1498]: 2025-12-12 17:48:50.540 [INFO][4665] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38" Namespace="kube-system" Pod="coredns-674b8bbfcf-5mf4q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5mf4q-eth0" Dec 12 17:48:50.564288 
containerd[1498]: 2025-12-12 17:48:50.541 [INFO][4665] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38" Namespace="kube-system" Pod="coredns-674b8bbfcf-5mf4q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5mf4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--5mf4q-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e923402f-6f38-4464-85a3-ba4ab295e18c", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 48, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38", Pod:"coredns-674b8bbfcf-5mf4q", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad54e92b950", MAC:"1a:4c:10:2d:6e:98", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:48:50.564288 containerd[1498]: 2025-12-12 17:48:50.558 [INFO][4665] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38" Namespace="kube-system" Pod="coredns-674b8bbfcf-5mf4q" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5mf4q-eth0" Dec 12 17:48:50.577720 containerd[1498]: time="2025-12-12T17:48:50.577650320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c6mwg,Uid:1734759f-91db-4792-a5b1-49dc654f13fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd\"" Dec 12 17:48:50.590731 containerd[1498]: time="2025-12-12T17:48:50.590681587Z" level=info msg="CreateContainer within sandbox \"08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:48:50.599446 containerd[1498]: time="2025-12-12T17:48:50.599399138Z" level=info msg="connecting to shim ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38" address="unix:///run/containerd/s/a4659c3cc336e7c0142e14a0bad91d45ac1e5d5efa624855ec4cfe702a7356bf" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:48:50.605598 containerd[1498]: time="2025-12-12T17:48:50.605469988Z" level=info msg="Container 
28c9b2e62b84da985569149f310b8cb24ff822428ca9ac92aa9888b12560ba56: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:48:50.625267 containerd[1498]: time="2025-12-12T17:48:50.625226389Z" level=info msg="CreateContainer within sandbox \"08d61cb81f1a619c5d44b2bce6648af0fbd06598c9f9e531cada6c348cc66efd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"28c9b2e62b84da985569149f310b8cb24ff822428ca9ac92aa9888b12560ba56\"" Dec 12 17:48:50.626413 containerd[1498]: time="2025-12-12T17:48:50.626376278Z" level=info msg="StartContainer for \"28c9b2e62b84da985569149f310b8cb24ff822428ca9ac92aa9888b12560ba56\"" Dec 12 17:48:50.631061 containerd[1498]: time="2025-12-12T17:48:50.630341711Z" level=info msg="connecting to shim 28c9b2e62b84da985569149f310b8cb24ff822428ca9ac92aa9888b12560ba56" address="unix:///run/containerd/s/9706972ff3a08c9619e22b7714950ce3acdbcebffc79454d6528e141d791dd9b" protocol=ttrpc version=3 Dec 12 17:48:50.640941 systemd[1]: Started cri-containerd-ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38.scope - libcontainer container ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38. Dec 12 17:48:50.660893 systemd[1]: Started cri-containerd-28c9b2e62b84da985569149f310b8cb24ff822428ca9ac92aa9888b12560ba56.scope - libcontainer container 28c9b2e62b84da985569149f310b8cb24ff822428ca9ac92aa9888b12560ba56. Dec 12 17:48:50.686242 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:48:50.704431 containerd[1498]: time="2025-12-12T17:48:50.704393276Z" level=info msg="StartContainer for \"28c9b2e62b84da985569149f310b8cb24ff822428ca9ac92aa9888b12560ba56\" returns successfully" Dec 12 17:48:50.713967 containerd[1498]: time="2025-12-12T17:48:50.713913994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5mf4q,Uid:e923402f-6f38-4464-85a3-ba4ab295e18c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38\"" Dec 12 17:48:50.721639 containerd[1498]: time="2025-12-12T17:48:50.721180213Z" level=info msg="CreateContainer within sandbox \"ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:48:50.730331 containerd[1498]: time="2025-12-12T17:48:50.730296648Z" level=info msg="Container d01ab6eb6a2108c2a73d8e005b3790a20ba985ec9fa78a4b1dab69a879c89fe0: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:48:50.738594 containerd[1498]: time="2025-12-12T17:48:50.738550035Z" level=info msg="CreateContainer within sandbox \"ff7a05e8474fbe952cd56c84c53ac1ddcdc3722bec8b1de625da49b1a31a2a38\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d01ab6eb6a2108c2a73d8e005b3790a20ba985ec9fa78a4b1dab69a879c89fe0\"" Dec 12 17:48:50.740732 containerd[1498]: time="2025-12-12T17:48:50.739053999Z" level=info msg="StartContainer for \"d01ab6eb6a2108c2a73d8e005b3790a20ba985ec9fa78a4b1dab69a879c89fe0\"" Dec 12 17:48:50.742830 containerd[1498]: time="2025-12-12T17:48:50.742781830Z" level=info msg="connecting to shim d01ab6eb6a2108c2a73d8e005b3790a20ba985ec9fa78a4b1dab69a879c89fe0" address="unix:///run/containerd/s/a4659c3cc336e7c0142e14a0bad91d45ac1e5d5efa624855ec4cfe702a7356bf" protocol=ttrpc version=3 Dec 12 17:48:50.771252 systemd[1]: Started cri-containerd-d01ab6eb6a2108c2a73d8e005b3790a20ba985ec9fa78a4b1dab69a879c89fe0.scope - libcontainer container 
d01ab6eb6a2108c2a73d8e005b3790a20ba985ec9fa78a4b1dab69a879c89fe0. Dec 12 17:48:50.811578 containerd[1498]: time="2025-12-12T17:48:50.811535671Z" level=info msg="StartContainer for \"d01ab6eb6a2108c2a73d8e005b3790a20ba985ec9fa78a4b1dab69a879c89fe0\" returns successfully" Dec 12 17:48:51.426920 systemd-networkd[1409]: calie890af9a749: Gained IPv6LL Dec 12 17:48:51.462929 kubelet[2647]: E1212 17:48:51.462891 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nrs7m" podUID="9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe" Dec 12 17:48:51.473637 kubelet[2647]: I1212 17:48:51.473043 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5mf4q" podStartSLOduration=38.473019356 podStartE2EDuration="38.473019356s" podCreationTimestamp="2025-12-12 17:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:48:51.470555056 +0000 UTC m=+45.256137270" watchObservedRunningTime="2025-12-12 17:48:51.473019356 +0000 UTC m=+45.258601570" Dec 12 17:48:51.483215 kubelet[2647]: I1212 17:48:51.482418 2647 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-c6mwg" podStartSLOduration=38.482400991 podStartE2EDuration="38.482400991s" podCreationTimestamp="2025-12-12 17:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:48:51.481172301 +0000 UTC m=+45.266754515" watchObservedRunningTime="2025-12-12 17:48:51.482400991 +0000 UTC m=+45.267983205" Dec 12 17:48:51.559256 systemd[1]: Started sshd@8-10.0.0.131:22-10.0.0.1:39290.service - OpenSSH per-connection server daemon (10.0.0.1:39290). Dec 12 17:48:51.626904 sshd[4900]: Accepted publickey for core from 10.0.0.1 port 39290 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:48:51.628271 sshd-session[4900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:48:51.632679 systemd-logind[1476]: New session 9 of user core. Dec 12 17:48:51.639853 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 17:48:51.841148 sshd[4903]: Connection closed by 10.0.0.1 port 39290 Dec 12 17:48:51.841484 sshd-session[4900]: pam_unix(sshd:session): session closed for user core Dec 12 17:48:51.845020 systemd[1]: sshd@8-10.0.0.131:22-10.0.0.1:39290.service: Deactivated successfully. Dec 12 17:48:51.846976 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 17:48:51.848439 systemd-logind[1476]: Session 9 logged out. Waiting for processes to exit. Dec 12 17:48:51.849261 systemd-logind[1476]: Removed session 9. 
Dec 12 17:48:52.004017 systemd-networkd[1409]: caliad54e92b950: Gained IPv6LL Dec 12 17:48:52.132597 systemd-networkd[1409]: cali536cc3d851c: Gained IPv6LL Dec 12 17:48:56.310266 containerd[1498]: time="2025-12-12T17:48:56.310224544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:48:56.501632 containerd[1498]: time="2025-12-12T17:48:56.501593741Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:48:56.515201 containerd[1498]: time="2025-12-12T17:48:56.515144880Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:48:56.515306 containerd[1498]: time="2025-12-12T17:48:56.515224200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 17:48:56.515367 kubelet[2647]: E1212 17:48:56.515332 2647 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:48:56.515616 kubelet[2647]: E1212 17:48:56.515376 2647 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:48:56.515616 kubelet[2647]: E1212 17:48:56.515504 2647 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:e411dfaba746472f812751b5f961c7ad,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p2b4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775b66bf7b-w8t5k_calico-system(89fc1364-299e-46d0-8995-62a57f819657): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:48:56.518273 containerd[1498]: time="2025-12-12T17:48:56.518246742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:48:56.765952 containerd[1498]: time="2025-12-12T17:48:56.765905190Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:48:56.766801 containerd[1498]: time="2025-12-12T17:48:56.766767716Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:48:56.766889 containerd[1498]: time="2025-12-12T17:48:56.766836756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 17:48:56.767416 kubelet[2647]: E1212 17:48:56.766997 2647 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:48:56.767416 kubelet[2647]: E1212 17:48:56.767076 2647 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:48:56.767538 kubelet[2647]: E1212 17:48:56.767413 2647 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2b4h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-775b66bf7b-w8t5k_calico-system(89fc1364-299e-46d0-8995-62a57f819657): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:48:56.769239 kubelet[2647]: E1212 17:48:56.768644 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-775b66bf7b-w8t5k" podUID="89fc1364-299e-46d0-8995-62a57f819657" Dec 12 17:48:56.860606 systemd[1]: Started sshd@9-10.0.0.131:22-10.0.0.1:39304.service - OpenSSH per-connection server daemon (10.0.0.1:39304). Dec 12 17:48:56.925520 sshd[4932]: Accepted publickey for core from 10.0.0.1 port 39304 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:48:56.926805 sshd-session[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:48:56.930622 systemd-logind[1476]: New session 10 of user core. 
Dec 12 17:48:56.944966 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 17:48:57.089686 sshd[4936]: Connection closed by 10.0.0.1 port 39304 Dec 12 17:48:57.090122 sshd-session[4932]: pam_unix(sshd:session): session closed for user core Dec 12 17:48:57.101542 systemd[1]: sshd@9-10.0.0.131:22-10.0.0.1:39304.service: Deactivated successfully. Dec 12 17:48:57.103080 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 17:48:57.103834 systemd-logind[1476]: Session 10 logged out. Waiting for processes to exit. Dec 12 17:48:57.105702 systemd[1]: Started sshd@10-10.0.0.131:22-10.0.0.1:39308.service - OpenSSH per-connection server daemon (10.0.0.1:39308). Dec 12 17:48:57.106966 systemd-logind[1476]: Removed session 10. Dec 12 17:48:57.161918 sshd[4950]: Accepted publickey for core from 10.0.0.1 port 39308 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:48:57.163256 sshd-session[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:48:57.168782 systemd-logind[1476]: New session 11 of user core. Dec 12 17:48:57.181863 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 17:48:57.358677 sshd[4953]: Connection closed by 10.0.0.1 port 39308 Dec 12 17:48:57.359371 sshd-session[4950]: pam_unix(sshd:session): session closed for user core Dec 12 17:48:57.372671 systemd[1]: sshd@10-10.0.0.131:22-10.0.0.1:39308.service: Deactivated successfully. Dec 12 17:48:57.375700 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 17:48:57.377161 systemd-logind[1476]: Session 11 logged out. Waiting for processes to exit. Dec 12 17:48:57.381839 systemd[1]: Started sshd@11-10.0.0.131:22-10.0.0.1:39318.service - OpenSSH per-connection server daemon (10.0.0.1:39318). Dec 12 17:48:57.382895 systemd-logind[1476]: Removed session 11. Dec 12 17:48:57.443038 sshd[4964]: Accepted publickey for core from 10.0.0.1 port 39318 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:48:57.444205 sshd-session[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:48:57.448034 systemd-logind[1476]: New session 12 of user core. Dec 12 17:48:57.457874 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 17:48:57.588766 sshd[4967]: Connection closed by 10.0.0.1 port 39318 Dec 12 17:48:57.588748 sshd-session[4964]: pam_unix(sshd:session): session closed for user core Dec 12 17:48:57.592067 systemd[1]: sshd@11-10.0.0.131:22-10.0.0.1:39318.service: Deactivated successfully. Dec 12 17:48:57.593820 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 17:48:57.594475 systemd-logind[1476]: Session 12 logged out. Waiting for processes to exit. Dec 12 17:48:57.595376 systemd-logind[1476]: Removed session 12. 
Dec 12 17:48:59.309184 containerd[1498]: time="2025-12-12T17:48:59.309143911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:48:59.541736 containerd[1498]: time="2025-12-12T17:48:59.541682972Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:48:59.542802 containerd[1498]: time="2025-12-12T17:48:59.542657219Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:48:59.543073 kubelet[2647]: E1212 17:48:59.543027 2647 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:48:59.543423 kubelet[2647]: E1212 17:48:59.543084 2647 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:48:59.543423 kubelet[2647]: E1212 17:48:59.543203 2647 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2xb4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pjknm_calico-system(d3e06765-0219-4a57-abc4-db29c031f701): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:48:59.547385 containerd[1498]: time="2025-12-12T17:48:59.547277491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:48:59.549321 containerd[1498]: time="2025-12-12T17:48:59.548518460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 17:48:59.764597 containerd[1498]: time="2025-12-12T17:48:59.764526166Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:48:59.765802 containerd[1498]: time="2025-12-12T17:48:59.765703134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:48:59.765802 containerd[1498]: time="2025-12-12T17:48:59.765772975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 17:48:59.765971 kubelet[2647]: E1212 17:48:59.765926 2647 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:48:59.766015 kubelet[2647]: E1212 17:48:59.765982 2647 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:48:59.766132 kubelet[2647]: E1212 17:48:59.766091 2647 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2xb4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-pjknm_calico-system(d3e06765-0219-4a57-abc4-db29c031f701): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:48:59.767436 kubelet[2647]: E1212 17:48:59.767353 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pjknm" podUID="d3e06765-0219-4a57-abc4-db29c031f701" Dec 12 17:49:00.309856 containerd[1498]: time="2025-12-12T17:49:00.309764779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:49:00.501164 containerd[1498]: time="2025-12-12T17:49:00.501110735Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:49:00.502160 containerd[1498]: time="2025-12-12T17:49:00.502124742Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:49:00.502234 containerd[1498]: time="2025-12-12T17:49:00.502161622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 17:49:00.502491 kubelet[2647]: E1212 17:49:00.502452 2647 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:49:00.502539 kubelet[2647]: E1212 17:49:00.502505 2647 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:49:00.502761 kubelet[2647]: E1212 17:49:00.502669 2647 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8v9d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5dcc76bd57-jj9xs_calico-system(6b58ec45-43db-4087-8da6-607a024a3bf6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:49:00.504177 kubelet[2647]: E1212 17:49:00.504103 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc76bd57-jj9xs" podUID="6b58ec45-43db-4087-8da6-607a024a3bf6" Dec 12 17:49:02.308848 containerd[1498]: time="2025-12-12T17:49:02.308750505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:49:02.500565 containerd[1498]: time="2025-12-12T17:49:02.500505151Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:49:02.501549 containerd[1498]: time="2025-12-12T17:49:02.501491678Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:49:02.501601 containerd[1498]: time="2025-12-12T17:49:02.501552758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:49:02.501750 kubelet[2647]: E1212 17:49:02.501701 2647 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:49:02.502030 kubelet[2647]: E1212 17:49:02.501762 2647 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:49:02.502030 kubelet[2647]: 
E1212 17:49:02.501889 2647 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhh7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84f498ff75-4h6kh_calico-apiserver(1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:49:02.503589 kubelet[2647]: E1212 17:49:02.503518 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f498ff75-4h6kh" podUID="1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7" Dec 12 17:49:02.605478 systemd[1]: Started sshd@12-10.0.0.131:22-10.0.0.1:57202.service - OpenSSH per-connection server daemon (10.0.0.1:57202). Dec 12 17:49:02.658401 sshd[4982]: Accepted publickey for core from 10.0.0.1 port 57202 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:49:02.659839 sshd-session[4982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:49:02.663663 systemd-logind[1476]: New session 13 of user core. 
Dec 12 17:49:02.669922 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 17:49:02.798190 sshd[4985]: Connection closed by 10.0.0.1 port 57202 Dec 12 17:49:02.798551 sshd-session[4982]: pam_unix(sshd:session): session closed for user core Dec 12 17:49:02.808156 systemd[1]: sshd@12-10.0.0.131:22-10.0.0.1:57202.service: Deactivated successfully. Dec 12 17:49:02.810002 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 17:49:02.810666 systemd-logind[1476]: Session 13 logged out. Waiting for processes to exit. Dec 12 17:49:02.812338 systemd-logind[1476]: Removed session 13. Dec 12 17:49:02.813533 systemd[1]: Started sshd@13-10.0.0.131:22-10.0.0.1:57208.service - OpenSSH per-connection server daemon (10.0.0.1:57208). Dec 12 17:49:02.864904 sshd[4998]: Accepted publickey for core from 10.0.0.1 port 57208 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:49:02.866121 sshd-session[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:49:02.870528 systemd-logind[1476]: New session 14 of user core. Dec 12 17:49:02.880899 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 17:49:03.100413 sshd[5001]: Connection closed by 10.0.0.1 port 57208 Dec 12 17:49:03.100791 sshd-session[4998]: pam_unix(sshd:session): session closed for user core Dec 12 17:49:03.115168 systemd[1]: sshd@13-10.0.0.131:22-10.0.0.1:57208.service: Deactivated successfully. Dec 12 17:49:03.117183 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 17:49:03.118059 systemd-logind[1476]: Session 14 logged out. Waiting for processes to exit. Dec 12 17:49:03.121317 systemd[1]: Started sshd@14-10.0.0.131:22-10.0.0.1:57224.service - OpenSSH per-connection server daemon (10.0.0.1:57224). Dec 12 17:49:03.122687 systemd-logind[1476]: Removed session 14. Dec 12 17:49:03.206177 sshd[5012]: Accepted publickey for core from 10.0.0.1 port 57224 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:49:03.207836 sshd-session[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:49:03.212787 systemd-logind[1476]: New session 15 of user core. Dec 12 17:49:03.221956 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 17:49:03.868427 sshd[5015]: Connection closed by 10.0.0.1 port 57224 Dec 12 17:49:03.868886 sshd-session[5012]: pam_unix(sshd:session): session closed for user core Dec 12 17:49:03.879136 systemd[1]: sshd@14-10.0.0.131:22-10.0.0.1:57224.service: Deactivated successfully. Dec 12 17:49:03.884373 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 17:49:03.888155 systemd-logind[1476]: Session 15 logged out. Waiting for processes to exit. Dec 12 17:49:03.894264 systemd[1]: Started sshd@15-10.0.0.131:22-10.0.0.1:57228.service - OpenSSH per-connection server daemon (10.0.0.1:57228). Dec 12 17:49:03.896232 systemd-logind[1476]: Removed session 15. Dec 12 17:49:03.949828 sshd[5036]: Accepted publickey for core from 10.0.0.1 port 57228 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:49:03.951029 sshd-session[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:49:03.955175 systemd-logind[1476]: New session 16 of user core. Dec 12 17:49:03.961019 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 12 17:49:04.252610 sshd[5039]: Connection closed by 10.0.0.1 port 57228 Dec 12 17:49:04.253735 sshd-session[5036]: pam_unix(sshd:session): session closed for user core Dec 12 17:49:04.266730 systemd[1]: sshd@15-10.0.0.131:22-10.0.0.1:57228.service: Deactivated successfully. Dec 12 17:49:04.269257 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 17:49:04.271057 systemd-logind[1476]: Session 16 logged out. Waiting for processes to exit. Dec 12 17:49:04.276428 systemd[1]: Started sshd@16-10.0.0.131:22-10.0.0.1:57240.service - OpenSSH per-connection server daemon (10.0.0.1:57240). Dec 12 17:49:04.276950 systemd-logind[1476]: Removed session 16. Dec 12 17:49:04.340302 sshd[5050]: Accepted publickey for core from 10.0.0.1 port 57240 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:49:04.341593 sshd-session[5050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:49:04.345685 systemd-logind[1476]: New session 17 of user core. Dec 12 17:49:04.360892 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 17:49:04.514091 sshd[5054]: Connection closed by 10.0.0.1 port 57240 Dec 12 17:49:04.514264 sshd-session[5050]: pam_unix(sshd:session): session closed for user core Dec 12 17:49:04.518344 systemd[1]: sshd@16-10.0.0.131:22-10.0.0.1:57240.service: Deactivated successfully. Dec 12 17:49:04.520879 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 17:49:04.522607 systemd-logind[1476]: Session 17 logged out. Waiting for processes to exit. Dec 12 17:49:04.523780 systemd-logind[1476]: Removed session 17. Dec 12 17:49:06.309976 containerd[1498]: time="2025-12-12T17:49:06.309939755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:49:06.508607 containerd[1498]: time="2025-12-12T17:49:06.508423550Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:49:06.509457 containerd[1498]: time="2025-12-12T17:49:06.509384476Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:49:06.509457 containerd[1498]: time="2025-12-12T17:49:06.509431396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 17:49:06.509648 kubelet[2647]: E1212 17:49:06.509531 2647 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:49:06.509648 kubelet[2647]: E1212 17:49:06.509586 2647 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:49:06.514639 containerd[1498]: time="2025-12-12T17:49:06.514389908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:49:06.518773 kubelet[2647]: E1212 17:49:06.517830 2647 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qsvqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nrs7m_calico-system(9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:49:06.522043 kubelet[2647]: E1212 17:49:06.518973 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nrs7m" 
podUID="9cda2fbf-77ea-4fc5-a7c0-77df9ce96ffe" Dec 12 17:49:06.727448 containerd[1498]: time="2025-12-12T17:49:06.727250714Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:49:06.728591 containerd[1498]: time="2025-12-12T17:49:06.728360362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:49:06.728591 containerd[1498]: time="2025-12-12T17:49:06.728389442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:49:06.729168 kubelet[2647]: E1212 17:49:06.728973 2647 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:49:06.729168 kubelet[2647]: E1212 17:49:06.729023 2647 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:49:06.730466 kubelet[2647]: E1212 17:49:06.729454 2647 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5hxbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84f498ff75-dpfb7_calico-apiserver(bbff352b-faf1-4f78-9dff-cf0cfd8ed7b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:49:06.732573 kubelet[2647]: E1212 17:49:06.731841 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f498ff75-dpfb7" podUID="bbff352b-faf1-4f78-9dff-cf0cfd8ed7b8" Dec 12 17:49:08.320278 kubelet[2647]: E1212 17:49:08.317819 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-775b66bf7b-w8t5k" podUID="89fc1364-299e-46d0-8995-62a57f819657" Dec 12 17:49:09.526638 systemd[1]: Started sshd@17-10.0.0.131:22-10.0.0.1:57246.service - OpenSSH per-connection server daemon (10.0.0.1:57246). Dec 12 17:49:09.583199 sshd[5077]: Accepted publickey for core from 10.0.0.1 port 57246 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:49:09.584316 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:49:09.588744 systemd-logind[1476]: New session 18 of user core. Dec 12 17:49:09.594862 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 12 17:49:09.742179 sshd[5080]: Connection closed by 10.0.0.1 port 57246 Dec 12 17:49:09.742455 sshd-session[5077]: pam_unix(sshd:session): session closed for user core Dec 12 17:49:09.746076 systemd[1]: sshd@17-10.0.0.131:22-10.0.0.1:57246.service: Deactivated successfully. Dec 12 17:49:09.749497 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 17:49:09.750362 systemd-logind[1476]: Session 18 logged out. Waiting for processes to exit. Dec 12 17:49:09.751824 systemd-logind[1476]: Removed session 18. Dec 12 17:49:12.310326 kubelet[2647]: E1212 17:49:12.310231 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-pjknm" podUID="d3e06765-0219-4a57-abc4-db29c031f701" Dec 12 17:49:14.309116 kubelet[2647]: E1212 17:49:14.309030 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84f498ff75-4h6kh" podUID="1212be5b-3ca2-46ca-81c7-0e9f1f64b2a7" Dec 12 17:49:14.309650 kubelet[2647]: E1212 17:49:14.309218 2647 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcc76bd57-jj9xs" podUID="6b58ec45-43db-4087-8da6-607a024a3bf6" Dec 12 17:49:14.754900 systemd[1]: Started sshd@18-10.0.0.131:22-10.0.0.1:41314.service - OpenSSH per-connection server daemon (10.0.0.1:41314). Dec 12 17:49:14.808663 sshd[5098]: Accepted publickey for core from 10.0.0.1 port 41314 ssh2: RSA SHA256:5/FINZQ4aLTsuJA7LFfvFAt+QpeNcgzirVlbIqFa6T0 Dec 12 17:49:14.809876 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:49:14.813335 systemd-logind[1476]: New session 19 of user core. Dec 12 17:49:14.825866 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 12 17:49:14.953406 sshd[5101]: Connection closed by 10.0.0.1 port 41314 Dec 12 17:49:14.953718 sshd-session[5098]: pam_unix(sshd:session): session closed for user core Dec 12 17:49:14.956899 systemd[1]: sshd@18-10.0.0.131:22-10.0.0.1:41314.service: Deactivated successfully. Dec 12 17:49:14.958662 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 17:49:14.959350 systemd-logind[1476]: Session 19 logged out. Waiting for processes to exit. Dec 12 17:49:14.960641 systemd-logind[1476]: Removed session 19.