Dec 12 17:45:17.769316 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 12 17:45:17.769339 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 12 17:45:17.769349 kernel: KASLR enabled
Dec 12 17:45:17.769354 kernel: efi: EFI v2.7 by EDK II
Dec 12 17:45:17.769360 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Dec 12 17:45:17.769365 kernel: random: crng init done
Dec 12 17:45:17.769372 kernel: secureboot: Secure boot disabled
Dec 12 17:45:17.769377 kernel: ACPI: Early table checksum verification disabled
Dec 12 17:45:17.769383 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Dec 12 17:45:17.769390 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 12 17:45:17.769396 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:45:17.769402 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:45:17.769407 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:45:17.769413 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:45:17.769420 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:45:17.769427 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:45:17.769433 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:45:17.769439 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:45:17.769445 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:45:17.769451 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 12 17:45:17.769457 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 12 17:45:17.769463 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:45:17.769469 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Dec 12 17:45:17.769475 kernel: Zone ranges:
Dec 12 17:45:17.769481 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:45:17.769488 kernel: DMA32 empty
Dec 12 17:45:17.769494 kernel: Normal empty
Dec 12 17:45:17.769500 kernel: Device empty
Dec 12 17:45:17.769506 kernel: Movable zone start for each node
Dec 12 17:45:17.769512 kernel: Early memory node ranges
Dec 12 17:45:17.769518 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Dec 12 17:45:17.769524 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Dec 12 17:45:17.769530 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Dec 12 17:45:17.769536 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Dec 12 17:45:17.769541 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Dec 12 17:45:17.769547 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Dec 12 17:45:17.769553 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Dec 12 17:45:17.769560 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Dec 12 17:45:17.769566 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Dec 12 17:45:17.769572 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 12 17:45:17.769581 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 12 17:45:17.769587 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 12 17:45:17.769593 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 12 17:45:17.769601 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:45:17.769607 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 12 17:45:17.769614 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Dec 12 17:45:17.769620 kernel: psci: probing for conduit method from ACPI.
Dec 12 17:45:17.769626 kernel: psci: PSCIv1.1 detected in firmware.
Dec 12 17:45:17.769632 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 12 17:45:17.769639 kernel: psci: Trusted OS migration not required
Dec 12 17:45:17.769645 kernel: psci: SMC Calling Convention v1.1
Dec 12 17:45:17.769651 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 12 17:45:17.769658 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 12 17:45:17.769665 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 12 17:45:17.769672 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 12 17:45:17.769678 kernel: Detected PIPT I-cache on CPU0
Dec 12 17:45:17.769685 kernel: CPU features: detected: GIC system register CPU interface
Dec 12 17:45:17.769691 kernel: CPU features: detected: Spectre-v4
Dec 12 17:45:17.769697 kernel: CPU features: detected: Spectre-BHB
Dec 12 17:45:17.769703 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 12 17:45:17.769710 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 12 17:45:17.769716 kernel: CPU features: detected: ARM erratum 1418040
Dec 12 17:45:17.769722 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 12 17:45:17.769728 kernel: alternatives: applying boot alternatives
Dec 12 17:45:17.769735 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:45:17.769755 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 17:45:17.769764 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 17:45:17.769771 kernel: Fallback order for Node 0: 0
Dec 12 17:45:17.769777 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 12 17:45:17.769783 kernel: Policy zone: DMA
Dec 12 17:45:17.769790 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 17:45:17.769796 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 12 17:45:17.769802 kernel: software IO TLB: area num 4.
Dec 12 17:45:17.769809 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 12 17:45:17.769815 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Dec 12 17:45:17.769821 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 12 17:45:17.769829 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 17:45:17.769837 kernel: rcu: RCU event tracing is enabled.
Dec 12 17:45:17.769843 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 12 17:45:17.769849 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 17:45:17.769856 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 17:45:17.769862 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 17:45:17.769869 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 12 17:45:17.769875 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:45:17.769882 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:45:17.769888 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 12 17:45:17.769894 kernel: GICv3: 256 SPIs implemented
Dec 12 17:45:17.769902 kernel: GICv3: 0 Extended SPIs implemented
Dec 12 17:45:17.769908 kernel: Root IRQ handler: gic_handle_irq
Dec 12 17:45:17.769914 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 12 17:45:17.769920 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 12 17:45:17.769926 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 12 17:45:17.769933 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 12 17:45:17.769939 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Dec 12 17:45:17.769945 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Dec 12 17:45:17.769952 kernel: GICv3: using LPI property table @0x0000000040130000
Dec 12 17:45:17.769958 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Dec 12 17:45:17.769964 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 17:45:17.769971 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:45:17.769978 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 12 17:45:17.769985 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 12 17:45:17.769991 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 12 17:45:17.769998 kernel: arm-pv: using stolen time PV
Dec 12 17:45:17.770004 kernel: Console: colour dummy device 80x25
Dec 12 17:45:17.770011 kernel: ACPI: Core revision 20240827
Dec 12 17:45:17.770017 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 12 17:45:17.770024 kernel: pid_max: default: 32768 minimum: 301
Dec 12 17:45:17.770030 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 17:45:17.770037 kernel: landlock: Up and running.
Dec 12 17:45:17.770045 kernel: SELinux: Initializing.
Dec 12 17:45:17.770051 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:45:17.770058 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:45:17.770065 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 17:45:17.770071 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 17:45:17.770078 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 17:45:17.770084 kernel: Remapping and enabling EFI services.
Dec 12 17:45:17.770091 kernel: smp: Bringing up secondary CPUs ...
Dec 12 17:45:17.770097 kernel: Detected PIPT I-cache on CPU1
Dec 12 17:45:17.770111 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 12 17:45:17.770118 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Dec 12 17:45:17.770124 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:45:17.770133 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 12 17:45:17.770139 kernel: Detected PIPT I-cache on CPU2
Dec 12 17:45:17.770146 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 12 17:45:17.770153 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Dec 12 17:45:17.770160 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:45:17.770168 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 12 17:45:17.770175 kernel: Detected PIPT I-cache on CPU3
Dec 12 17:45:17.770182 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 12 17:45:17.770188 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Dec 12 17:45:17.770195 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:45:17.770202 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 12 17:45:17.770209 kernel: smp: Brought up 1 node, 4 CPUs
Dec 12 17:45:17.770216 kernel: SMP: Total of 4 processors activated.
Dec 12 17:45:17.770222 kernel: CPU: All CPU(s) started at EL1
Dec 12 17:45:17.770237 kernel: CPU features: detected: 32-bit EL0 Support
Dec 12 17:45:17.770245 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 12 17:45:17.770252 kernel: CPU features: detected: Common not Private translations
Dec 12 17:45:17.770258 kernel: CPU features: detected: CRC32 instructions
Dec 12 17:45:17.770266 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 12 17:45:17.770272 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 12 17:45:17.770279 kernel: CPU features: detected: LSE atomic instructions
Dec 12 17:45:17.770286 kernel: CPU features: detected: Privileged Access Never
Dec 12 17:45:17.770293 kernel: CPU features: detected: RAS Extension Support
Dec 12 17:45:17.770301 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 12 17:45:17.770308 kernel: alternatives: applying system-wide alternatives
Dec 12 17:45:17.770315 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 12 17:45:17.770322 kernel: Memory: 2423776K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 126176K reserved, 16384K cma-reserved)
Dec 12 17:45:17.770329 kernel: devtmpfs: initialized
Dec 12 17:45:17.770336 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 17:45:17.770343 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 12 17:45:17.770349 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 12 17:45:17.770356 kernel: 0 pages in range for non-PLT usage
Dec 12 17:45:17.770364 kernel: 508400 pages in range for PLT usage
Dec 12 17:45:17.770371 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 17:45:17.770378 kernel: SMBIOS 3.0.0 present.
Dec 12 17:45:17.770385 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 12 17:45:17.770392 kernel: DMI: Memory slots populated: 1/1
Dec 12 17:45:17.770398 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 17:45:17.770405 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 12 17:45:17.770412 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 12 17:45:17.770419 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 12 17:45:17.770427 kernel: audit: initializing netlink subsys (disabled)
Dec 12 17:45:17.770434 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Dec 12 17:45:17.770441 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 17:45:17.770448 kernel: cpuidle: using governor menu
Dec 12 17:45:17.770455 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 12 17:45:17.770462 kernel: ASID allocator initialised with 32768 entries
Dec 12 17:45:17.770468 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 17:45:17.770475 kernel: Serial: AMBA PL011 UART driver
Dec 12 17:45:17.770482 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 17:45:17.770490 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 17:45:17.770497 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 12 17:45:17.770504 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 12 17:45:17.770510 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 17:45:17.770517 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 17:45:17.770524 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 12 17:45:17.770531 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 12 17:45:17.770537 kernel: ACPI: Added _OSI(Module Device)
Dec 12 17:45:17.770544 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 17:45:17.770552 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 17:45:17.770559 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 17:45:17.770566 kernel: ACPI: Interpreter enabled
Dec 12 17:45:17.770572 kernel: ACPI: Using GIC for interrupt routing
Dec 12 17:45:17.770579 kernel: ACPI: MCFG table detected, 1 entries
Dec 12 17:45:17.770586 kernel: ACPI: CPU0 has been hot-added
Dec 12 17:45:17.770593 kernel: ACPI: CPU1 has been hot-added
Dec 12 17:45:17.770599 kernel: ACPI: CPU2 has been hot-added
Dec 12 17:45:17.770606 kernel: ACPI: CPU3 has been hot-added
Dec 12 17:45:17.770613 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 12 17:45:17.770621 kernel: printk: legacy console [ttyAMA0] enabled
Dec 12 17:45:17.770628 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 17:45:17.770776 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 17:45:17.770844 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 12 17:45:17.770902 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 12 17:45:17.770960 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 12 17:45:17.771016 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 12 17:45:17.771028 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 12 17:45:17.771035 kernel: PCI host bridge to bus 0000:00
Dec 12 17:45:17.771101 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 12 17:45:17.771154 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 12 17:45:17.771207 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 12 17:45:17.771269 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 17:45:17.771347 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 12 17:45:17.771419 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 12 17:45:17.771479 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Dec 12 17:45:17.771538 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Dec 12 17:45:17.771597 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 12 17:45:17.771655 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 12 17:45:17.771714 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Dec 12 17:45:17.771868 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Dec 12 17:45:17.771927 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 12 17:45:17.771978 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 12 17:45:17.772030 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 12 17:45:17.772040 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 12 17:45:17.772047 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 12 17:45:17.772054 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 12 17:45:17.772061 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 12 17:45:17.772071 kernel: iommu: Default domain type: Translated
Dec 12 17:45:17.772079 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 12 17:45:17.772085 kernel: efivars: Registered efivars operations
Dec 12 17:45:17.772092 kernel: vgaarb: loaded
Dec 12 17:45:17.772099 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 12 17:45:17.772106 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 17:45:17.772113 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 17:45:17.772120 kernel: pnp: PnP ACPI init
Dec 12 17:45:17.772305 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 12 17:45:17.772320 kernel: pnp: PnP ACPI: found 1 devices
Dec 12 17:45:17.772327 kernel: NET: Registered PF_INET protocol family
Dec 12 17:45:17.772334 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 17:45:17.772341 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 17:45:17.772348 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 17:45:17.772355 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 17:45:17.772362 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 17:45:17.772369 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 17:45:17.772377 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:45:17.772384 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:45:17.772392 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 17:45:17.772398 kernel: PCI: CLS 0 bytes, default 64
Dec 12 17:45:17.772405 kernel: kvm [1]: HYP mode not available
Dec 12 17:45:17.772412 kernel: Initialise system trusted keyrings
Dec 12 17:45:17.772419 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 17:45:17.772426 kernel: Key type asymmetric registered
Dec 12 17:45:17.772433 kernel: Asymmetric key parser 'x509' registered
Dec 12 17:45:17.772441 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 12 17:45:17.772448 kernel: io scheduler mq-deadline registered
Dec 12 17:45:17.772455 kernel: io scheduler kyber registered
Dec 12 17:45:17.772462 kernel: io scheduler bfq registered
Dec 12 17:45:17.772469 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 12 17:45:17.772475 kernel: ACPI: button: Power Button [PWRB]
Dec 12 17:45:17.772483 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 12 17:45:17.772544 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 12 17:45:17.772554 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 17:45:17.772563 kernel: thunder_xcv, ver 1.0
Dec 12 17:45:17.772570 kernel: thunder_bgx, ver 1.0
Dec 12 17:45:17.772577 kernel: nicpf, ver 1.0
Dec 12 17:45:17.772584 kernel: nicvf, ver 1.0
Dec 12 17:45:17.772648 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 12 17:45:17.772704 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:45:17 UTC (1765561517)
Dec 12 17:45:17.772713 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 12 17:45:17.772720 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 12 17:45:17.772729 kernel: watchdog: NMI not fully supported
Dec 12 17:45:17.772736 kernel: watchdog: Hard watchdog permanently disabled
Dec 12 17:45:17.772743 kernel: NET: Registered PF_INET6 protocol family
Dec 12 17:45:17.772761 kernel: Segment Routing with IPv6
Dec 12 17:45:17.772768 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 17:45:17.772775 kernel: NET: Registered PF_PACKET protocol family
Dec 12 17:45:17.772782 kernel: Key type dns_resolver registered
Dec 12 17:45:17.772789 kernel: registered taskstats version 1
Dec 12 17:45:17.772795 kernel: Loading compiled-in X.509 certificates
Dec 12 17:45:17.772803 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 12 17:45:17.772812 kernel: Demotion targets for Node 0: null
Dec 12 17:45:17.772819 kernel: Key type .fscrypt registered
Dec 12 17:45:17.772825 kernel: Key type fscrypt-provisioning registered
Dec 12 17:45:17.772832 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 17:45:17.772839 kernel: ima: Allocated hash algorithm: sha1
Dec 12 17:45:17.772846 kernel: ima: No architecture policies found
Dec 12 17:45:17.772853 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 12 17:45:17.772860 kernel: clk: Disabling unused clocks
Dec 12 17:45:17.772866 kernel: PM: genpd: Disabling unused power domains
Dec 12 17:45:17.772875 kernel: Warning: unable to open an initial console.
Dec 12 17:45:17.772882 kernel: Freeing unused kernel memory: 39552K
Dec 12 17:45:17.772889 kernel: Run /init as init process
Dec 12 17:45:17.772895 kernel: with arguments:
Dec 12 17:45:17.772902 kernel: /init
Dec 12 17:45:17.772909 kernel: with environment:
Dec 12 17:45:17.772916 kernel: HOME=/
Dec 12 17:45:17.772923 kernel: TERM=linux
Dec 12 17:45:17.772931 systemd[1]: Successfully made /usr/ read-only.
Dec 12 17:45:17.772943 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 17:45:17.772951 systemd[1]: Detected virtualization kvm.
Dec 12 17:45:17.772958 systemd[1]: Detected architecture arm64.
Dec 12 17:45:17.772965 systemd[1]: Running in initrd.
Dec 12 17:45:17.772972 systemd[1]: No hostname configured, using default hostname.
Dec 12 17:45:17.772980 systemd[1]: Hostname set to <localhost>.
Dec 12 17:45:17.772987 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 17:45:17.772996 systemd[1]: Queued start job for default target initrd.target.
Dec 12 17:45:17.773004 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:45:17.773011 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:45:17.773019 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 17:45:17.773027 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 17:45:17.773035 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 17:45:17.773043 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 17:45:17.773052 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 17:45:17.773060 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 17:45:17.773068 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:45:17.773075 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:45:17.773083 systemd[1]: Reached target paths.target - Path Units.
Dec 12 17:45:17.773091 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 17:45:17.773098 systemd[1]: Reached target swap.target - Swaps.
Dec 12 17:45:17.773106 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 17:45:17.773115 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 17:45:17.773123 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 17:45:17.773130 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 17:45:17.773138 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 17:45:17.773146 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:45:17.773154 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:45:17.773161 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:45:17.773169 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 17:45:17.773178 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 17:45:17.773186 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 17:45:17.773193 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 17:45:17.773201 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 17:45:17.773209 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 17:45:17.773216 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 17:45:17.773224 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 17:45:17.773239 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:45:17.773246 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 17:45:17.773257 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 17:45:17.773264 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 17:45:17.773272 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 17:45:17.773299 systemd-journald[245]: Collecting audit messages is disabled.
Dec 12 17:45:17.773320 systemd-journald[245]: Journal started
Dec 12 17:45:17.773338 systemd-journald[245]: Runtime Journal (/run/log/journal/860b4a00a3f745c681fcb2efc82d5a5e) is 6M, max 48.5M, 42.4M free.
Dec 12 17:45:17.764257 systemd-modules-load[246]: Inserted module 'overlay'
Dec 12 17:45:17.776413 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:45:17.778782 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 17:45:17.778805 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 17:45:17.780840 kernel: Bridge firewalling registered
Dec 12 17:45:17.780703 systemd-modules-load[246]: Inserted module 'br_netfilter'
Dec 12 17:45:17.782006 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 17:45:17.783358 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:45:17.787647 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 17:45:17.789436 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 17:45:17.791359 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 17:45:17.800346 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 17:45:17.806847 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 17:45:17.810423 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:45:17.813841 systemd-tmpfiles[273]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 17:45:17.815536 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 17:45:17.816986 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:45:17.820010 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 17:45:17.822287 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 17:45:17.843560 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:45:17.857737 systemd-resolved[291]: Positive Trust Anchors:
Dec 12 17:45:17.857772 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 17:45:17.857804 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 17:45:17.862612 systemd-resolved[291]: Defaulting to hostname 'linux'.
Dec 12 17:45:17.863592 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 17:45:17.866956 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 17:45:17.916758 kernel: SCSI subsystem initialized
Dec 12 17:45:17.920771 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 17:45:17.928791 kernel: iscsi: registered transport (tcp)
Dec 12 17:45:17.940788 kernel: iscsi: registered transport (qla4xxx)
Dec 12 17:45:17.940818 kernel: QLogic iSCSI HBA Driver
Dec 12 17:45:17.958195 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 17:45:17.981655 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:45:17.983846 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 17:45:18.030316 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 17:45:18.032640 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 17:45:18.094783 kernel: raid6: neonx8 gen() 15730 MB/s
Dec 12 17:45:18.111765 kernel: raid6: neonx4 gen() 15723 MB/s
Dec 12 17:45:18.128778 kernel: raid6: neonx2 gen() 13123 MB/s
Dec 12 17:45:18.145763 kernel: raid6: neonx1 gen() 10388 MB/s
Dec 12 17:45:18.162762 kernel: raid6: int64x8 gen() 6870 MB/s
Dec 12 17:45:18.179762 kernel: raid6: int64x4 gen() 7316 MB/s
Dec 12 17:45:18.196777 kernel: raid6: int64x2 gen() 6071 MB/s
Dec 12 17:45:18.213763 kernel: raid6: int64x1 gen() 5022 MB/s
Dec 12 17:45:18.213782 kernel: raid6: using algorithm neonx8 gen() 15730 MB/s
Dec 12 17:45:18.230776 kernel: raid6: .... xor() 11579 MB/s, rmw enabled
Dec 12 17:45:18.230797 kernel: raid6: using neon recovery algorithm
Dec 12 17:45:18.236038 kernel: xor: measuring software checksum speed
Dec 12 17:45:18.236057 kernel: 8regs : 21624 MB/sec
Dec 12 17:45:18.237220 kernel: 32regs : 21687 MB/sec
Dec 12 17:45:18.237239 kernel: arm64_neon : 28032 MB/sec
Dec 12 17:45:18.237248 kernel: xor: using function: arm64_neon (28032 MB/sec)
Dec 12 17:45:18.289776 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 17:45:18.295372 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 17:45:18.297792 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 17:45:18.322627 systemd-udevd[500]: Using default interface naming scheme 'v255'.
Dec 12 17:45:18.326670 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 17:45:18.328959 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 17:45:18.348487 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
Dec 12 17:45:18.372641 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 17:45:18.374733 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 17:45:18.426827 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 17:45:18.429345 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 17:45:18.484760 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 12 17:45:18.484941 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 12 17:45:18.488579 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 17:45:18.488660 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:45:18.500331 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 17:45:18.500354 kernel: GPT:9289727 != 19775487
Dec 12 17:45:18.500363 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 17:45:18.500379 kernel: GPT:9289727 != 19775487
Dec 12 17:45:18.500387 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 17:45:18.500395 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 17:45:18.500534 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:45:18.502368 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:45:18.528147 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 12 17:45:18.533917 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 17:45:18.535827 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:45:18.543467 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 12 17:45:18.544637 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 12 17:45:18.553441 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 12 17:45:18.560843 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 12 17:45:18.561954 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 17:45:18.563811 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 17:45:18.565661 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 17:45:18.568285 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 17:45:18.570054 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 17:45:18.584717 disk-uuid[590]: Primary Header is updated.
Dec 12 17:45:18.584717 disk-uuid[590]: Secondary Entries is updated.
Dec 12 17:45:18.584717 disk-uuid[590]: Secondary Header is updated.
Dec 12 17:45:18.588719 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 17:45:18.591216 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 17:45:19.598045 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 17:45:19.598104 disk-uuid[595]: The operation has completed successfully.
Dec 12 17:45:19.624221 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 17:45:19.624340 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 17:45:19.648375 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 12 17:45:19.674805 sh[610]: Success
Dec 12 17:45:19.686780 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 17:45:19.686827 kernel: device-mapper: uevent: version 1.0.3
Dec 12 17:45:19.688454 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 17:45:19.695785 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 12 17:45:19.723572 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 17:45:19.726601 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 12 17:45:19.742441 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 17:45:19.747765 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (623)
Dec 12 17:45:19.750091 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248
Dec 12 17:45:19.750122 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:45:19.754193 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 17:45:19.754239 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 17:45:19.755228 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 12 17:45:19.756555 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 17:45:19.757859 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 17:45:19.758622 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 17:45:19.760306 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 17:45:19.794764 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (652)
Dec 12 17:45:19.797135 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:45:19.797192 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:45:19.800026 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 17:45:19.800072 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 17:45:19.804764 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:45:19.806830 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 17:45:19.808710 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 17:45:19.874580 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 17:45:19.878802 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 17:45:19.916639 systemd-networkd[803]: lo: Link UP
Dec 12 17:45:19.916652 systemd-networkd[803]: lo: Gained carrier
Dec 12 17:45:19.917446 systemd-networkd[803]: Enumeration completed
Dec 12 17:45:19.917532 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 17:45:19.920462 ignition[703]: Ignition 2.22.0
Dec 12 17:45:19.917930 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:45:19.920468 ignition[703]: Stage: fetch-offline
Dec 12 17:45:19.917933 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 17:45:19.920501 ignition[703]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:45:19.918409 systemd-networkd[803]: eth0: Link UP
Dec 12 17:45:19.920508 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:45:19.918801 systemd-networkd[803]: eth0: Gained carrier
Dec 12 17:45:19.920588 ignition[703]: parsed url from cmdline: ""
Dec 12 17:45:19.918812 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:45:19.920591 ignition[703]: no config URL provided
Dec 12 17:45:19.919297 systemd[1]: Reached target network.target - Network.
Dec 12 17:45:19.920595 ignition[703]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 17:45:19.920601 ignition[703]: no config at "/usr/lib/ignition/user.ign"
Dec 12 17:45:19.920620 ignition[703]: op(1): [started] loading QEMU firmware config module
Dec 12 17:45:19.920626 ignition[703]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 12 17:45:19.926080 ignition[703]: op(1): [finished] loading QEMU firmware config module
Dec 12 17:45:19.939802 systemd-networkd[803]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 12 17:45:19.977234 ignition[703]: parsing config with SHA512: a7a0e4f4a6f6fd762937a28cb276d631e142e22f96d36f78dfb333c4fc5fd072423b8a6545010911b932077ccc6058dbce2ff8e94adc070171e8c166c00e1a4d
Dec 12 17:45:19.984252 unknown[703]: fetched base config from "system"
Dec 12 17:45:19.984268 unknown[703]: fetched user config from "qemu"
Dec 12 17:45:19.984873 ignition[703]: fetch-offline: fetch-offline passed
Dec 12 17:45:19.984964 ignition[703]: Ignition finished successfully
Dec 12 17:45:19.986803 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 17:45:19.988464 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 12 17:45:19.989279 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 17:45:20.028292 ignition[811]: Ignition 2.22.0
Dec 12 17:45:20.028306 ignition[811]: Stage: kargs
Dec 12 17:45:20.028456 ignition[811]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:45:20.028464 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:45:20.029259 ignition[811]: kargs: kargs passed
Dec 12 17:45:20.029303 ignition[811]: Ignition finished successfully
Dec 12 17:45:20.033818 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 17:45:20.035670 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 17:45:20.070946 ignition[819]: Ignition 2.22.0
Dec 12 17:45:20.070962 ignition[819]: Stage: disks
Dec 12 17:45:20.071096 ignition[819]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:45:20.071105 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:45:20.071851 ignition[819]: disks: disks passed
Dec 12 17:45:20.071900 ignition[819]: Ignition finished successfully
Dec 12 17:45:20.074821 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 17:45:20.076456 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 17:45:20.077559 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 17:45:20.079380 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 17:45:20.081067 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 17:45:20.082799 systemd[1]: Reached target basic.target - Basic System.
Dec 12 17:45:20.085234 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 17:45:20.107753 systemd-fsck[829]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 12 17:45:20.112475 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 17:45:20.114679 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 17:45:20.176776 kernel: EXT4-fs (vda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none.
Dec 12 17:45:20.177276 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 17:45:20.178428 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 17:45:20.181372 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 17:45:20.183495 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 17:45:20.184449 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 12 17:45:20.184485 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 17:45:20.184508 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 17:45:20.197071 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 17:45:20.200879 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 17:45:20.203626 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (837)
Dec 12 17:45:20.205634 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:45:20.205658 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:45:20.208274 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 17:45:20.208512 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 17:45:20.209369 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 17:45:20.233902 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory
Dec 12 17:45:20.238139 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory
Dec 12 17:45:20.241452 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory
Dec 12 17:45:20.245232 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 12 17:45:20.308587 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 12 17:45:20.310516 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 17:45:20.312037 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 17:45:20.326772 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:45:20.347892 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 17:45:20.362914 ignition[950]: INFO : Ignition 2.22.0
Dec 12 17:45:20.362914 ignition[950]: INFO : Stage: mount
Dec 12 17:45:20.364393 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:45:20.364393 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:45:20.364393 ignition[950]: INFO : mount: mount passed
Dec 12 17:45:20.364393 ignition[950]: INFO : Ignition finished successfully
Dec 12 17:45:20.365463 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 17:45:20.368770 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 17:45:20.755926 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 17:45:20.757409 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 17:45:20.776356 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (963)
Dec 12 17:45:20.776387 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:45:20.776397 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:45:20.779761 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 17:45:20.779783 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 17:45:20.780922 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 17:45:20.811974 ignition[980]: INFO : Ignition 2.22.0
Dec 12 17:45:20.811974 ignition[980]: INFO : Stage: files
Dec 12 17:45:20.813646 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:45:20.813646 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:45:20.813646 ignition[980]: DEBUG : files: compiled without relabeling support, skipping
Dec 12 17:45:20.813646 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 12 17:45:20.813646 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 12 17:45:20.819851 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 12 17:45:20.819851 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 12 17:45:20.819851 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 12 17:45:20.819851 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Dec 12 17:45:20.819851 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Dec 12 17:45:20.815667 unknown[980]: wrote ssh authorized keys file for user: core
Dec 12 17:45:20.864547 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 12 17:45:20.935681 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Dec 12 17:45:20.937768 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
"/sysroot/home/core/install.sh" Dec 12 17:45:20.937768 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 17:45:20.937768 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:45:20.937768 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:45:20.937768 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:45:20.937768 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:45:20.937768 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:45:20.937768 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:45:20.952361 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:45:20.952361 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:45:20.952361 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:45:20.952361 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:45:20.952361 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:45:20.952361 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Dec 12 17:45:21.287511 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 17:45:21.476932 systemd-networkd[803]: eth0: Gained IPv6LL Dec 12 17:45:21.479699 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Dec 12 17:45:21.479699 ignition[980]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 17:45:21.483212 ignition[980]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:45:21.483212 ignition[980]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:45:21.483212 ignition[980]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 17:45:21.483212 ignition[980]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 12 17:45:21.483212 ignition[980]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 17:45:21.483212 ignition[980]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Dec 12 17:45:21.483212 ignition[980]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 12 17:45:21.483212 ignition[980]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Dec 12 17:45:21.497360 ignition[980]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 17:45:21.500703 ignition[980]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 12 17:45:21.502829 ignition[980]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Dec 12 17:45:21.502829 ignition[980]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 12 17:45:21.502829 ignition[980]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 17:45:21.502829 ignition[980]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:45:21.502829 ignition[980]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:45:21.502829 ignition[980]: INFO : files: files passed Dec 12 17:45:21.502829 ignition[980]: INFO : Ignition finished successfully Dec 12 17:45:21.504938 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 17:45:21.507880 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 17:45:21.510778 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 17:45:21.523034 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 17:45:21.523989 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory Dec 12 17:45:21.524265 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 17:45:21.527983 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:45:21.527983 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:45:21.531116 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:45:21.531645 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:45:21.533724 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 17:45:21.536147 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 17:45:21.564050 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 17:45:21.564851 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 17:45:21.566238 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 17:45:21.568033 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 17:45:21.569759 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 17:45:21.570497 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 17:45:21.584168 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Dec 12 17:45:21.586432 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 17:45:21.606534 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:45:21.607797 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:45:21.609908 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 17:45:21.611671 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 17:45:21.611824 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:45:21.614372 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 17:45:21.616369 systemd[1]: Stopped target basic.target - Basic System. Dec 12 17:45:21.618007 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 17:45:21.619697 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:45:21.621730 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 17:45:21.623834 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:45:21.625842 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 17:45:21.627721 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:45:21.629659 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 17:45:21.631572 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 17:45:21.633249 systemd[1]: Stopped target swap.target - Swaps. Dec 12 17:45:21.634788 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 17:45:21.634911 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:45:21.637331 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:45:21.639248 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:45:21.641117 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 17:45:21.641210 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:45:21.643180 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 17:45:21.643302 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 17:45:21.646044 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 17:45:21.646168 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:45:21.648031 systemd[1]: Stopped target paths.target - Path Units. Dec 12 17:45:21.649622 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 17:45:21.652789 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:45:21.654583 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 17:45:21.656689 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 17:45:21.658304 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 17:45:21.658389 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:45:21.659909 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 17:45:21.659986 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:45:21.661519 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Dec 12 17:45:21.661631 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:45:21.663399 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 17:45:21.663502 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 17:45:21.665799 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 17:45:21.667645 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 17:45:21.667791 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:45:21.685303 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 17:45:21.686101 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 17:45:21.686225 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:45:21.687962 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 17:45:21.688073 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:45:21.693821 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 17:45:21.695792 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 17:45:21.700739 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 17:45:21.704315 ignition[1036]: INFO : Ignition 2.22.0 Dec 12 17:45:21.704315 ignition[1036]: INFO : Stage: umount Dec 12 17:45:21.707075 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:45:21.707075 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:45:21.707075 ignition[1036]: INFO : umount: umount passed Dec 12 17:45:21.707075 ignition[1036]: INFO : Ignition finished successfully Dec 12 17:45:21.708346 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 17:45:21.708452 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 17:45:21.710215 systemd[1]: Stopped target network.target - Network. Dec 12 17:45:21.711643 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 17:45:21.711702 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 17:45:21.713595 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 17:45:21.713639 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 17:45:21.715282 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 17:45:21.715328 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 17:45:21.716909 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 17:45:21.716947 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 17:45:21.718557 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 17:45:21.720161 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 17:45:21.726014 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 17:45:21.726121 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 17:45:21.728868 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 12 17:45:21.729118 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 17:45:21.729151 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:45:21.733519 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Dec 12 17:45:21.733725 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 17:45:21.733927 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 17:45:21.736978 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 17:45:21.738922 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 17:45:21.738960 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:45:21.741734 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 17:45:21.742573 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 17:45:21.742641 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:45:21.744706 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:45:21.744757 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:45:21.747668 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 17:45:21.747707 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 17:45:21.749810 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:45:21.756023 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 12 17:45:21.756072 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 12 17:45:21.767219 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 17:45:21.767327 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 17:45:21.769275 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 17:45:21.769319 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 17:45:21.771955 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 17:45:21.772066 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:45:21.774076 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 17:45:21.774153 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 17:45:21.776167 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 17:45:21.776241 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 17:45:21.777757 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 17:45:21.777788 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:45:21.779546 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 17:45:21.779591 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:45:21.782263 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 17:45:21.782312 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 17:45:21.785047 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 17:45:21.785093 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:45:21.787943 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 17:45:21.789047 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 17:45:21.789104 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:45:21.791925 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Dec 12 17:45:21.791965 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:45:21.794812 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:45:21.794851 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:45:21.799119 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 12 17:45:21.799164 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 12 17:45:21.799194 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 17:45:21.818051 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 17:45:21.818170 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 17:45:21.820316 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 17:45:21.822834 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 17:45:21.831665 systemd[1]: Switching root. Dec 12 17:45:21.869938 systemd-journald[245]: Journal stopped Dec 12 17:45:22.608426 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). Dec 12 17:45:22.608485 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 17:45:22.608501 kernel: SELinux: policy capability open_perms=1 Dec 12 17:45:22.608511 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 17:45:22.608524 kernel: SELinux: policy capability always_check_network=0 Dec 12 17:45:22.608535 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 17:45:22.608545 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 17:45:22.608554 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 17:45:22.608569 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 17:45:22.608582 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 17:45:22.608592 kernel: audit: type=1403 audit(1765561522.047:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 12 17:45:22.608603 systemd[1]: Successfully loaded SELinux policy in 64.188ms. Dec 12 17:45:22.608615 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.318ms. Dec 12 17:45:22.608626 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:45:22.608638 systemd[1]: Detected virtualization kvm. Dec 12 17:45:22.608649 systemd[1]: Detected architecture arm64. Dec 12 17:45:22.608660 systemd[1]: Detected first boot. Dec 12 17:45:22.608670 systemd[1]: Initializing machine ID from VM UUID. Dec 12 17:45:22.608680 zram_generator::config[1084]: No configuration found. Dec 12 17:45:22.608695 kernel: NET: Registered PF_VSOCK protocol family Dec 12 17:45:22.608705 systemd[1]: Populated /etc with preset unit settings. Dec 12 17:45:22.608716 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 12 17:45:22.608726 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 17:45:22.608736 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 17:45:22.608765 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Dec 12 17:45:22.608777 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 17:45:22.608788 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 17:45:22.608801 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 17:45:22.608812 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 17:45:22.608822 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 17:45:22.608834 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 17:45:22.608845 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 17:45:22.608855 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 17:45:22.608866 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:45:22.608881 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:45:22.608892 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 17:45:22.608904 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 17:45:22.608915 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 17:45:22.608926 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:45:22.608937 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 12 17:45:22.608947 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:45:22.608958 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:45:22.608969 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 17:45:22.608986 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 17:45:22.609005 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 17:45:22.609016 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 17:45:22.609026 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:45:22.609038 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:45:22.609048 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:45:22.609059 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:45:22.609069 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 17:45:22.609080 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 17:45:22.609103 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 17:45:22.609115 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:45:22.609127 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:45:22.609139 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:45:22.609150 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 17:45:22.609160 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 17:45:22.609171 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Dec 12 17:45:22.609181 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 17:45:22.609191 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 17:45:22.609210 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 17:45:22.609223 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 17:45:22.609235 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 17:45:22.609245 systemd[1]: Reached target machines.target - Containers. Dec 12 17:45:22.609256 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 17:45:22.609267 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:45:22.609278 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:45:22.609288 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 17:45:22.609300 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:45:22.609312 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:45:22.609322 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:45:22.609332 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:45:22.609342 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:45:22.609352 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 17:45:22.609363 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 17:45:22.609374 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 17:45:22.609383 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 17:45:22.609395 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 17:45:22.609405 kernel: ACPI: bus type drm_connector registered Dec 12 17:45:22.609415 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:45:22.609426 kernel: fuse: init (API version 7.41) Dec 12 17:45:22.609437 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:45:22.609447 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:45:22.609457 kernel: loop: module loaded Dec 12 17:45:22.609467 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:45:22.609478 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 17:45:22.609489 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 17:45:22.609524 systemd-journald[1166]: Collecting audit messages is disabled. Dec 12 17:45:22.609545 systemd-journald[1166]: Journal started Dec 12 17:45:22.609565 systemd-journald[1166]: Runtime Journal (/run/log/journal/860b4a00a3f745c681fcb2efc82d5a5e) is 6M, max 48.5M, 42.4M free. Dec 12 17:45:22.397607 systemd[1]: Queued start job for default target multi-user.target. 
Dec 12 17:45:22.421724 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 17:45:22.422100 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 17:45:22.614486 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:45:22.614545 systemd[1]: verity-setup.service: Deactivated successfully. Dec 12 17:45:22.616086 systemd[1]: Stopped verity-setup.service. Dec 12 17:45:22.620901 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:45:22.621562 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 17:45:22.622917 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 17:45:22.624108 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 17:45:22.625239 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 17:45:22.626425 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 17:45:22.627736 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 17:45:22.629046 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 17:45:22.630513 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:45:22.632087 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:45:22.632276 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 17:45:22.633797 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:45:22.633957 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:45:22.635388 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:45:22.635548 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:45:22.636926 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:45:22.637102 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:45:22.638767 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:45:22.638931 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:45:22.642068 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:45:22.642255 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:45:22.643865 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:45:22.645265 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:45:22.647009 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 17:45:22.648507 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 17:45:22.660267 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:45:22.662590 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 17:45:22.664669 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 17:45:22.665869 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 17:45:22.665905 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:45:22.667683 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Dec 12 17:45:22.674522 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 17:45:22.675881 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:45:22.676865 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 17:45:22.678736 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 17:45:22.680047 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:45:22.681890 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 17:45:22.683088 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:45:22.685924 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:45:22.688073 systemd-journald[1166]: Time spent on flushing to /var/log/journal/860b4a00a3f745c681fcb2efc82d5a5e is 14.479ms for 884 entries. Dec 12 17:45:22.688073 systemd-journald[1166]: System Journal (/var/log/journal/860b4a00a3f745c681fcb2efc82d5a5e) is 8M, max 195.6M, 187.6M free. Dec 12 17:45:22.706490 systemd-journald[1166]: Received client request to flush runtime journal. Dec 12 17:45:22.688952 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 17:45:22.701972 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 17:45:22.704823 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:45:22.709301 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 17:45:22.710806 kernel: loop0: detected capacity change from 0 to 100632 Dec 12 17:45:22.711628 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 17:45:22.713730 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 17:45:22.720784 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 17:45:22.728885 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 17:45:22.730279 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 17:45:22.731885 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 17:45:22.734277 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 17:45:22.739167 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:45:22.742765 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:45:22.752779 kernel: loop1: detected capacity change from 0 to 119840 Dec 12 17:45:22.765301 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 17:45:22.766598 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Dec 12 17:45:22.766609 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Dec 12 17:45:22.770250 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
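The journal sizes reported above (runtime journal capped at 48.5M, system journal at 195.6M) come from journald's filesystem-relative defaults. They can be pinned explicitly with a drop-in; the values below are examples, not what this host used:

```ini
# /etc/systemd/journald.conf.d/size.conf (illustrative)
[Journal]
RuntimeMaxUse=48M
SystemMaxUse=192M
```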
Dec 12 17:45:22.805786 kernel: loop2: detected capacity change from 0 to 207008 Dec 12 17:45:22.844785 kernel: loop3: detected capacity change from 0 to 100632 Dec 12 17:45:22.850780 kernel: loop4: detected capacity change from 0 to 119840 Dec 12 17:45:22.857769 kernel: loop5: detected capacity change from 0 to 207008 Dec 12 17:45:22.862795 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 12 17:45:22.863180 (sd-merge)[1226]: Merged extensions into '/usr'. Dec 12 17:45:22.866610 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 17:45:22.866720 systemd[1]: Reloading... Dec 12 17:45:22.943773 zram_generator::config[1255]: No configuration found. Dec 12 17:45:22.957348 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 17:45:23.073390 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 17:45:23.073893 systemd[1]: Reloading finished in 206 ms. Dec 12 17:45:23.104612 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 17:45:23.106210 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 17:45:23.120964 systemd[1]: Starting ensure-sysext.service... Dec 12 17:45:23.122733 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:45:23.128122 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 17:45:23.134000 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:45:23.135278 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)... Dec 12 17:45:23.135292 systemd[1]: Reloading... Dec 12 17:45:23.137022 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 17:45:23.137051 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 17:45:23.137310 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 17:45:23.137494 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 12 17:45:23.138116 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 12 17:45:23.138332 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Dec 12 17:45:23.138380 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Dec 12 17:45:23.141069 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:45:23.141083 systemd-tmpfiles[1287]: Skipping /boot Dec 12 17:45:23.146774 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:45:23.146789 systemd-tmpfiles[1287]: Skipping /boot Dec 12 17:45:23.167154 systemd-udevd[1290]: Using default interface naming scheme 'v255'. Dec 12 17:45:23.181797 zram_generator::config[1316]: No configuration found. Dec 12 17:45:23.360898 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 12 17:45:23.361047 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:45:23.362532 systemd[1]: Reloading finished in 226 ms. 
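The `(sd-merge)` lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images into /usr, which is what triggers the service-manager reload that follows. The merge state can be inspected with the standard tool; a brief illustrative session:

```sh
# List extension images found under /etc/extensions and /var/lib/extensions
systemd-sysext list
# Show which hierarchies currently have extensions merged
systemd-sysext status
# Re-apply after adding or removing a .raw image
systemd-sysext refresh
```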
Dec 12 17:45:23.378585 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:45:23.393920 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:45:23.412166 systemd[1]: Finished ensure-sysext.service. Dec 12 17:45:23.429350 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:45:23.431724 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 17:45:23.432903 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:45:23.444551 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:45:23.447893 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:45:23.450392 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:45:23.454029 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:45:23.455358 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:45:23.456287 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 17:45:23.457652 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:45:23.459259 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 17:45:23.464637 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:45:23.467663 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:45:23.470789 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 12 17:45:23.475581 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 17:45:23.478861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:45:23.482107 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:45:23.482331 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:45:23.485963 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:45:23.494777 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:45:23.496335 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 17:45:23.501798 augenrules[1433]: No rules Dec 12 17:45:23.500140 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:45:23.501812 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:45:23.503248 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:45:23.503437 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:45:23.505978 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:45:23.506165 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:45:23.508010 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 17:45:23.510778 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
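The `augenrules[1433]: No rules` line means the rule set assembled from /etc/audit/rules.d/ was empty, so audit-rules.service loaded nothing into the kernel. Assuming the usual audit userspace tools are present, this can be verified as sketched below:

```sh
# Test whether the rules compiled from /etc/audit/rules.d/*.rules differ
# from what is installed
augenrules --check
# List the rules currently loaded in the kernel (expected here: none)
auditctl -l
```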
Dec 12 17:45:23.520204 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:45:23.520366 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:45:23.522029 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 17:45:23.524602 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 17:45:23.525681 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 17:45:23.530962 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 17:45:23.532726 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:45:23.538094 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 17:45:23.560279 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 17:45:23.615070 systemd-networkd[1415]: lo: Link UP Dec 12 17:45:23.615081 systemd-networkd[1415]: lo: Gained carrier Dec 12 17:45:23.616288 systemd-networkd[1415]: Enumeration completed Dec 12 17:45:23.616461 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:45:23.616985 systemd-networkd[1415]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:45:23.617072 systemd-networkd[1415]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:45:23.617685 systemd-networkd[1415]: eth0: Link UP Dec 12 17:45:23.617942 systemd-networkd[1415]: eth0: Gained carrier Dec 12 17:45:23.618032 systemd-networkd[1415]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:45:23.620051 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 17:45:23.621656 systemd-resolved[1418]: Positive Trust Anchors: Dec 12 17:45:23.621666 systemd-resolved[1418]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:45:23.621696 systemd-resolved[1418]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:45:23.626932 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 17:45:23.628177 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 17:45:23.629344 systemd-resolved[1418]: Defaulting to hostname 'linux'. Dec 12 17:45:23.629493 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 17:45:23.630720 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:45:23.631909 systemd[1]: Reached target network.target - Network. 
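eth0 was matched by the stock `zz-default.network`, which is why the log warns about the "potentially unpredictable interface name": the unit matches broadly rather than by a stable name. Functionally it amounts to a catch-all DHCP policy along these lines (a sketch, not the verbatim Flatcar unit):

```ini
# /usr/lib/systemd/network/zz-default.network (illustrative equivalent)
[Match]
Name=*

[Network]
DHCP=yes
```

The DHCPv4 lease logged below (10.0.0.124/16 via 10.0.0.1) is the result of this policy on eth0.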
Dec 12 17:45:23.632706 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:45:23.633894 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:45:23.634923 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 17:45:23.635818 systemd-networkd[1415]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 17:45:23.636109 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 17:45:23.636298 systemd-timesyncd[1419]: Network configuration changed, trying to establish connection. Dec 12 17:45:23.637547 systemd-timesyncd[1419]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 12 17:45:23.637628 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 17:45:23.637734 systemd-timesyncd[1419]: Initial clock synchronization to Fri 2025-12-12 17:45:23.455727 UTC. Dec 12 17:45:23.638864 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 17:45:23.640044 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 17:45:23.641202 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 17:45:23.641235 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:45:23.642212 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:45:23.643911 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 17:45:23.646422 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 17:45:23.649002 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 17:45:23.650412 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 17:45:23.651906 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 17:45:23.655025 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 17:45:23.656392 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 17:45:23.658556 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 17:45:23.659943 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 17:45:23.661462 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:45:23.662442 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:45:23.663356 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:45:23.663390 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:45:23.664438 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 17:45:23.666430 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 17:45:23.668355 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 17:45:23.670559 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 17:45:23.672602 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Dec 12 17:45:23.673700 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 17:45:23.676891 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 17:45:23.677425 jq[1472]: false Dec 12 17:45:23.678867 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 17:45:23.680922 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 17:45:23.683878 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 12 17:45:23.688513 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 17:45:23.689294 extend-filesystems[1473]: Found /dev/vda6 Dec 12 17:45:23.690791 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 17:45:23.691276 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 17:45:23.692038 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 17:45:23.694235 extend-filesystems[1473]: Found /dev/vda9 Dec 12 17:45:23.697769 extend-filesystems[1473]: Checking size of /dev/vda9 Dec 12 17:45:23.697018 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 17:45:23.701010 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 17:45:23.704235 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 17:45:23.704465 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 17:45:23.704739 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 17:45:23.709344 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 17:45:23.712167 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 17:45:23.717849 jq[1490]: true Dec 12 17:45:23.712396 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 17:45:23.718926 extend-filesystems[1473]: Resized partition /dev/vda9 Dec 12 17:45:23.728344 extend-filesystems[1504]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 17:45:23.730741 (ntainerd)[1500]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 12 17:45:23.736240 update_engine[1488]: I20251212 17:45:23.734010 1488 main.cc:92] Flatcar Update Engine starting Dec 12 17:45:23.738786 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 12 17:45:23.739269 tar[1499]: linux-arm64/LICENSE Dec 12 17:45:23.739776 tar[1499]: linux-arm64/helm Dec 12 17:45:23.749669 jq[1501]: true Dec 12 17:45:23.772866 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 12 17:45:23.785182 extend-filesystems[1504]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 17:45:23.785182 extend-filesystems[1504]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 12 17:45:23.785182 extend-filesystems[1504]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 12 17:45:23.781557 dbus-daemon[1470]: [system] SELinux support is enabled Dec 12 17:45:23.781918 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Dec 12 17:45:23.803353 extend-filesystems[1473]: Resized filesystem in /dev/vda9 Dec 12 17:45:23.804935 update_engine[1488]: I20251212 17:45:23.785601 1488 update_check_scheduler.cc:74] Next update check in 5m32s Dec 12 17:45:23.784847 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (Power Button) Dec 12 17:45:23.785060 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 17:45:23.785083 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 17:45:23.785742 systemd-logind[1483]: New seat seat0. Dec 12 17:45:23.787562 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 17:45:23.787581 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 17:45:23.793289 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 17:45:23.794473 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 17:45:23.795683 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 17:45:23.797643 systemd[1]: Started update-engine.service - Update Engine. Dec 12 17:45:23.801412 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 17:45:23.822779 bash[1533]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:45:23.827617 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 17:45:23.831371 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
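extend-filesystems.service grew the root filesystem on /dev/vda9 on-line from 553472 to 1864699 4k blocks, i.e. roughly 2.1G to 7.1G, using resize2fs against the mounted filesystem. The manual equivalent, assuming the underlying partition has already been enlarged, is a single command:

```sh
# Grow a mounted ext4 filesystem to fill its (already-enlarged) partition
resize2fs /dev/vda9
```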
Dec 12 17:45:23.847466 locksmithd[1529]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 17:45:23.917143 containerd[1500]: time="2025-12-12T17:45:23Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 17:45:23.918089 containerd[1500]: time="2025-12-12T17:45:23.918036680Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 17:45:23.926859 containerd[1500]: time="2025-12-12T17:45:23.926823400Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.32µs" Dec 12 17:45:23.926859 containerd[1500]: time="2025-12-12T17:45:23.926855000Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 17:45:23.926930 containerd[1500]: time="2025-12-12T17:45:23.926874480Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 17:45:23.927038 containerd[1500]: time="2025-12-12T17:45:23.927017600Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 17:45:23.927063 containerd[1500]: time="2025-12-12T17:45:23.927037360Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 17:45:23.927080 containerd[1500]: time="2025-12-12T17:45:23.927063920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:45:23.927137 containerd[1500]: time="2025-12-12T17:45:23.927120040Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:45:23.927158 containerd[1500]: time="2025-12-12T17:45:23.927135440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:45:23.927386 containerd[1500]: time="2025-12-12T17:45:23.927347040Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:45:23.927386 containerd[1500]: time="2025-12-12T17:45:23.927369840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:45:23.927386 containerd[1500]: time="2025-12-12T17:45:23.927381920Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:45:23.927458 containerd[1500]: time="2025-12-12T17:45:23.927389920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 17:45:23.927478 containerd[1500]: time="2025-12-12T17:45:23.927460880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 17:45:23.927653 containerd[1500]: time="2025-12-12T17:45:23.927633600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:45:23.927680 containerd[1500]: time="2025-12-12T17:45:23.927664720Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:45:23.927711 containerd[1500]: time="2025-12-12T17:45:23.927679280Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 17:45:23.927711 containerd[1500]: time="2025-12-12T17:45:23.927707240Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 17:45:23.927963 containerd[1500]: time="2025-12-12T17:45:23.927946400Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 17:45:23.928028 containerd[1500]: time="2025-12-12T17:45:23.928013040Z" level=info msg="metadata content store policy set" policy=shared Dec 12 17:45:23.931355 containerd[1500]: time="2025-12-12T17:45:23.931313440Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 17:45:23.931390 containerd[1500]: time="2025-12-12T17:45:23.931368960Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 17:45:23.931390 containerd[1500]: time="2025-12-12T17:45:23.931383680Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 17:45:23.931448 containerd[1500]: time="2025-12-12T17:45:23.931394840Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 17:45:23.931448 containerd[1500]: time="2025-12-12T17:45:23.931407120Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 17:45:23.931448 containerd[1500]: time="2025-12-12T17:45:23.931425120Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 17:45:23.931448 containerd[1500]: time="2025-12-12T17:45:23.931442360Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 17:45:23.931522 containerd[1500]: time="2025-12-12T17:45:23.931456800Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 17:45:23.931522 containerd[1500]: time="2025-12-12T17:45:23.931467320Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 17:45:23.931522 containerd[1500]: time="2025-12-12T17:45:23.931477800Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 17:45:23.931522 containerd[1500]: time="2025-12-12T17:45:23.931487760Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 17:45:23.931522 containerd[1500]: time="2025-12-12T17:45:23.931499200Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 17:45:23.931637 containerd[1500]: time="2025-12-12T17:45:23.931615640Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 17:45:23.931662 containerd[1500]: time="2025-12-12T17:45:23.931640760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 17:45:23.931662 containerd[1500]: time="2025-12-12T17:45:23.931655240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 
17:45:23.931693 containerd[1500]: time="2025-12-12T17:45:23.931667280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 17:45:23.931693 containerd[1500]: time="2025-12-12T17:45:23.931678280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 17:45:23.931693 containerd[1500]: time="2025-12-12T17:45:23.931688360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 17:45:23.931759 containerd[1500]: time="2025-12-12T17:45:23.931698640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 17:45:23.931759 containerd[1500]: time="2025-12-12T17:45:23.931708720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 17:45:23.931759 containerd[1500]: time="2025-12-12T17:45:23.931719640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 17:45:23.931759 containerd[1500]: time="2025-12-12T17:45:23.931729960Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 17:45:23.931834 containerd[1500]: time="2025-12-12T17:45:23.931739760Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 17:45:23.931969 containerd[1500]: time="2025-12-12T17:45:23.931934320Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 17:45:23.931969 containerd[1500]: time="2025-12-12T17:45:23.931957320Z" level=info msg="Start snapshots syncer" Dec 12 17:45:23.932020 containerd[1500]: time="2025-12-12T17:45:23.931991880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 17:45:23.932795 containerd[1500]: time="2025-12-12T17:45:23.932465040Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 17:45:23.932795 containerd[1500]: time="2025-12-12T17:45:23.932538640Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 17:45:23.932943 containerd[1500]: time="2025-12-12T17:45:23.932612560Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 17:45:23.933071 containerd[1500]: time="2025-12-12T17:45:23.933042200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 17:45:23.933148 containerd[1500]: time="2025-12-12T17:45:23.933134960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 17:45:23.933221 containerd[1500]: time="2025-12-12T17:45:23.933205920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 17:45:23.933278 containerd[1500]: time="2025-12-12T17:45:23.933264920Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 17:45:23.933336 containerd[1500]: time="2025-12-12T17:45:23.933320600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 17:45:23.933402 containerd[1500]: time="2025-12-12T17:45:23.933387160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 17:45:23.933457 containerd[1500]: time="2025-12-12T17:45:23.933444920Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 17:45:23.933541 containerd[1500]: time="2025-12-12T17:45:23.933524160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 17:45:23.933597 containerd[1500]: 
time="2025-12-12T17:45:23.933584720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 17:45:23.933652 containerd[1500]: time="2025-12-12T17:45:23.933640280Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:45:23.933765 containerd[1500]: time="2025-12-12T17:45:23.933736600Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:45:23.934176 containerd[1500]: time="2025-12-12T17:45:23.933846640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:45:23.934176 containerd[1500]: time="2025-12-12T17:45:23.933880400Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:45:23.934176 containerd[1500]: time="2025-12-12T17:45:23.933893480Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:45:23.934176 containerd[1500]: time="2025-12-12T17:45:23.933906320Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:45:23.934176 containerd[1500]: time="2025-12-12T17:45:23.933924720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:45:23.934176 containerd[1500]: time="2025-12-12T17:45:23.933941240Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:45:23.934942 containerd[1500]: time="2025-12-12T17:45:23.934820640Z" level=info msg="runtime interface created" Dec 12 17:45:23.934942 containerd[1500]: time="2025-12-12T17:45:23.934941200Z" level=info msg="created NRI interface" Dec 12 17:45:23.935004 containerd[1500]: time="2025-12-12T17:45:23.934954120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 17:45:23.935004 containerd[1500]: time="2025-12-12T17:45:23.934969280Z" level=info msg="Connect containerd service" Dec 12 17:45:23.935004 containerd[1500]: time="2025-12-12T17:45:23.934997560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:45:23.935679 containerd[1500]: time="2025-12-12T17:45:23.935652120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:45:24.003231 containerd[1500]: time="2025-12-12T17:45:24.003141371Z" level=info msg="Start subscribing containerd event" Dec 12 17:45:24.003231 containerd[1500]: time="2025-12-12T17:45:24.003231632Z" level=info msg="Start recovering state" Dec 12 17:45:24.003363 containerd[1500]: time="2025-12-12T17:45:24.003315170Z" level=info msg="Start event monitor" Dec 12 17:45:24.003363 containerd[1500]: time="2025-12-12T17:45:24.003329126Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:45:24.003363 containerd[1500]: time="2025-12-12T17:45:24.003336710Z" level=info msg="Start streaming server" Dec 12 17:45:24.003363 containerd[1500]: time="2025-12-12T17:45:24.003345583Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:45:24.003363 containerd[1500]: 
time="2025-12-12T17:45:24.003351916Z" level=info msg="runtime interface starting up..." Dec 12 17:45:24.003363 containerd[1500]: time="2025-12-12T17:45:24.003356920Z" level=info msg="starting plugins..." Dec 12 17:45:24.003458 containerd[1500]: time="2025-12-12T17:45:24.003368882Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:45:24.003535 containerd[1500]: time="2025-12-12T17:45:24.003506913Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 17:45:24.003588 containerd[1500]: time="2025-12-12T17:45:24.003575518Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 17:45:24.005924 containerd[1500]: time="2025-12-12T17:45:24.005813652Z" level=info msg="containerd successfully booted in 0.089090s" Dec 12 17:45:24.005910 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 17:45:24.052000 tar[1499]: linux-arm64/README.md Dec 12 17:45:24.067741 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 17:45:24.618481 sshd_keygen[1498]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:45:24.636610 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:45:24.640400 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:45:24.659855 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 17:45:24.660815 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:45:24.663212 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 17:45:24.690522 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:45:24.695203 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 17:45:24.697150 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 12 17:45:24.698448 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 17:45:25.122862 systemd-networkd[1415]: eth0: Gained IPv6LL Dec 12 17:45:25.126817 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 17:45:25.128405 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:45:25.132332 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 12 17:45:25.149250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:45:25.151344 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:45:25.166025 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 12 17:45:25.166244 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 12 17:45:25.167704 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 17:45:25.171128 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 17:45:25.683998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:45:25.685411 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 17:45:25.687625 systemd[1]: Startup finished in 2.061s (kernel) + 4.431s (initrd) + 3.704s (userspace) = 10.196s. 
Dec 12 17:45:25.689066 (kubelet)[1605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:45:26.034375 kubelet[1605]: E1212 17:45:26.034252 1605 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:45:26.036646 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:45:26.036796 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:45:26.037086 systemd[1]: kubelet.service: Consumed 740ms CPU time, 257.6M memory peak. Dec 12 17:45:30.771037 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:45:30.772046 systemd[1]: Started sshd@0-10.0.0.124:22-10.0.0.1:38414.service - OpenSSH per-connection server daemon (10.0.0.1:38414). Dec 12 17:45:30.837084 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 38414 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:45:30.838854 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:45:30.844889 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 17:45:30.845823 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 17:45:30.850982 systemd-logind[1483]: New session 1 of user core. Dec 12 17:45:30.860958 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:45:30.863240 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 17:45:30.884715 (systemd)[1623]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:45:30.887024 systemd-logind[1483]: New session c1 of user core. Dec 12 17:45:30.991813 systemd[1623]: Queued start job for default target default.target. Dec 12 17:45:31.010728 systemd[1623]: Created slice app.slice - User Application Slice. Dec 12 17:45:31.010795 systemd[1623]: Reached target paths.target - Paths. Dec 12 17:45:31.010842 systemd[1623]: Reached target timers.target - Timers. Dec 12 17:45:31.012058 systemd[1623]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:45:31.021854 systemd[1623]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 17:45:31.021922 systemd[1623]: Reached target sockets.target - Sockets. Dec 12 17:45:31.021958 systemd[1623]: Reached target basic.target - Basic System. Dec 12 17:45:31.021984 systemd[1623]: Reached target default.target - Main User Target. Dec 12 17:45:31.022007 systemd[1623]: Startup finished in 130ms. Dec 12 17:45:31.022197 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 17:45:31.023891 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 17:45:31.083987 systemd[1]: Started sshd@1-10.0.0.124:22-10.0.0.1:54008.service - OpenSSH per-connection server daemon (10.0.0.1:54008). Dec 12 17:45:31.126393 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 54008 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:45:31.127625 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:45:31.131407 systemd-logind[1483]: New session 2 of user core. 
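The kubelet failure above — /var/lib/kubelet/config.yaml missing — is the same pre-bootstrap state: kubeadm only generates that file during init/join, so systemd keeps restarting kubelet until it exists. A rough sketch of the kind of KubeletConfiguration kubeadm writes there (every value below is an assumption, not read from this host):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# matches SystemdCgroup=true in the containerd runc options logged above
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10      # assumed in-cluster DNS service IP
```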
Dec 12 17:45:31.144949 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 17:45:31.195432 sshd[1637]: Connection closed by 10.0.0.1 port 54008 Dec 12 17:45:31.195764 sshd-session[1634]: pam_unix(sshd:session): session closed for user core Dec 12 17:45:31.205722 systemd[1]: sshd@1-10.0.0.124:22-10.0.0.1:54008.service: Deactivated successfully. Dec 12 17:45:31.207148 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 17:45:31.209312 systemd-logind[1483]: Session 2 logged out. Waiting for processes to exit. Dec 12 17:45:31.211574 systemd[1]: Started sshd@2-10.0.0.124:22-10.0.0.1:54020.service - OpenSSH per-connection server daemon (10.0.0.1:54020). Dec 12 17:45:31.212209 systemd-logind[1483]: Removed session 2. Dec 12 17:45:31.269954 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 54020 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:45:31.271192 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:45:31.275570 systemd-logind[1483]: New session 3 of user core. Dec 12 17:45:31.283896 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 17:45:31.331938 sshd[1646]: Connection closed by 10.0.0.1 port 54020 Dec 12 17:45:31.331794 sshd-session[1643]: pam_unix(sshd:session): session closed for user core Dec 12 17:45:31.345614 systemd[1]: sshd@2-10.0.0.124:22-10.0.0.1:54020.service: Deactivated successfully. Dec 12 17:45:31.349199 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 17:45:31.349965 systemd-logind[1483]: Session 3 logged out. Waiting for processes to exit. Dec 12 17:45:31.351961 systemd[1]: Started sshd@3-10.0.0.124:22-10.0.0.1:54028.service - OpenSSH per-connection server daemon (10.0.0.1:54028). Dec 12 17:45:31.352576 systemd-logind[1483]: Removed session 3. Dec 12 17:45:31.412609 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 54028 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:45:31.413714 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:45:31.418133 systemd-logind[1483]: New session 4 of user core. Dec 12 17:45:31.432892 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 17:45:31.484463 sshd[1655]: Connection closed by 10.0.0.1 port 54028 Dec 12 17:45:31.484892 sshd-session[1652]: pam_unix(sshd:session): session closed for user core Dec 12 17:45:31.491396 systemd[1]: sshd@3-10.0.0.124:22-10.0.0.1:54028.service: Deactivated successfully. Dec 12 17:45:31.493937 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 17:45:31.494680 systemd-logind[1483]: Session 4 logged out. Waiting for processes to exit. Dec 12 17:45:31.497941 systemd[1]: Started sshd@4-10.0.0.124:22-10.0.0.1:54032.service - OpenSSH per-connection server daemon (10.0.0.1:54032). Dec 12 17:45:31.498417 systemd-logind[1483]: Removed session 4. Dec 12 17:45:31.550305 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 54032 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:45:31.551276 sshd-session[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:45:31.554946 systemd-logind[1483]: New session 5 of user core. Dec 12 17:45:31.566903 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 12 17:45:31.621534 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 17:45:31.621823 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:45:31.635615 sudo[1666]: pam_unix(sudo:session): session closed for user root Dec 12 17:45:31.637501 sshd[1665]: Connection closed by 10.0.0.1 port 54032 Dec 12 17:45:31.637405 sshd-session[1661]: pam_unix(sshd:session): session closed for user core Dec 12 17:45:31.647510 systemd[1]: sshd@4-10.0.0.124:22-10.0.0.1:54032.service: Deactivated successfully. Dec 12 17:45:31.650134 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 17:45:31.651901 systemd-logind[1483]: Session 5 logged out. Waiting for processes to exit. Dec 12 17:45:31.653455 systemd-logind[1483]: Removed session 5. Dec 12 17:45:31.655266 systemd[1]: Started sshd@5-10.0.0.124:22-10.0.0.1:54042.service - OpenSSH per-connection server daemon (10.0.0.1:54042). Dec 12 17:45:31.710190 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 54042 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:45:31.711296 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:45:31.714973 systemd-logind[1483]: New session 6 of user core. Dec 12 17:45:31.724900 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 17:45:31.776396 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 17:45:31.776643 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:45:31.848976 sudo[1677]: pam_unix(sudo:session): session closed for user root Dec 12 17:45:31.853780 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 17:45:31.854031 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:45:31.862050 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:45:31.901116 augenrules[1699]: No rules Dec 12 17:45:31.902200 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:45:31.903807 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:45:31.904949 sudo[1676]: pam_unix(sudo:session): session closed for user root Dec 12 17:45:31.906024 sshd[1675]: Connection closed by 10.0.0.1 port 54042 Dec 12 17:45:31.906370 sshd-session[1672]: pam_unix(sshd:session): session closed for user core Dec 12 17:45:31.925520 systemd[1]: sshd@5-10.0.0.124:22-10.0.0.1:54042.service: Deactivated successfully. Dec 12 17:45:31.927580 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 17:45:31.928251 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit. Dec 12 17:45:31.930439 systemd[1]: Started sshd@6-10.0.0.124:22-10.0.0.1:54044.service - OpenSSH per-connection server daemon (10.0.0.1:54044). Dec 12 17:45:31.930914 systemd-logind[1483]: Removed session 6. Dec 12 17:45:31.989832 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 54044 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:45:31.990929 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:45:31.995360 systemd-logind[1483]: New session 7 of user core. Dec 12 17:45:32.004889 systemd[1]: Started session-7.scope - Session 7 of User core. 
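Taken together, the sudo entries in sessions 5 and 6 read like a provisioning script driving the node over SSH (session 7, below, continues with /home/core/install.sh). Reconstructed as plain shell, with the commands and ordering taken directly from the log:

```sh
# session 5: switch SELinux to enforcing
sudo /usr/sbin/setenforce 1

# session 6: remove the default audit rule files, then reload the rule set
sudo /usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
sudo /usr/sbin/systemctl restart audit-rules   # augenrules then logs "No rules"
```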
Dec 12 17:45:32.057633 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 17:45:32.057919 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:45:32.323003 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 17:45:32.336120 (dockerd)[1733]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 17:45:32.543363 dockerd[1733]: time="2025-12-12T17:45:32.543295658Z" level=info msg="Starting up" Dec 12 17:45:32.544389 dockerd[1733]: time="2025-12-12T17:45:32.544363774Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 17:45:32.554341 dockerd[1733]: time="2025-12-12T17:45:32.554312306Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 17:45:32.586683 dockerd[1733]: time="2025-12-12T17:45:32.586583694Z" level=info msg="Loading containers: start." Dec 12 17:45:32.596762 kernel: Initializing XFRM netlink socket Dec 12 17:45:32.787033 systemd-networkd[1415]: docker0: Link UP Dec 12 17:45:32.790273 dockerd[1733]: time="2025-12-12T17:45:32.790225765Z" level=info msg="Loading containers: done." Dec 12 17:45:32.801925 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3612400260-merged.mount: Deactivated successfully. Dec 12 17:45:32.804607 dockerd[1733]: time="2025-12-12T17:45:32.804559714Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 17:45:32.804675 dockerd[1733]: time="2025-12-12T17:45:32.804634009Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 17:45:32.804729 dockerd[1733]: time="2025-12-12T17:45:32.804711202Z" level=info msg="Initializing buildkit" Dec 12 17:45:32.825146 dockerd[1733]: time="2025-12-12T17:45:32.825112577Z" level=info msg="Completed buildkit initialization" Dec 12 17:45:32.832176 dockerd[1733]: time="2025-12-12T17:45:32.832134413Z" level=info msg="Daemon has completed initialization" Dec 12 17:45:32.832368 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 17:45:32.832498 dockerd[1733]: time="2025-12-12T17:45:32.832289076Z" level=info msg="API listen on /run/docker.sock" Dec 12 17:45:33.346148 containerd[1500]: time="2025-12-12T17:45:33.346097602Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 12 17:45:34.031490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3474674812.mount: Deactivated successfully. 
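The PullImage lines that follow are the v1.32.10 control-plane images being fetched ahead of cluster bootstrap. The same pre-pull can be reproduced by hand over the CRI socket; a sketch (socket path taken from the log, commands standard):

```sh
# pull one image explicitly through containerd's CRI endpoint
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
  pull registry.k8s.io/kube-apiserver:v1.32.10

# or let kubeadm fetch the whole image set for the target version
sudo kubeadm config images pull --kubernetes-version v1.32.10
```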
Dec 12 17:45:35.072436 containerd[1500]: time="2025-12-12T17:45:35.072376365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:45:35.072930 containerd[1500]: time="2025-12-12T17:45:35.072896590Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=26431961" Dec 12 17:45:35.073973 containerd[1500]: time="2025-12-12T17:45:35.073941378Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:45:35.076430 containerd[1500]: time="2025-12-12T17:45:35.076398143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:45:35.078472 containerd[1500]: time="2025-12-12T17:45:35.078429146Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 1.73229411s" Dec 12 17:45:35.078517 containerd[1500]: time="2025-12-12T17:45:35.078469773Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Dec 12 17:45:35.079258 containerd[1500]: time="2025-12-12T17:45:35.079165913Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 12 17:45:36.287174 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 17:45:36.289908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:45:36.424825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 12 17:45:36.428603 (kubelet)[2020]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:45:36.460889 containerd[1500]: time="2025-12-12T17:45:36.460835006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:45:36.461662 containerd[1500]: time="2025-12-12T17:45:36.461624975Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22618957" Dec 12 17:45:36.462749 containerd[1500]: time="2025-12-12T17:45:36.462704811Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:45:36.464776 containerd[1500]: time="2025-12-12T17:45:36.464741329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:45:36.473930 containerd[1500]: time="2025-12-12T17:45:36.473811755Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.394603976s" Dec 12 17:45:36.473930 containerd[1500]: time="2025-12-12T17:45:36.473846396Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Dec 12 17:45:36.474347 containerd[1500]: time="2025-12-12T17:45:36.474234532Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 12 17:45:36.499500 kubelet[2020]: E1212 17:45:36.499447 2020 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:45:36.502468 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:45:36.502606 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:45:36.503100 systemd[1]: kubelet.service: Consumed 149ms CPU time, 108.3M memory peak. 
Dec 12 17:45:37.681681 containerd[1500]: time="2025-12-12T17:45:37.681629254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:45:37.682171 containerd[1500]: time="2025-12-12T17:45:37.682105859Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17618438" Dec 12 17:45:37.683099 containerd[1500]: time="2025-12-12T17:45:37.683069903Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:45:37.685377 containerd[1500]: time="2025-12-12T17:45:37.685354878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:45:37.687164 containerd[1500]: time="2025-12-12T17:45:37.687131257Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.212866578s" Dec 12 17:45:37.687223 containerd[1500]: time="2025-12-12T17:45:37.687166595Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Dec 12 17:45:37.687612 containerd[1500]: time="2025-12-12T17:45:37.687579177Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 12 17:45:38.708778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount618227869.mount: Deactivated successfully. 
Dec 12 17:45:38.933759 containerd[1500]: time="2025-12-12T17:45:38.933689202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:45:38.934825 containerd[1500]: time="2025-12-12T17:45:38.934793599Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27561801" Dec 12 17:45:38.936094 containerd[1500]: time="2025-12-12T17:45:38.935769528Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:45:38.939704 containerd[1500]: time="2025-12-12T17:45:38.939671730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:45:38.940360 containerd[1500]: time="2025-12-12T17:45:38.940317141Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.252707242s" Dec 12 17:45:38.940411 containerd[1500]: time="2025-12-12T17:45:38.940363697Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Dec 12 17:45:38.940915 containerd[1500]: time="2025-12-12T17:45:38.940828343Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 12 17:45:39.528681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount557160685.mount: Deactivated successfully. 
Dec 12 17:45:40.408687 containerd[1500]: time="2025-12-12T17:45:40.408636293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:45:40.409730 containerd[1500]: time="2025-12-12T17:45:40.409473960Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Dec 12 17:45:40.410560 containerd[1500]: time="2025-12-12T17:45:40.410516477Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:45:40.414216 containerd[1500]: time="2025-12-12T17:45:40.414167218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:45:40.415459 containerd[1500]: time="2025-12-12T17:45:40.415399105Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.474536075s" Dec 12 17:45:40.415459 containerd[1500]: time="2025-12-12T17:45:40.415436724Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Dec 12 17:45:40.415903 containerd[1500]: time="2025-12-12T17:45:40.415876980Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 12 17:45:40.960347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount655840230.mount: Deactivated successfully. 
Dec 12 17:45:40.965126 containerd[1500]: time="2025-12-12T17:45:40.965074680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:45:40.965818 containerd[1500]: time="2025-12-12T17:45:40.965771845Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Dec 12 17:45:40.966776 containerd[1500]: time="2025-12-12T17:45:40.966512374Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:45:40.969125 containerd[1500]: time="2025-12-12T17:45:40.969082581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:45:40.969840 containerd[1500]: time="2025-12-12T17:45:40.969806235Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 553.901929ms" Dec 12 17:45:40.969886 containerd[1500]: time="2025-12-12T17:45:40.969839146Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 12 17:45:40.970373 containerd[1500]: time="2025-12-12T17:45:40.970334215Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 12 17:45:41.588862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1748811623.mount: Deactivated successfully. 
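Note the extra io.cri-containerd.pinned label on the pause image above: pinned images are exempt from kubelet image garbage collection, since every pod sandbox depends on pause. One way to confirm the flag (exact output shape varies by crictl version):

```sh
sudo crictl inspecti registry.k8s.io/pause:3.10 | grep -i pinned
```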
Dec 12 17:45:43.209665 containerd[1500]: time="2025-12-12T17:45:43.209612076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:45:43.210423 containerd[1500]: time="2025-12-12T17:45:43.210371509Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Dec 12 17:45:43.211090 containerd[1500]: time="2025-12-12T17:45:43.211048171Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:45:43.214821 containerd[1500]: time="2025-12-12T17:45:43.213996025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:45:43.215234 containerd[1500]: time="2025-12-12T17:45:43.215193270Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.244825058s" Dec 12 17:45:43.215291 containerd[1500]: time="2025-12-12T17:45:43.215237949Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Dec 12 17:45:46.513913 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 17:45:46.515446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:45:46.670464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:45:46.682072 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:45:46.723770 kubelet[2181]: E1212 17:45:46.723680 2181 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:45:46.726372 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:45:46.726621 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:45:46.727039 systemd[1]: kubelet.service: Consumed 146ms CPU time, 106M memory peak. Dec 12 17:45:48.212036 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:45:48.212188 systemd[1]: kubelet.service: Consumed 146ms CPU time, 106M memory peak. Dec 12 17:45:48.214236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:45:48.240336 systemd[1]: Reload requested from client PID 2196 ('systemctl') (unit session-7.scope)... Dec 12 17:45:48.240354 systemd[1]: Reloading... Dec 12 17:45:48.311780 zram_generator::config[2241]: No configuration found. Dec 12 17:45:48.576245 systemd[1]: Reloading finished in 335 ms. Dec 12 17:45:48.624630 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:45:48.628105 systemd[1]: kubelet.service: Deactivated successfully. 
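The daemon reload issued from session-7 and the KUBELET_EXTRA_ARGS / KUBELET_KUBEADM_ARGS warnings that follow match the stock kubeadm systemd drop-in, which expands those variables into the kubelet command line. It conventionally looks roughly like this (typical kubeadm layout; exact paths differ per distro):

```ini
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# written by kubeadm at init/join time; sourced only if present
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# admin overrides; unset here, hence the journal warning
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
```

The deprecated --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir flags warned about below would arrive through exactly this kubeadm-flags.env mechanism.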
Dec 12 17:45:48.629794 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:45:48.629856 systemd[1]: kubelet.service: Consumed 99ms CPU time, 95.1M memory peak. Dec 12 17:45:48.631332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:45:48.765343 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:45:48.769378 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:45:48.804241 kubelet[2285]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:45:48.804241 kubelet[2285]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:45:48.804241 kubelet[2285]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:45:48.804573 kubelet[2285]: I1212 17:45:48.804283 2285 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:45:49.928926 kubelet[2285]: I1212 17:45:49.928878 2285 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 17:45:49.928926 kubelet[2285]: I1212 17:45:49.928909 2285 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:45:49.929365 kubelet[2285]: I1212 17:45:49.929313 2285 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 17:45:49.951800 kubelet[2285]: E1212 17:45:49.951738 2285 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:45:49.954777 kubelet[2285]: I1212 17:45:49.954430 2285 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:45:49.959012 kubelet[2285]: I1212 17:45:49.958991 2285 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:45:49.961750 kubelet[2285]: I1212 17:45:49.961714 2285 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 12 17:45:49.962480 kubelet[2285]: I1212 17:45:49.962420 2285 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:45:49.962671 kubelet[2285]: I1212 17:45:49.962472 2285 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:45:49.962793 kubelet[2285]: I1212 17:45:49.962742 2285 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:45:49.962793 kubelet[2285]: I1212 17:45:49.962768 2285 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 17:45:49.962992 kubelet[2285]: I1212 17:45:49.962958 2285 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:45:49.965522 kubelet[2285]: I1212 17:45:49.965489 2285 kubelet.go:446] "Attempting to sync node with API server" Dec 12 17:45:49.965522 kubelet[2285]: I1212 17:45:49.965516 2285 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:45:49.965576 kubelet[2285]: I1212 17:45:49.965540 2285 kubelet.go:352] "Adding apiserver pod source" Dec 12 17:45:49.965576 kubelet[2285]: I1212 17:45:49.965559 2285 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:45:49.970022 kubelet[2285]: W1212 17:45:49.969149 2285 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Dec 12 17:45:49.970022 kubelet[2285]: E1212 17:45:49.969216 2285 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:45:49.970022 kubelet[2285]: W1212 17:45:49.969356 2285 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Dec 12 17:45:49.970022 kubelet[2285]: E1212 17:45:49.969407 2285 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.124:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:45:49.970204 kubelet[2285]: I1212 17:45:49.970081 2285 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 17:45:49.970849 kubelet[2285]: I1212 17:45:49.970818 2285 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 17:45:49.970962 kubelet[2285]: W1212 17:45:49.970947 2285 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 17:45:49.972099 kubelet[2285]: I1212 17:45:49.972071 2285 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:45:49.972159 kubelet[2285]: I1212 17:45:49.972111 2285 server.go:1287] "Started kubelet" Dec 12 17:45:49.972229 kubelet[2285]: I1212 17:45:49.972181 2285 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:45:49.972663 kubelet[2285]: I1212 17:45:49.972633 2285 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:45:49.974108 kubelet[2285]: I1212 17:45:49.973846 2285 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:45:49.974667 kubelet[2285]: I1212 17:45:49.974639 2285 server.go:479] "Adding debug handlers to kubelet server" Dec 12 17:45:49.974960 kubelet[2285]: I1212 17:45:49.974919 2285 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:45:49.976462 kubelet[2285]: I1212 17:45:49.976439 2285 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:45:49.977038 kubelet[2285]: E1212 17:45:49.977017 2285 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:45:49.977087 kubelet[2285]: I1212 17:45:49.977046 2285 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:45:49.977338 kubelet[2285]: I1212 17:45:49.977191 2285 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:45:49.977338 kubelet[2285]: I1212 17:45:49.977246 2285 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:45:49.977623 kubelet[2285]: W1212 17:45:49.977562 2285 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Dec 12 17:45:49.977623 kubelet[2285]: E1212 17:45:49.977607 2285 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection 
refused" logger="UnhandledError" Dec 12 17:45:49.978085 kubelet[2285]: I1212 17:45:49.978064 2285 factory.go:221] Registration of the systemd container factory successfully Dec 12 17:45:49.978160 kubelet[2285]: I1212 17:45:49.978137 2285 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:45:49.980508 kubelet[2285]: I1212 17:45:49.978902 2285 factory.go:221] Registration of the containerd container factory successfully Dec 12 17:45:49.980508 kubelet[2285]: E1212 17:45:49.979107 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="200ms" Dec 12 17:45:49.980508 kubelet[2285]: E1212 17:45:49.979333 2285 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.124:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.124:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.188088e0172ca92b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 17:45:49.972089131 +0000 UTC m=+1.199510495,LastTimestamp:2025-12-12 17:45:49.972089131 +0000 UTC m=+1.199510495,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 12 17:45:49.989040 kubelet[2285]: I1212 17:45:49.989017 2285 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:45:49.989040 kubelet[2285]: I1212 17:45:49.989034 2285 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:45:49.989132 kubelet[2285]: I1212 17:45:49.989050 2285 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:45:49.990464 kubelet[2285]: I1212 17:45:49.990411 2285 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 17:45:49.991457 kubelet[2285]: I1212 17:45:49.991422 2285 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 12 17:45:49.991457 kubelet[2285]: I1212 17:45:49.991452 2285 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 17:45:49.991518 kubelet[2285]: I1212 17:45:49.991473 2285 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 12 17:45:49.991518 kubelet[2285]: I1212 17:45:49.991482 2285 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 17:45:49.991558 kubelet[2285]: E1212 17:45:49.991520 2285 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:45:50.078207 kubelet[2285]: E1212 17:45:50.078158 2285 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:45:50.092467 kubelet[2285]: E1212 17:45:50.092427 2285 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 12 17:45:50.119751 kubelet[2285]: W1212 17:45:50.119688 2285 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Dec 12 17:45:50.119812 kubelet[2285]: E1212 17:45:50.119781 2285 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.124:6443: connect: connection refused" logger="UnhandledError" Dec 12 17:45:50.119977 kubelet[2285]: I1212 17:45:50.119946 2285 policy_none.go:49] "None policy: Start" Dec 12 17:45:50.119977 kubelet[2285]: I1212 17:45:50.119969 2285 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:45:50.120023 kubelet[2285]: I1212 17:45:50.119982 2285 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:45:50.124661 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 17:45:50.141555 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 17:45:50.144641 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 17:45:50.161539 kubelet[2285]: I1212 17:45:50.161511 2285 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 17:45:50.162075 kubelet[2285]: I1212 17:45:50.161723 2285 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:45:50.162075 kubelet[2285]: I1212 17:45:50.161741 2285 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:45:50.162075 kubelet[2285]: I1212 17:45:50.162022 2285 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:45:50.162808 kubelet[2285]: E1212 17:45:50.162786 2285 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:45:50.162923 kubelet[2285]: E1212 17:45:50.162909 2285 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 12 17:45:50.180153 kubelet[2285]: E1212 17:45:50.180051 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="400ms" Dec 12 17:45:50.263385 kubelet[2285]: I1212 17:45:50.263345 2285 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:45:50.263832 kubelet[2285]: E1212 17:45:50.263797 2285 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Dec 12 17:45:50.302660 systemd[1]: Created slice kubepods-burstable-podcfa35905cea47101e8693ad2b2d8f378.slice - libcontainer container kubepods-burstable-podcfa35905cea47101e8693ad2b2d8f378.slice. Dec 12 17:45:50.312581 kubelet[2285]: E1212 17:45:50.312545 2285 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:45:50.315552 systemd[1]: Created slice kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice - libcontainer container kubepods-burstable-pod55d9ac750f8c9141f337af8b08cf5c9d.slice. Dec 12 17:45:50.322878 kubelet[2285]: E1212 17:45:50.322857 2285 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:45:50.325615 systemd[1]: Created slice kubepods-burstable-pod0a68423804124305a9de061f38780871.slice - libcontainer container kubepods-burstable-pod0a68423804124305a9de061f38780871.slice. 
Dec 12 17:45:50.327284 kubelet[2285]: E1212 17:45:50.327266 2285 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:45:50.379607 kubelet[2285]: I1212 17:45:50.379574 2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfa35905cea47101e8693ad2b2d8f378-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cfa35905cea47101e8693ad2b2d8f378\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:45:50.379719 kubelet[2285]: I1212 17:45:50.379614 2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:45:50.379719 kubelet[2285]: I1212 17:45:50.379633 2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:45:50.379719 kubelet[2285]: I1212 17:45:50.379655 2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:45:50.379719 kubelet[2285]: I1212 17:45:50.379673 2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfa35905cea47101e8693ad2b2d8f378-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cfa35905cea47101e8693ad2b2d8f378\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:45:50.379719 kubelet[2285]: I1212 17:45:50.379688 2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:45:50.379931 kubelet[2285]: I1212 17:45:50.379703 2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:45:50.379931 kubelet[2285]: I1212 17:45:50.379718 2285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:45:50.379931 kubelet[2285]: I1212 17:45:50.379735 2285 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfa35905cea47101e8693ad2b2d8f378-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cfa35905cea47101e8693ad2b2d8f378\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:45:50.465890 kubelet[2285]: I1212 17:45:50.465810 2285 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:45:50.466254 kubelet[2285]: E1212 17:45:50.466202 2285 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Dec 12 17:45:50.580571 kubelet[2285]: E1212 17:45:50.580517 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="800ms" Dec 12 17:45:50.614792 containerd[1500]: time="2025-12-12T17:45:50.614515088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cfa35905cea47101e8693ad2b2d8f378,Namespace:kube-system,Attempt:0,}" Dec 12 17:45:50.624184 containerd[1500]: time="2025-12-12T17:45:50.624150164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,}" Dec 12 17:45:50.628652 containerd[1500]: time="2025-12-12T17:45:50.628500852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,}" Dec 12 17:45:50.633166 containerd[1500]: time="2025-12-12T17:45:50.633118951Z" level=info msg="connecting to shim f27079f5a66a4bbc11e25b69299bff84294f369ec4c7d0b003b79232620373a2" address="unix:///run/containerd/s/a231b6f3ebac6c27d0efdd46c4ff88769f6e10869e355e3522a39f5a1152f809" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:45:50.648085 containerd[1500]: time="2025-12-12T17:45:50.647936568Z" level=info msg="connecting to shim fd69d5a23057454f07b454fd635c9b6d576dc9b243a67e4813dc91132774634a" address="unix:///run/containerd/s/5f48611da6ce9464f7f62af1085138c4e615ae74c6e9f438efa282918327ba0f" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:45:50.654930 systemd[1]: Started cri-containerd-f27079f5a66a4bbc11e25b69299bff84294f369ec4c7d0b003b79232620373a2.scope - libcontainer container f27079f5a66a4bbc11e25b69299bff84294f369ec4c7d0b003b79232620373a2. Dec 12 17:45:50.666129 containerd[1500]: time="2025-12-12T17:45:50.665986583Z" level=info msg="connecting to shim 3aa8363d94d47fb49ebe2667db16911bb7d2624e796289c6d708e77ab267ae2c" address="unix:///run/containerd/s/af75cdefbd7e3d9a287ca15c68e7e847b2aee1b1cab7d8b5ac103772ffedd8ad" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:45:50.678924 systemd[1]: Started cri-containerd-fd69d5a23057454f07b454fd635c9b6d576dc9b243a67e4813dc91132774634a.scope - libcontainer container fd69d5a23057454f07b454fd635c9b6d576dc9b243a67e4813dc91132774634a. Dec 12 17:45:50.690903 systemd[1]: Started cri-containerd-3aa8363d94d47fb49ebe2667db16911bb7d2624e796289c6d708e77ab267ae2c.scope - libcontainer container 3aa8363d94d47fb49ebe2667db16911bb7d2624e796289c6d708e77ab267ae2c. 
Dec 12 17:45:50.698533 containerd[1500]: time="2025-12-12T17:45:50.698492430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cfa35905cea47101e8693ad2b2d8f378,Namespace:kube-system,Attempt:0,} returns sandbox id \"f27079f5a66a4bbc11e25b69299bff84294f369ec4c7d0b003b79232620373a2\"" Dec 12 17:45:50.702562 containerd[1500]: time="2025-12-12T17:45:50.702075140Z" level=info msg="CreateContainer within sandbox \"f27079f5a66a4bbc11e25b69299bff84294f369ec4c7d0b003b79232620373a2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:45:50.717978 containerd[1500]: time="2025-12-12T17:45:50.717879181Z" level=info msg="Container b081ccdb1af4eadc6626e94da7f26d1fe1ea79ce9ab5b110a8207f24fd6ae4c0: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:45:50.728130 containerd[1500]: time="2025-12-12T17:45:50.728088771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:55d9ac750f8c9141f337af8b08cf5c9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd69d5a23057454f07b454fd635c9b6d576dc9b243a67e4813dc91132774634a\"" Dec 12 17:45:50.730699 containerd[1500]: time="2025-12-12T17:45:50.730314640Z" level=info msg="CreateContainer within sandbox \"fd69d5a23057454f07b454fd635c9b6d576dc9b243a67e4813dc91132774634a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:45:50.730699 containerd[1500]: time="2025-12-12T17:45:50.730579213Z" level=info msg="CreateContainer within sandbox \"f27079f5a66a4bbc11e25b69299bff84294f369ec4c7d0b003b79232620373a2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b081ccdb1af4eadc6626e94da7f26d1fe1ea79ce9ab5b110a8207f24fd6ae4c0\"" Dec 12 17:45:50.731403 containerd[1500]: time="2025-12-12T17:45:50.731376770Z" level=info msg="StartContainer for \"b081ccdb1af4eadc6626e94da7f26d1fe1ea79ce9ab5b110a8207f24fd6ae4c0\"" Dec 12 17:45:50.732791 containerd[1500]: time="2025-12-12T17:45:50.732737209Z" level=info msg="connecting to shim b081ccdb1af4eadc6626e94da7f26d1fe1ea79ce9ab5b110a8207f24fd6ae4c0" address="unix:///run/containerd/s/a231b6f3ebac6c27d0efdd46c4ff88769f6e10869e355e3522a39f5a1152f809" protocol=ttrpc version=3 Dec 12 17:45:50.739239 containerd[1500]: time="2025-12-12T17:45:50.739192251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0a68423804124305a9de061f38780871,Namespace:kube-system,Attempt:0,} returns sandbox id \"3aa8363d94d47fb49ebe2667db16911bb7d2624e796289c6d708e77ab267ae2c\"" Dec 12 17:45:50.739502 containerd[1500]: time="2025-12-12T17:45:50.739467617Z" level=info msg="Container f0d65d8b870b1b0ad862b2e711a93036eb26b00d884739e780fb591ba29e9c66: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:45:50.743684 containerd[1500]: time="2025-12-12T17:45:50.743534745Z" level=info msg="CreateContainer within sandbox \"3aa8363d94d47fb49ebe2667db16911bb7d2624e796289c6d708e77ab267ae2c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 17:45:50.745322 containerd[1500]: time="2025-12-12T17:45:50.745284269Z" level=info msg="CreateContainer within sandbox \"fd69d5a23057454f07b454fd635c9b6d576dc9b243a67e4813dc91132774634a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f0d65d8b870b1b0ad862b2e711a93036eb26b00d884739e780fb591ba29e9c66\"" Dec 12 17:45:50.745893 containerd[1500]: time="2025-12-12T17:45:50.745857744Z" level=info msg="StartContainer for 
\"f0d65d8b870b1b0ad862b2e711a93036eb26b00d884739e780fb591ba29e9c66\"" Dec 12 17:45:50.748359 containerd[1500]: time="2025-12-12T17:45:50.748327281Z" level=info msg="connecting to shim f0d65d8b870b1b0ad862b2e711a93036eb26b00d884739e780fb591ba29e9c66" address="unix:///run/containerd/s/5f48611da6ce9464f7f62af1085138c4e615ae74c6e9f438efa282918327ba0f" protocol=ttrpc version=3 Dec 12 17:45:50.752295 containerd[1500]: time="2025-12-12T17:45:50.752265100Z" level=info msg="Container 09a4d6c0a44084f020e3daaba09cd67f2bf472070df26c224827d34dff3b4f57: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:45:50.754940 systemd[1]: Started cri-containerd-b081ccdb1af4eadc6626e94da7f26d1fe1ea79ce9ab5b110a8207f24fd6ae4c0.scope - libcontainer container b081ccdb1af4eadc6626e94da7f26d1fe1ea79ce9ab5b110a8207f24fd6ae4c0. Dec 12 17:45:50.761783 containerd[1500]: time="2025-12-12T17:45:50.760470466Z" level=info msg="CreateContainer within sandbox \"3aa8363d94d47fb49ebe2667db16911bb7d2624e796289c6d708e77ab267ae2c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"09a4d6c0a44084f020e3daaba09cd67f2bf472070df26c224827d34dff3b4f57\"" Dec 12 17:45:50.761783 containerd[1500]: time="2025-12-12T17:45:50.760985223Z" level=info msg="StartContainer for \"09a4d6c0a44084f020e3daaba09cd67f2bf472070df26c224827d34dff3b4f57\"" Dec 12 17:45:50.762042 containerd[1500]: time="2025-12-12T17:45:50.762014016Z" level=info msg="connecting to shim 09a4d6c0a44084f020e3daaba09cd67f2bf472070df26c224827d34dff3b4f57" address="unix:///run/containerd/s/af75cdefbd7e3d9a287ca15c68e7e847b2aee1b1cab7d8b5ac103772ffedd8ad" protocol=ttrpc version=3 Dec 12 17:45:50.773080 systemd[1]: Started cri-containerd-f0d65d8b870b1b0ad862b2e711a93036eb26b00d884739e780fb591ba29e9c66.scope - libcontainer container f0d65d8b870b1b0ad862b2e711a93036eb26b00d884739e780fb591ba29e9c66. Dec 12 17:45:50.785134 systemd[1]: Started cri-containerd-09a4d6c0a44084f020e3daaba09cd67f2bf472070df26c224827d34dff3b4f57.scope - libcontainer container 09a4d6c0a44084f020e3daaba09cd67f2bf472070df26c224827d34dff3b4f57. 
Dec 12 17:45:50.812753 containerd[1500]: time="2025-12-12T17:45:50.812665610Z" level=info msg="StartContainer for \"b081ccdb1af4eadc6626e94da7f26d1fe1ea79ce9ab5b110a8207f24fd6ae4c0\" returns successfully" Dec 12 17:45:50.823394 containerd[1500]: time="2025-12-12T17:45:50.823312852Z" level=info msg="StartContainer for \"f0d65d8b870b1b0ad862b2e711a93036eb26b00d884739e780fb591ba29e9c66\" returns successfully" Dec 12 17:45:50.841028 containerd[1500]: time="2025-12-12T17:45:50.840936967Z" level=info msg="StartContainer for \"09a4d6c0a44084f020e3daaba09cd67f2bf472070df26c224827d34dff3b4f57\" returns successfully" Dec 12 17:45:50.868115 kubelet[2285]: I1212 17:45:50.868080 2285 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:45:50.868489 kubelet[2285]: E1212 17:45:50.868458 2285 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Dec 12 17:45:51.003158 kubelet[2285]: E1212 17:45:51.002916 2285 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:45:51.003975 kubelet[2285]: E1212 17:45:51.003943 2285 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:45:51.007078 kubelet[2285]: E1212 17:45:51.006918 2285 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:45:51.671109 kubelet[2285]: I1212 17:45:51.671075 2285 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:45:52.009560 kubelet[2285]: E1212 17:45:52.009464 2285 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:45:52.010341 kubelet[2285]: E1212 17:45:52.010035 2285 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:45:52.261291 kubelet[2285]: E1212 17:45:52.261171 2285 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 12 17:45:52.276289 kubelet[2285]: I1212 17:45:52.276245 2285 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:45:52.279002 kubelet[2285]: I1212 17:45:52.278805 2285 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:45:52.311579 kubelet[2285]: E1212 17:45:52.311547 2285 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:45:52.311927 kubelet[2285]: I1212 17:45:52.311722 2285 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:45:52.317254 kubelet[2285]: E1212 17:45:52.317221 2285 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 12 17:45:52.317254 kubelet[2285]: I1212 17:45:52.317247 2285 kubelet.go:3194] "Creating a 
mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:45:52.321424 kubelet[2285]: E1212 17:45:52.321392 2285 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 12 17:45:52.972263 kubelet[2285]: I1212 17:45:52.971957 2285 apiserver.go:52] "Watching apiserver" Dec 12 17:45:52.977569 kubelet[2285]: I1212 17:45:52.977547 2285 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:45:53.010483 kubelet[2285]: I1212 17:45:53.010228 2285 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:45:53.012226 kubelet[2285]: E1212 17:45:53.012048 2285 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 12 17:45:54.221110 systemd[1]: Reload requested from client PID 2558 ('systemctl') (unit session-7.scope)... Dec 12 17:45:54.221124 systemd[1]: Reloading... Dec 12 17:45:54.283781 zram_generator::config[2604]: No configuration found. Dec 12 17:45:54.449093 systemd[1]: Reloading finished in 227 ms. Dec 12 17:45:54.469863 kubelet[2285]: I1212 17:45:54.469799 2285 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:45:54.469923 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:45:54.488607 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:45:54.488883 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:45:54.488938 systemd[1]: kubelet.service: Consumed 1.559s CPU time, 127.9M memory peak. Dec 12 17:45:54.490465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:45:54.621373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:45:54.625196 (kubelet)[2643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:45:54.659448 kubelet[2643]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:45:54.659448 kubelet[2643]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:45:54.659448 kubelet[2643]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
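
The repeated "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors above are transient: the static control-plane pods reference the built-in system-node-critical PriorityClass, which the API server only guarantees to exist shortly after it starts serving. A sketch of that object's shape follows; the struct is a hand-rolled stand-in (the real type lives in k8s.io/api/scheduling/v1), and 2000001000 is the documented value of the built-in class.

```go
// priorityclass.go - shape of the object the kubelet was waiting for.
// Hand-rolled stand-in struct, not the real k8s.io/api/scheduling/v1 type.
package main

import (
	"encoding/json"
	"fmt"
)

type PriorityClass struct {
	Name          string `json:"name"`
	Value         int32  `json:"value"`
	GlobalDefault bool   `json:"globalDefault"`
	Description   string `json:"description"`
}

func main() {
	pc := PriorityClass{
		Name:          "system-node-critical",
		Value:         2000001000, // documented value of the built-in class
		GlobalDefault: false,
		Description:   "Used for system critical pods that must not be moved from their current node.",
	}
	b, _ := json.MarshalIndent(pc, "", "  ")
	fmt.Println(string(b))
}
```
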
Dec 12 17:45:54.659775 kubelet[2643]: I1212 17:45:54.659507 2643 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:45:54.668826 kubelet[2643]: I1212 17:45:54.668415 2643 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 12 17:45:54.668826 kubelet[2643]: I1212 17:45:54.668442 2643 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:45:54.668826 kubelet[2643]: I1212 17:45:54.668685 2643 server.go:954] "Client rotation is on, will bootstrap in background" Dec 12 17:45:54.670184 kubelet[2643]: I1212 17:45:54.670158 2643 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 12 17:45:54.673000 kubelet[2643]: I1212 17:45:54.672973 2643 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:45:54.676219 kubelet[2643]: I1212 17:45:54.676200 2643 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:45:54.681090 kubelet[2643]: I1212 17:45:54.681007 2643 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 12 17:45:54.681520 kubelet[2643]: I1212 17:45:54.681472 2643 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:45:54.681684 kubelet[2643]: I1212 17:45:54.681499 2643 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:45:54.681766 kubelet[2643]: I1212 17:45:54.681695 2643 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:45:54.681766 kubelet[2643]: I1212 17:45:54.681706 2643 container_manager_linux.go:304] "Creating device plugin manager" Dec 12 17:45:54.681766 kubelet[2643]: I1212 17:45:54.681764 2643 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:45:54.681913 kubelet[2643]: I1212 
17:45:54.681898 2643 kubelet.go:446] "Attempting to sync node with API server" Dec 12 17:45:54.681938 kubelet[2643]: I1212 17:45:54.681915 2643 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:45:54.681960 kubelet[2643]: I1212 17:45:54.681939 2643 kubelet.go:352] "Adding apiserver pod source" Dec 12 17:45:54.681960 kubelet[2643]: I1212 17:45:54.681950 2643 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:45:54.682699 kubelet[2643]: I1212 17:45:54.682663 2643 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 17:45:54.683166 kubelet[2643]: I1212 17:45:54.683141 2643 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 12 17:45:54.683671 kubelet[2643]: I1212 17:45:54.683638 2643 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 12 17:45:54.683735 kubelet[2643]: I1212 17:45:54.683675 2643 server.go:1287] "Started kubelet" Dec 12 17:45:54.684070 kubelet[2643]: I1212 17:45:54.684045 2643 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:45:54.684200 kubelet[2643]: I1212 17:45:54.684138 2643 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:45:54.684514 kubelet[2643]: I1212 17:45:54.684484 2643 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:45:54.685119 kubelet[2643]: I1212 17:45:54.685104 2643 server.go:479] "Adding debug handlers to kubelet server" Dec 12 17:45:54.686695 kubelet[2643]: I1212 17:45:54.686677 2643 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:45:54.686959 kubelet[2643]: I1212 17:45:54.686881 2643 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:45:54.687858 kubelet[2643]: E1212 17:45:54.687834 2643 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:45:54.687913 kubelet[2643]: I1212 17:45:54.687870 2643 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 12 17:45:54.691758 kubelet[2643]: I1212 17:45:54.688030 2643 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 12 17:45:54.691758 kubelet[2643]: I1212 17:45:54.688147 2643 reconciler.go:26] "Reconciler: start to sync state" Dec 12 17:45:54.691758 kubelet[2643]: E1212 17:45:54.690162 2643 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:45:54.691938 kubelet[2643]: I1212 17:45:54.691922 2643 factory.go:221] Registration of the systemd container factory successfully Dec 12 17:45:54.692110 kubelet[2643]: I1212 17:45:54.692083 2643 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:45:54.693174 kubelet[2643]: I1212 17:45:54.693156 2643 factory.go:221] Registration of the containerd container factory successfully Dec 12 17:45:54.697898 kubelet[2643]: I1212 17:45:54.697851 2643 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 12 17:45:54.705495 kubelet[2643]: I1212 17:45:54.704975 2643 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 12 17:45:54.705495 kubelet[2643]: I1212 17:45:54.705006 2643 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 12 17:45:54.705495 kubelet[2643]: I1212 17:45:54.705025 2643 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 17:45:54.705495 kubelet[2643]: I1212 17:45:54.705032 2643 kubelet.go:2382] "Starting kubelet main sync loop" Dec 12 17:45:54.705495 kubelet[2643]: E1212 17:45:54.705078 2643 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:45:54.738542 kubelet[2643]: I1212 17:45:54.738520 2643 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:45:54.738542 kubelet[2643]: I1212 17:45:54.738537 2643 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:45:54.738701 kubelet[2643]: I1212 17:45:54.738556 2643 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:45:54.738735 kubelet[2643]: I1212 17:45:54.738712 2643 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 17:45:54.738771 kubelet[2643]: I1212 17:45:54.738726 2643 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 17:45:54.739740 kubelet[2643]: I1212 17:45:54.739700 2643 policy_none.go:49] "None policy: Start" Dec 12 17:45:54.739740 kubelet[2643]: I1212 17:45:54.739730 2643 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 12 17:45:54.739740 kubelet[2643]: I1212 17:45:54.739753 2643 state_mem.go:35] "Initializing new in-memory state store" Dec 12 17:45:54.739882 kubelet[2643]: I1212 17:45:54.739865 2643 state_mem.go:75] "Updated machine memory state" Dec 12 17:45:54.744444 kubelet[2643]: I1212 17:45:54.744153 2643 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 12 17:45:54.744444 kubelet[2643]: I1212 17:45:54.744298 2643 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:45:54.744444 kubelet[2643]: I1212 17:45:54.744309 2643 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:45:54.744444 kubelet[2643]: I1212 17:45:54.744431 2643 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:45:54.745349 kubelet[2643]: E1212 17:45:54.745333 2643 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:45:54.806263 kubelet[2643]: I1212 17:45:54.806208 2643 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:45:54.806403 kubelet[2643]: I1212 17:45:54.806380 2643 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Dec 12 17:45:54.806503 kubelet[2643]: I1212 17:45:54.806487 2643 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:45:54.845722 kubelet[2643]: I1212 17:45:54.845681 2643 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:45:54.852095 kubelet[2643]: I1212 17:45:54.852062 2643 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Dec 12 17:45:54.852190 kubelet[2643]: I1212 17:45:54.852174 2643 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Dec 12 17:45:54.888949 kubelet[2643]: I1212 17:45:54.888898 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a68423804124305a9de061f38780871-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0a68423804124305a9de061f38780871\") " pod="kube-system/kube-scheduler-localhost" Dec 12 17:45:54.888949 kubelet[2643]: I1212 17:45:54.888941 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cfa35905cea47101e8693ad2b2d8f378-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cfa35905cea47101e8693ad2b2d8f378\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:45:54.889101 kubelet[2643]: I1212 17:45:54.888965 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cfa35905cea47101e8693ad2b2d8f378-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cfa35905cea47101e8693ad2b2d8f378\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:45:54.889101 kubelet[2643]: I1212 17:45:54.888986 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:45:54.889101 kubelet[2643]: I1212 17:45:54.889005 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:45:54.889101 kubelet[2643]: I1212 17:45:54.889023 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:45:54.889101 kubelet[2643]: I1212 17:45:54.889038 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cfa35905cea47101e8693ad2b2d8f378-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cfa35905cea47101e8693ad2b2d8f378\") " pod="kube-system/kube-apiserver-localhost" Dec 12 17:45:54.889202 kubelet[2643]: I1212 17:45:54.889052 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:45:54.889202 kubelet[2643]: I1212 17:45:54.889068 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55d9ac750f8c9141f337af8b08cf5c9d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"55d9ac750f8c9141f337af8b08cf5c9d\") " pod="kube-system/kube-controller-manager-localhost" Dec 12 17:45:55.683182 kubelet[2643]: I1212 17:45:55.683143 2643 apiserver.go:52] "Watching apiserver" Dec 12 17:45:55.688702 kubelet[2643]: I1212 17:45:55.688673 2643 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 12 17:45:55.725685 kubelet[2643]: I1212 17:45:55.725654 2643 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Dec 12 17:45:55.726036 kubelet[2643]: I1212 17:45:55.726018 2643 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Dec 12 17:45:55.732968 kubelet[2643]: E1212 17:45:55.732932 2643 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 12 17:45:55.734295 kubelet[2643]: E1212 17:45:55.734249 2643 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 12 17:45:55.759066 kubelet[2643]: I1212 17:45:55.758985 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.758967859 podStartE2EDuration="1.758967859s" podCreationTimestamp="2025-12-12 17:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:45:55.75172624 +0000 UTC m=+1.123749046" watchObservedRunningTime="2025-12-12 17:45:55.758967859 +0000 UTC m=+1.130990705" Dec 12 17:45:55.767205 kubelet[2643]: I1212 17:45:55.767149 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.7671326139999999 podStartE2EDuration="1.767132614s" podCreationTimestamp="2025-12-12 17:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:45:55.759226725 +0000 UTC m=+1.131249571" watchObservedRunningTime="2025-12-12 17:45:55.767132614 +0000 UTC m=+1.139155460" Dec 12 17:45:55.776005 kubelet[2643]: I1212 17:45:55.775967 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7759544379999999 podStartE2EDuration="1.775954438s" podCreationTimestamp="2025-12-12 17:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:45:55.767424628 +0000 UTC m=+1.139447474" watchObservedRunningTime="2025-12-12 17:45:55.775954438 +0000 UTC m=+1.147977244" Dec 12 17:46:01.024017 kubelet[2643]: I1212 17:46:01.023972 2643 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 17:46:01.024420 containerd[1500]: time="2025-12-12T17:46:01.024318587Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 17:46:01.025179 kubelet[2643]: I1212 17:46:01.024707 2643 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 17:46:01.970106 systemd[1]: Created slice kubepods-besteffort-pod23ef27a3_3031_40fa_aa7a_5110ddd96d7b.slice - libcontainer container kubepods-besteffort-pod23ef27a3_3031_40fa_aa7a_5110ddd96d7b.slice. Dec 12 17:46:02.046005 kubelet[2643]: I1212 17:46:02.045837 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/23ef27a3-3031-40fa-aa7a-5110ddd96d7b-xtables-lock\") pod \"kube-proxy-7vm4m\" (UID: \"23ef27a3-3031-40fa-aa7a-5110ddd96d7b\") " pod="kube-system/kube-proxy-7vm4m" Dec 12 17:46:02.046005 kubelet[2643]: I1212 17:46:02.045910 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lzht\" (UniqueName: \"kubernetes.io/projected/23ef27a3-3031-40fa-aa7a-5110ddd96d7b-kube-api-access-6lzht\") pod \"kube-proxy-7vm4m\" (UID: \"23ef27a3-3031-40fa-aa7a-5110ddd96d7b\") " pod="kube-system/kube-proxy-7vm4m" Dec 12 17:46:02.046005 kubelet[2643]: I1212 17:46:02.045971 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/23ef27a3-3031-40fa-aa7a-5110ddd96d7b-kube-proxy\") pod \"kube-proxy-7vm4m\" (UID: \"23ef27a3-3031-40fa-aa7a-5110ddd96d7b\") " pod="kube-system/kube-proxy-7vm4m" Dec 12 17:46:02.046005 kubelet[2643]: I1212 17:46:02.045994 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/23ef27a3-3031-40fa-aa7a-5110ddd96d7b-lib-modules\") pod \"kube-proxy-7vm4m\" (UID: \"23ef27a3-3031-40fa-aa7a-5110ddd96d7b\") " pod="kube-system/kube-proxy-7vm4m" Dec 12 17:46:02.080279 systemd[1]: Created slice kubepods-besteffort-pod0eecd9e5_2fb0_4760_8f76_3987c6174881.slice - libcontainer container kubepods-besteffort-pod0eecd9e5_2fb0_4760_8f76_3987c6174881.slice. 
Dec 12 17:46:02.147106 kubelet[2643]: I1212 17:46:02.146960 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9hwv\" (UniqueName: \"kubernetes.io/projected/0eecd9e5-2fb0-4760-8f76-3987c6174881-kube-api-access-n9hwv\") pod \"tigera-operator-7dcd859c48-cdvcv\" (UID: \"0eecd9e5-2fb0-4760-8f76-3987c6174881\") " pod="tigera-operator/tigera-operator-7dcd859c48-cdvcv" Dec 12 17:46:02.147106 kubelet[2643]: I1212 17:46:02.147026 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0eecd9e5-2fb0-4760-8f76-3987c6174881-var-lib-calico\") pod \"tigera-operator-7dcd859c48-cdvcv\" (UID: \"0eecd9e5-2fb0-4760-8f76-3987c6174881\") " pod="tigera-operator/tigera-operator-7dcd859c48-cdvcv" Dec 12 17:46:02.290505 containerd[1500]: time="2025-12-12T17:46:02.290447700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7vm4m,Uid:23ef27a3-3031-40fa-aa7a-5110ddd96d7b,Namespace:kube-system,Attempt:0,}" Dec 12 17:46:02.305730 containerd[1500]: time="2025-12-12T17:46:02.305670655Z" level=info msg="connecting to shim 1da3f56c029d2ce7bbd445fc6da7618c274bcd0f33e94dc7150c4bdc74fdbb33" address="unix:///run/containerd/s/5c432cde87b3357f7b29fd8937fbfbe8232c0d4425c995c361bdd5aea061c6c4" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:46:02.335935 systemd[1]: Started cri-containerd-1da3f56c029d2ce7bbd445fc6da7618c274bcd0f33e94dc7150c4bdc74fdbb33.scope - libcontainer container 1da3f56c029d2ce7bbd445fc6da7618c274bcd0f33e94dc7150c4bdc74fdbb33. Dec 12 17:46:02.357381 containerd[1500]: time="2025-12-12T17:46:02.357338062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7vm4m,Uid:23ef27a3-3031-40fa-aa7a-5110ddd96d7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1da3f56c029d2ce7bbd445fc6da7618c274bcd0f33e94dc7150c4bdc74fdbb33\"" Dec 12 17:46:02.361752 containerd[1500]: time="2025-12-12T17:46:02.361712072Z" level=info msg="CreateContainer within sandbox \"1da3f56c029d2ce7bbd445fc6da7618c274bcd0f33e94dc7150c4bdc74fdbb33\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 17:46:02.374129 containerd[1500]: time="2025-12-12T17:46:02.374082512Z" level=info msg="Container 899d308e36ad3e35e3bebfc1b4e4f783367966ac3d137a54da80dd0b4390386e: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:46:02.380541 containerd[1500]: time="2025-12-12T17:46:02.380484929Z" level=info msg="CreateContainer within sandbox \"1da3f56c029d2ce7bbd445fc6da7618c274bcd0f33e94dc7150c4bdc74fdbb33\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"899d308e36ad3e35e3bebfc1b4e4f783367966ac3d137a54da80dd0b4390386e\"" Dec 12 17:46:02.382264 containerd[1500]: time="2025-12-12T17:46:02.381449714Z" level=info msg="StartContainer for \"899d308e36ad3e35e3bebfc1b4e4f783367966ac3d137a54da80dd0b4390386e\"" Dec 12 17:46:02.382957 containerd[1500]: time="2025-12-12T17:46:02.382930090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cdvcv,Uid:0eecd9e5-2fb0-4760-8f76-3987c6174881,Namespace:tigera-operator,Attempt:0,}" Dec 12 17:46:02.383765 containerd[1500]: time="2025-12-12T17:46:02.383710117Z" level=info msg="connecting to shim 899d308e36ad3e35e3bebfc1b4e4f783367966ac3d137a54da80dd0b4390386e" address="unix:///run/containerd/s/5c432cde87b3357f7b29fd8937fbfbe8232c0d4425c995c361bdd5aea061c6c4" protocol=ttrpc version=3 Dec 12 17:46:02.403311 containerd[1500]: 
time="2025-12-12T17:46:02.403255522Z" level=info msg="connecting to shim 4e11810b6dc9055ffd1fe9ba9b87ad844633ba6d2de4deb5bb9f538de3aa6e68" address="unix:///run/containerd/s/1daa4e7d0738cb1d1828548c593111feb188f7ad3de7b4fbd3cc17577daffe27" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:46:02.403920 systemd[1]: Started cri-containerd-899d308e36ad3e35e3bebfc1b4e4f783367966ac3d137a54da80dd0b4390386e.scope - libcontainer container 899d308e36ad3e35e3bebfc1b4e4f783367966ac3d137a54da80dd0b4390386e. Dec 12 17:46:02.431935 systemd[1]: Started cri-containerd-4e11810b6dc9055ffd1fe9ba9b87ad844633ba6d2de4deb5bb9f538de3aa6e68.scope - libcontainer container 4e11810b6dc9055ffd1fe9ba9b87ad844633ba6d2de4deb5bb9f538de3aa6e68. Dec 12 17:46:02.469062 containerd[1500]: time="2025-12-12T17:46:02.468999783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cdvcv,Uid:0eecd9e5-2fb0-4760-8f76-3987c6174881,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4e11810b6dc9055ffd1fe9ba9b87ad844633ba6d2de4deb5bb9f538de3aa6e68\"" Dec 12 17:46:02.471631 containerd[1500]: time="2025-12-12T17:46:02.471597341Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 17:46:02.475028 containerd[1500]: time="2025-12-12T17:46:02.474806249Z" level=info msg="StartContainer for \"899d308e36ad3e35e3bebfc1b4e4f783367966ac3d137a54da80dd0b4390386e\" returns successfully" Dec 12 17:46:02.762691 kubelet[2643]: I1212 17:46:02.761884 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7vm4m" podStartSLOduration=1.761866704 podStartE2EDuration="1.761866704s" podCreationTimestamp="2025-12-12 17:46:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:46:02.753537678 +0000 UTC m=+8.125560524" watchObservedRunningTime="2025-12-12 17:46:02.761866704 +0000 UTC m=+8.133889550" Dec 12 17:46:03.629081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1067075452.mount: Deactivated successfully. 
Dec 12 17:46:03.931874 containerd[1500]: time="2025-12-12T17:46:03.931755966Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:46:03.932535 containerd[1500]: time="2025-12-12T17:46:03.932501274Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Dec 12 17:46:03.933192 containerd[1500]: time="2025-12-12T17:46:03.933136345Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:46:03.935994 containerd[1500]: time="2025-12-12T17:46:03.935957782Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:46:03.936782 containerd[1500]: time="2025-12-12T17:46:03.936740770Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.465109589s" Dec 12 17:46:03.936920 containerd[1500]: time="2025-12-12T17:46:03.936785369Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 12 17:46:03.938693 containerd[1500]: time="2025-12-12T17:46:03.938665300Z" level=info msg="CreateContainer within sandbox \"4e11810b6dc9055ffd1fe9ba9b87ad844633ba6d2de4deb5bb9f538de3aa6e68\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 17:46:03.943938 containerd[1500]: time="2025-12-12T17:46:03.943903700Z" level=info msg="Container d806308fb9de68d483f5886fd4994301794974c246fa7b0659bf7f1ecdb24c06: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:46:03.953306 containerd[1500]: time="2025-12-12T17:46:03.953255118Z" level=info msg="CreateContainer within sandbox \"4e11810b6dc9055ffd1fe9ba9b87ad844633ba6d2de4deb5bb9f538de3aa6e68\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d806308fb9de68d483f5886fd4994301794974c246fa7b0659bf7f1ecdb24c06\"" Dec 12 17:46:03.953907 containerd[1500]: time="2025-12-12T17:46:03.953873948Z" level=info msg="StartContainer for \"d806308fb9de68d483f5886fd4994301794974c246fa7b0659bf7f1ecdb24c06\"" Dec 12 17:46:03.954798 containerd[1500]: time="2025-12-12T17:46:03.954736575Z" level=info msg="connecting to shim d806308fb9de68d483f5886fd4994301794974c246fa7b0659bf7f1ecdb24c06" address="unix:///run/containerd/s/1daa4e7d0738cb1d1828548c593111feb188f7ad3de7b4fbd3cc17577daffe27" protocol=ttrpc version=3 Dec 12 17:46:03.976951 systemd[1]: Started cri-containerd-d806308fb9de68d483f5886fd4994301794974c246fa7b0659bf7f1ecdb24c06.scope - libcontainer container d806308fb9de68d483f5886fd4994301794974c246fa7b0659bf7f1ecdb24c06. 
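
The pull above reads 22152004 bytes and completes in 1.465109589s, so wire throughput was roughly 15.1 MB/s (about 14.4 MiB/s); the reported image size, 22147999 bytes, is a separately computed counter and differs slightly. The arithmetic:

```go
// pullrate.go - back-of-the-envelope throughput for the operator image
// pull; both inputs are copied from the containerd log lines above.
package main

import "fmt"

func main() {
	const bytesRead = 22152004  // "active requests=0, bytes read=22152004"
	const seconds = 1.465109589 // "... in 1.465109589s"

	rate := float64(bytesRead) / seconds
	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", rate/1e6, rate/(1<<20))
}
```
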
Dec 12 17:46:04.004896 containerd[1500]: time="2025-12-12T17:46:04.004861333Z" level=info msg="StartContainer for \"d806308fb9de68d483f5886fd4994301794974c246fa7b0659bf7f1ecdb24c06\" returns successfully" Dec 12 17:46:04.761263 kubelet[2643]: I1212 17:46:04.761122 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-cdvcv" podStartSLOduration=1.294687425 podStartE2EDuration="2.761103834s" podCreationTimestamp="2025-12-12 17:46:02 +0000 UTC" firstStartedPulling="2025-12-12 17:46:02.471154948 +0000 UTC m=+7.843177794" lastFinishedPulling="2025-12-12 17:46:03.937571357 +0000 UTC m=+9.309594203" observedRunningTime="2025-12-12 17:46:04.760961876 +0000 UTC m=+10.132984682" watchObservedRunningTime="2025-12-12 17:46:04.761103834 +0000 UTC m=+10.133126680" Dec 12 17:46:09.407127 kernel: hrtimer: interrupt took 18783031 ns Dec 12 17:46:09.496115 update_engine[1488]: I20251212 17:46:09.495783 1488 update_attempter.cc:509] Updating boot flags... Dec 12 17:46:09.551037 sudo[1712]: pam_unix(sudo:session): session closed for user root Dec 12 17:46:09.555451 sshd[1711]: Connection closed by 10.0.0.1 port 54044 Dec 12 17:46:09.558854 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Dec 12 17:46:09.568600 systemd[1]: sshd@6-10.0.0.124:22-10.0.0.1:54044.service: Deactivated successfully. Dec 12 17:46:09.572659 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 17:46:09.575402 systemd[1]: session-7.scope: Consumed 6.839s CPU time, 220M memory peak. Dec 12 17:46:09.606862 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit. Dec 12 17:46:09.652775 systemd-logind[1483]: Removed session 7. Dec 12 17:46:17.655699 systemd[1]: Created slice kubepods-besteffort-pod7138eb00_4585_4375_968f_68b9937dab5c.slice - libcontainer container kubepods-besteffort-pod7138eb00_4585_4375_968f_68b9937dab5c.slice. Dec 12 17:46:17.751418 kubelet[2643]: I1212 17:46:17.751369 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw6h7\" (UniqueName: \"kubernetes.io/projected/7138eb00-4585-4375-968f-68b9937dab5c-kube-api-access-jw6h7\") pod \"calico-typha-675dbccfc7-vqrlr\" (UID: \"7138eb00-4585-4375-968f-68b9937dab5c\") " pod="calico-system/calico-typha-675dbccfc7-vqrlr" Dec 12 17:46:17.752847 kubelet[2643]: I1212 17:46:17.751428 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7138eb00-4585-4375-968f-68b9937dab5c-typha-certs\") pod \"calico-typha-675dbccfc7-vqrlr\" (UID: \"7138eb00-4585-4375-968f-68b9937dab5c\") " pod="calico-system/calico-typha-675dbccfc7-vqrlr" Dec 12 17:46:17.752847 kubelet[2643]: I1212 17:46:17.751449 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7138eb00-4585-4375-968f-68b9937dab5c-tigera-ca-bundle\") pod \"calico-typha-675dbccfc7-vqrlr\" (UID: \"7138eb00-4585-4375-968f-68b9937dab5c\") " pod="calico-system/calico-typha-675dbccfc7-vqrlr" Dec 12 17:46:17.839098 systemd[1]: Created slice kubepods-besteffort-pode7117263_2510_46d8_a3f8_7dc763c148e3.slice - libcontainer container kubepods-besteffort-pode7117263_2510_46d8_a3f8_7dc763c148e3.slice. 
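
The latency-tracker line for tigera-operator above encodes a simple relationship: podStartSLOduration (1.294687425s) is podStartE2EDuration (2.761103834s) minus the image-pull window (firstStartedPulling to lastFinishedPulling). A check using only values copied from the log:

```go
// slocheck.go - verifies podStartSLOduration = podStartE2EDuration - pull
// time for the tigera-operator pod; all inputs come from the log line.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	firstStartedPulling, _ := time.Parse(layout, "2025-12-12 17:46:02.471154948 +0000 UTC")
	lastFinishedPulling, _ := time.Parse(layout, "2025-12-12 17:46:03.937571357 +0000 UTC")

	e2e := 2761103834 * time.Nanosecond // podStartE2EDuration="2.761103834s"
	pull := lastFinishedPulling.Sub(firstStartedPulling)
	slo := e2e - pull

	fmt.Printf("pull=%v slo=%v (log says 1.294687425s)\n", pull, slo)
}
```
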
Dec 12 17:46:17.852160 kubelet[2643]: I1212 17:46:17.852121 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7117263-2510-46d8-a3f8-7dc763c148e3-xtables-lock\") pod \"calico-node-9vdft\" (UID: \"e7117263-2510-46d8-a3f8-7dc763c148e3\") " pod="calico-system/calico-node-9vdft" Dec 12 17:46:17.852160 kubelet[2643]: I1212 17:46:17.852164 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kggdf\" (UniqueName: \"kubernetes.io/projected/e7117263-2510-46d8-a3f8-7dc763c148e3-kube-api-access-kggdf\") pod \"calico-node-9vdft\" (UID: \"e7117263-2510-46d8-a3f8-7dc763c148e3\") " pod="calico-system/calico-node-9vdft" Dec 12 17:46:17.852288 kubelet[2643]: I1212 17:46:17.852196 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e7117263-2510-46d8-a3f8-7dc763c148e3-flexvol-driver-host\") pod \"calico-node-9vdft\" (UID: \"e7117263-2510-46d8-a3f8-7dc763c148e3\") " pod="calico-system/calico-node-9vdft" Dec 12 17:46:17.852288 kubelet[2643]: I1212 17:46:17.852224 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7117263-2510-46d8-a3f8-7dc763c148e3-tigera-ca-bundle\") pod \"calico-node-9vdft\" (UID: \"e7117263-2510-46d8-a3f8-7dc763c148e3\") " pod="calico-system/calico-node-9vdft" Dec 12 17:46:17.852288 kubelet[2643]: I1212 17:46:17.852239 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e7117263-2510-46d8-a3f8-7dc763c148e3-var-lib-calico\") pod \"calico-node-9vdft\" (UID: \"e7117263-2510-46d8-a3f8-7dc763c148e3\") " pod="calico-system/calico-node-9vdft" Dec 12 17:46:17.852288 kubelet[2643]: I1212 17:46:17.852254 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e7117263-2510-46d8-a3f8-7dc763c148e3-cni-net-dir\") pod \"calico-node-9vdft\" (UID: \"e7117263-2510-46d8-a3f8-7dc763c148e3\") " pod="calico-system/calico-node-9vdft" Dec 12 17:46:17.852288 kubelet[2643]: I1212 17:46:17.852272 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7117263-2510-46d8-a3f8-7dc763c148e3-lib-modules\") pod \"calico-node-9vdft\" (UID: \"e7117263-2510-46d8-a3f8-7dc763c148e3\") " pod="calico-system/calico-node-9vdft" Dec 12 17:46:17.852394 kubelet[2643]: I1212 17:46:17.852288 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e7117263-2510-46d8-a3f8-7dc763c148e3-cni-log-dir\") pod \"calico-node-9vdft\" (UID: \"e7117263-2510-46d8-a3f8-7dc763c148e3\") " pod="calico-system/calico-node-9vdft" Dec 12 17:46:17.852394 kubelet[2643]: I1212 17:46:17.852307 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e7117263-2510-46d8-a3f8-7dc763c148e3-policysync\") pod \"calico-node-9vdft\" (UID: \"e7117263-2510-46d8-a3f8-7dc763c148e3\") " pod="calico-system/calico-node-9vdft" Dec 12 17:46:17.852394 kubelet[2643]: I1212 17:46:17.852322 2643 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e7117263-2510-46d8-a3f8-7dc763c148e3-cni-bin-dir\") pod \"calico-node-9vdft\" (UID: \"e7117263-2510-46d8-a3f8-7dc763c148e3\") " pod="calico-system/calico-node-9vdft" Dec 12 17:46:17.852394 kubelet[2643]: I1212 17:46:17.852335 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e7117263-2510-46d8-a3f8-7dc763c148e3-node-certs\") pod \"calico-node-9vdft\" (UID: \"e7117263-2510-46d8-a3f8-7dc763c148e3\") " pod="calico-system/calico-node-9vdft" Dec 12 17:46:17.852394 kubelet[2643]: I1212 17:46:17.852349 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e7117263-2510-46d8-a3f8-7dc763c148e3-var-run-calico\") pod \"calico-node-9vdft\" (UID: \"e7117263-2510-46d8-a3f8-7dc763c148e3\") " pod="calico-system/calico-node-9vdft" Dec 12 17:46:17.955018 kubelet[2643]: E1212 17:46:17.954916 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:46:17.955018 kubelet[2643]: W1212 17:46:17.954940 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:46:17.955018 kubelet[2643]: E1212 17:46:17.954983 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:46:17.957333 kubelet[2643]: E1212 17:46:17.957279 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:46:17.957333 kubelet[2643]: W1212 17:46:17.957297 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:46:17.957333 kubelet[2643]: E1212 17:46:17.957315 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:46:17.964098 containerd[1500]: time="2025-12-12T17:46:17.963706733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-675dbccfc7-vqrlr,Uid:7138eb00-4585-4375-968f-68b9937dab5c,Namespace:calico-system,Attempt:0,}" Dec 12 17:46:17.964660 kubelet[2643]: E1212 17:46:17.964635 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:46:17.964660 kubelet[2643]: W1212 17:46:17.964652 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:46:17.964764 kubelet[2643]: E1212 17:46:17.964667 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:46:18.013594 containerd[1500]: time="2025-12-12T17:46:18.013549915Z" level=info msg="connecting to shim 626c62f77b15e3b7100fb4824443975cdcda2c4d080a992785328423d925df84" address="unix:///run/containerd/s/e6caf8c58cb436974e4e47967e14e5145cd10ce55f2c3e9125a7e423f48a69ec" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:46:18.022609 kubelet[2643]: E1212 17:46:18.022253 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bpcft" podUID="d7faeb01-da6d-4b13-a976-32d40fd38bd7" Dec 12 17:46:18.031921 kubelet[2643]: E1212 17:46:18.031848 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:46:18.031921 kubelet[2643]: W1212 17:46:18.031867 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:46:18.031921 kubelet[2643]: E1212 17:46:18.031884 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:46:18.032306 kubelet[2643]: E1212 17:46:18.032223 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:46:18.032306 kubelet[2643]: W1212 17:46:18.032234 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:46:18.032306 kubelet[2643]: E1212 17:46:18.032273 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:46:18.032544 kubelet[2643]: E1212 17:46:18.032534 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:46:18.032606 kubelet[2643]: W1212 17:46:18.032595 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:46:18.032659 kubelet[2643]: E1212 17:46:18.032648 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:46:18.032932 kubelet[2643]: E1212 17:46:18.032868 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:46:18.032932 kubelet[2643]: W1212 17:46:18.032879 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:46:18.032932 kubelet[2643]: E1212 17:46:18.032889 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Dec 12 17:46:18.054146 kubelet[2643]: I1212 17:46:18.054093 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d7faeb01-da6d-4b13-a976-32d40fd38bd7-varrun\") pod \"csi-node-driver-bpcft\" (UID: \"d7faeb01-da6d-4b13-a976-32d40fd38bd7\") " pod="calico-system/csi-node-driver-bpcft"
Dec 12 17:46:18.054882 kubelet[2643]: I1212 17:46:18.054873 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d7faeb01-da6d-4b13-a976-32d40fd38bd7-registration-dir\") pod \"csi-node-driver-bpcft\" (UID: \"d7faeb01-da6d-4b13-a976-32d40fd38bd7\") " pod="calico-system/csi-node-driver-bpcft"
Dec 12 17:46:18.055146 kubelet[2643]: I1212 17:46:18.055115 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d7faeb01-da6d-4b13-a976-32d40fd38bd7-kubelet-dir\") pod \"csi-node-driver-bpcft\" (UID: \"d7faeb01-da6d-4b13-a976-32d40fd38bd7\") " pod="calico-system/csi-node-driver-bpcft"
Dec 12 17:46:18.055321 kubelet[2643]: I1212 17:46:18.055319 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qlrm\" (UniqueName: \"kubernetes.io/projected/d7faeb01-da6d-4b13-a976-32d40fd38bd7-kube-api-access-6qlrm\") pod \"csi-node-driver-bpcft\" (UID: \"d7faeb01-da6d-4b13-a976-32d40fd38bd7\") " pod="calico-system/csi-node-driver-bpcft"
Dec 12 17:46:18.055624 kubelet[2643]: I1212 17:46:18.055571 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d7faeb01-da6d-4b13-a976-32d40fd38bd7-socket-dir\") pod \"csi-node-driver-bpcft\" (UID: \"d7faeb01-da6d-4b13-a976-32d40fd38bd7\") " pod="calico-system/csi-node-driver-bpcft"
Dec 12 17:46:18.069926 systemd[1]: Started cri-containerd-626c62f77b15e3b7100fb4824443975cdcda2c4d080a992785328423d925df84.scope - libcontainer container 626c62f77b15e3b7100fb4824443975cdcda2c4d080a992785328423d925df84.
Dec 12 17:46:18.106513 containerd[1500]: time="2025-12-12T17:46:18.106469992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-675dbccfc7-vqrlr,Uid:7138eb00-4585-4375-968f-68b9937dab5c,Namespace:calico-system,Attempt:0,} returns sandbox id \"626c62f77b15e3b7100fb4824443975cdcda2c4d080a992785328423d925df84\""
Dec 12 17:46:18.114109 containerd[1500]: time="2025-12-12T17:46:18.113978297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Dec 12 17:46:18.142691 containerd[1500]: time="2025-12-12T17:46:18.142628726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9vdft,Uid:e7117263-2510-46d8-a3f8-7dc763c148e3,Namespace:calico-system,Attempt:0,}"
Dec 12 17:46:18.162692 containerd[1500]: time="2025-12-12T17:46:18.161703066Z" level=info msg="connecting to shim 128b726b01e1e030566b6349e645e557f357fa5a01436e6da01c031eba4a3c83" address="unix:///run/containerd/s/bc85701b37c06f40005d8f94ce48abed634bf87898b11515aa5fd757538a5d90" namespace=k8s.io protocol=ttrpc version=3
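The "connecting to shim" entries name a plain AF_UNIX socket under /run/containerd/s/ over which containerd speaks ttrpc to the per-sandbox shim (protocol=ttrpc version=3 in the entry). A quick way to confirm such an endpoint is alive is an ordinary unix-socket dial; this sketch only checks connectivity and does not speak the ttrpc protocol itself:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Socket path taken from the "connecting to shim 128b…" entry,
    	// with the unix:// scheme stripped before dialing.
    	addr := "/run/containerd/s/bc85701b37c06f40005d8f94ce48abed634bf87898b11515aa5fd757538a5d90"
    	conn, err := net.DialTimeout("unix", addr, 2*time.Second)
    	if err != nil {
    		fmt.Println("shim socket not reachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("shim socket accepts connections")
    }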
Dec 12 17:46:18.184898 systemd[1]: Started cri-containerd-128b726b01e1e030566b6349e645e557f357fa5a01436e6da01c031eba4a3c83.scope - libcontainer container 128b726b01e1e030566b6349e645e557f357fa5a01436e6da01c031eba4a3c83.
Dec 12 17:46:18.220050 containerd[1500]: time="2025-12-12T17:46:18.219931118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9vdft,Uid:e7117263-2510-46d8-a3f8-7dc763c148e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"128b726b01e1e030566b6349e645e557f357fa5a01436e6da01c031eba4a3c83\""
Dec 12 17:46:18.969333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3707702497.mount: Deactivated successfully.
Dec 12 17:46:19.667619 containerd[1500]: time="2025-12-12T17:46:19.667570519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:46:19.668564 containerd[1500]: time="2025-12-12T17:46:19.668410993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Dec 12 17:46:19.669409 containerd[1500]: time="2025-12-12T17:46:19.669373266Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:46:19.671726 containerd[1500]: time="2025-12-12T17:46:19.671688010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:46:19.672508 containerd[1500]: time="2025-12-12T17:46:19.672475004Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.558455508s"
Dec 12 17:46:19.672508 containerd[1500]: time="2025-12-12T17:46:19.672506644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Dec 12 17:46:19.674849 containerd[1500]: time="2025-12-12T17:46:19.674819708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Dec 12 17:46:19.700116 containerd[1500]: time="2025-12-12T17:46:19.700059290Z" level=info msg="CreateContainer within sandbox \"626c62f77b15e3b7100fb4824443975cdcda2c4d080a992785328423d925df84\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 12 17:46:19.705753 kubelet[2643]: E1212 17:46:19.705708 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bpcft" podUID="d7faeb01-da6d-4b13-a976-32d40fd38bd7"
Dec 12 17:46:19.706774 containerd[1500]: time="2025-12-12T17:46:19.706474885Z" level=info msg="Container 7c7d8207aad881220c0739b8cce36b6d9ade4892676ce3f4c4777b583a461ce4: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:46:19.714318 containerd[1500]: time="2025-12-12T17:46:19.714286910Z" level=info msg="CreateContainer within sandbox \"626c62f77b15e3b7100fb4824443975cdcda2c4d080a992785328423d925df84\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7c7d8207aad881220c0739b8cce36b6d9ade4892676ce3f4c4777b583a461ce4\""
Dec 12 17:46:19.714687 containerd[1500]: time="2025-12-12T17:46:19.714661667Z" level=info msg="StartContainer for \"7c7d8207aad881220c0739b8cce36b6d9ade4892676ce3f4c4777b583a461ce4\""
Dec 12 17:46:19.715856 containerd[1500]: time="2025-12-12T17:46:19.715732020Z" level=info msg="connecting to shim 7c7d8207aad881220c0739b8cce36b6d9ade4892676ce3f4c4777b583a461ce4" address="unix:///run/containerd/s/e6caf8c58cb436974e4e47967e14e5145cd10ce55f2c3e9125a7e423f48a69ec" protocol=ttrpc version=3
Dec 12 17:46:19.748943 systemd[1]: Started cri-containerd-7c7d8207aad881220c0739b8cce36b6d9ade4892676ce3f4c4777b583a461ce4.scope - libcontainer container 7c7d8207aad881220c0739b8cce36b6d9ade4892676ce3f4c4777b583a461ce4.
Dec 12 17:46:19.781570 containerd[1500]: time="2025-12-12T17:46:19.781534956Z" level=info msg="StartContainer for \"7c7d8207aad881220c0739b8cce36b6d9ade4892676ce3f4c4777b583a461ce4\" returns successfully"
Dec 12 17:46:19.856762 kubelet[2643]: E1212 17:46:19.856712 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 17:46:19.857040 kubelet[2643]: W1212 17:46:19.856832 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 17:46:19.857040 kubelet[2643]: E1212 17:46:19.856855 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
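Both "Error syncing pod" entries for csi-node-driver-bpcft trace back to the runtime network not being ready: the kubelet reports NetworkReady=false until a CNI configuration appears, which on this node is calico-node's job once it populates the cni-bin-dir host path mounted earlier. A rough manual readiness check under the conventional paths /etc/cni/net.d and /opt/cni/bin (assumed defaults; a node's containerd config may point elsewhere):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // reportDir lists a directory's entries, mirroring the check an
    // operator would do while "cni plugin not initialized" persists.
    func reportDir(label, dir string) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Printf("%s: %v\n", label, err)
    		return
    	}
    	if len(entries) == 0 {
    		fmt.Printf("%s: empty (CNI still not initialized)\n", label)
    		return
    	}
    	for _, e := range entries {
    		fmt.Println(label+":", filepath.Join(dir, e.Name()))
    	}
    }

    func main() {
    	// Conventional locations; confirm against the node's containerd config.
    	reportDir("conf", "/etc/cni/net.d")
    	reportDir("bin", "/opt/cni/bin")
    }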
Error: unexpected end of JSON input" Dec 12 17:46:19.881622 kubelet[2643]: E1212 17:46:19.881604 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:46:19.881622 kubelet[2643]: W1212 17:46:19.881616 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:46:19.881670 kubelet[2643]: E1212 17:46:19.881632 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:46:19.881911 kubelet[2643]: E1212 17:46:19.881895 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:46:19.881911 kubelet[2643]: W1212 17:46:19.881910 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:46:19.882633 kubelet[2643]: E1212 17:46:19.881928 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:46:19.882633 kubelet[2643]: E1212 17:46:19.882105 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:46:19.882633 kubelet[2643]: W1212 17:46:19.882115 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:46:19.882633 kubelet[2643]: E1212 17:46:19.882125 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:46:19.882633 kubelet[2643]: E1212 17:46:19.882257 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:46:19.882633 kubelet[2643]: W1212 17:46:19.882264 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:46:19.882633 kubelet[2643]: E1212 17:46:19.882272 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:46:20.738765 containerd[1500]: time="2025-12-12T17:46:20.738694300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:46:20.739983 containerd[1500]: time="2025-12-12T17:46:20.739413575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Dec 12 17:46:20.740184 containerd[1500]: time="2025-12-12T17:46:20.740147330Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:46:20.742844 containerd[1500]: time="2025-12-12T17:46:20.742814632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:46:20.743779 containerd[1500]: time="2025-12-12T17:46:20.743693266Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.068838198s" Dec 12 17:46:20.743779 containerd[1500]: time="2025-12-12T17:46:20.743774585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 12 17:46:20.749297 containerd[1500]: time="2025-12-12T17:46:20.749266228Z" level=info msg="CreateContainer within sandbox \"128b726b01e1e030566b6349e645e557f357fa5a01436e6da01c031eba4a3c83\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 17:46:20.756898 containerd[1500]: time="2025-12-12T17:46:20.756866377Z" level=info msg="Container 241378605fbefdb16f770123a9456d991f200ded036426b34ad321db8643502d: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:46:20.763883 containerd[1500]: time="2025-12-12T17:46:20.763849970Z" level=info msg="CreateContainer within sandbox \"128b726b01e1e030566b6349e645e557f357fa5a01436e6da01c031eba4a3c83\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"241378605fbefdb16f770123a9456d991f200ded036426b34ad321db8643502d\"" Dec 12 17:46:20.764274 containerd[1500]: time="2025-12-12T17:46:20.764225247Z" level=info msg="StartContainer for \"241378605fbefdb16f770123a9456d991f200ded036426b34ad321db8643502d\"" Dec 12 17:46:20.765595 containerd[1500]: time="2025-12-12T17:46:20.765559518Z" level=info msg="connecting to shim 241378605fbefdb16f770123a9456d991f200ded036426b34ad321db8643502d" address="unix:///run/containerd/s/bc85701b37c06f40005d8f94ce48abed634bf87898b11515aa5fd757538a5d90" protocol=ttrpc version=3 Dec 12 17:46:20.789953 systemd[1]: Started cri-containerd-241378605fbefdb16f770123a9456d991f200ded036426b34ad321db8643502d.scope - libcontainer container 241378605fbefdb16f770123a9456d991f200ded036426b34ad321db8643502d. 
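The repeated driver-call.go errors above come from the kubelet's dynamic FlexVolume prober: it executes each driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument init and parses its stdout as JSON, so the missing nodeagent~uds/uds binary yields empty output and the "unexpected end of JSON input" failure. The flexvol-driver container just started from the pod2daemon-flexvol image is the component that installs that driver, which is why the errors stop being fatal once it runs. A minimal sketch of what a FlexVolume driver's init call must emit (hypothetical Go stand-in, not Calico's actual binary):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus is the JSON envelope the kubelet expects on stdout from every
// FlexVolume call; an empty stdout is exactly what triggers the unmarshal
// errors logged above.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	var st DriverStatus
	switch os.Args[1] {
	case "init":
		// Report success and opt out of attach/detach support.
		st = DriverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
	default:
		// Anything unimplemented must still answer with valid JSON.
		st = DriverStatus{Status: "Not supported"}
	}
	out, _ := json.Marshal(st)
	fmt.Println(string(out))
}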
Dec 12 17:46:20.790756 kubelet[2643]: I1212 17:46:20.790641 2643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:46:20.872520 containerd[1500]: time="2025-12-12T17:46:20.872476515Z" level=info msg="StartContainer for \"241378605fbefdb16f770123a9456d991f200ded036426b34ad321db8643502d\" returns successfully" Dec 12 17:46:20.875292 kubelet[2643]: E1212 17:46:20.875268 2643 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:46:20.875292 kubelet[2643]: W1212 17:46:20.875289 2643 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:46:20.875453 kubelet[2643]: E1212 17:46:20.875308 2643 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:46:20.884109 systemd[1]: cri-containerd-241378605fbefdb16f770123a9456d991f200ded036426b34ad321db8643502d.scope: Deactivated successfully. Dec 12 17:46:20.919319 containerd[1500]: time="2025-12-12T17:46:20.919151079Z" level=info msg="received container exit event container_id:\"241378605fbefdb16f770123a9456d991f200ded036426b34ad321db8643502d\" id:\"241378605fbefdb16f770123a9456d991f200ded036426b34ad321db8643502d\" pid:3350 exited_at:{seconds:1765561580 nanos:909854942}" Dec 12 17:46:20.963962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-241378605fbefdb16f770123a9456d991f200ded036426b34ad321db8643502d-rootfs.mount: Deactivated successfully. Dec 12 17:46:21.706922 kubelet[2643]: E1212 17:46:21.706835 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bpcft" podUID="d7faeb01-da6d-4b13-a976-32d40fd38bd7" Dec 12 17:46:21.794455 containerd[1500]: time="2025-12-12T17:46:21.794418010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 17:46:21.808344 kubelet[2643]: I1212 17:46:21.808291 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-675dbccfc7-vqrlr" podStartSLOduration=3.244450732 podStartE2EDuration="4.80827716s" podCreationTimestamp="2025-12-12 17:46:17 +0000 UTC" firstStartedPulling="2025-12-12 17:46:18.10951025 +0000 UTC m=+23.481533096" lastFinishedPulling="2025-12-12 17:46:19.673336678 +0000 UTC m=+25.045359524" observedRunningTime="2025-12-12 17:46:19.798169519 +0000 UTC m=+25.170192365" watchObservedRunningTime="2025-12-12 17:46:21.80827716 +0000 UTC m=+27.180300006" Dec 12 17:46:23.705835 kubelet[2643]: E1212 17:46:23.705793 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bpcft" podUID="d7faeb01-da6d-4b13-a976-32d40fd38bd7" Dec 12 17:46:25.224703 containerd[1500]: time="2025-12-12T17:46:25.224660961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:46:25.225526 containerd[1500]: time="2025-12-12T17:46:25.225277798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Dec 12 17:46:25.226685 containerd[1500]: time="2025-12-12T17:46:25.226624990Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:46:25.228704 containerd[1500]: time="2025-12-12T17:46:25.228674419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:46:25.229425 containerd[1500]: time="2025-12-12T17:46:25.229341375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.434866885s" Dec 12 17:46:25.229425 containerd[1500]: time="2025-12-12T17:46:25.229373895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 12 17:46:25.233142 containerd[1500]: time="2025-12-12T17:46:25.232466477Z" level=info msg="CreateContainer within sandbox \"128b726b01e1e030566b6349e645e557f357fa5a01436e6da01c031eba4a3c83\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 17:46:25.241776 containerd[1500]: time="2025-12-12T17:46:25.241601906Z" level=info msg="Container f349857f71fa9721de0632e5cbaa57134954ef85b10322ff996a5bc596023869: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:46:25.249030 containerd[1500]: time="2025-12-12T17:46:25.248994385Z" level=info msg="CreateContainer within sandbox \"128b726b01e1e030566b6349e645e557f357fa5a01436e6da01c031eba4a3c83\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f349857f71fa9721de0632e5cbaa57134954ef85b10322ff996a5bc596023869\"" Dec 12 17:46:25.249360 containerd[1500]: time="2025-12-12T17:46:25.249341023Z" level=info msg="StartContainer for \"f349857f71fa9721de0632e5cbaa57134954ef85b10322ff996a5bc596023869\"" Dec 12 17:46:25.250927 containerd[1500]: time="2025-12-12T17:46:25.250903374Z" level=info msg="connecting to shim f349857f71fa9721de0632e5cbaa57134954ef85b10322ff996a5bc596023869" address="unix:///run/containerd/s/bc85701b37c06f40005d8f94ce48abed634bf87898b11515aa5fd757538a5d90" protocol=ttrpc version=3 Dec 12 17:46:25.270916 systemd[1]: Started cri-containerd-f349857f71fa9721de0632e5cbaa57134954ef85b10322ff996a5bc596023869.scope - libcontainer container f349857f71fa9721de0632e5cbaa57134954ef85b10322ff996a5bc596023869. Dec 12 17:46:25.335179 containerd[1500]: time="2025-12-12T17:46:25.335063104Z" level=info msg="StartContainer for \"f349857f71fa9721de0632e5cbaa57134954ef85b10322ff996a5bc596023869\" returns successfully" Dec 12 17:46:25.706020 kubelet[2643]: E1212 17:46:25.705910 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bpcft" podUID="d7faeb01-da6d-4b13-a976-32d40fd38bd7" Dec 12 17:46:25.916180 systemd[1]: cri-containerd-f349857f71fa9721de0632e5cbaa57134954ef85b10322ff996a5bc596023869.scope: Deactivated successfully. Dec 12 17:46:25.916441 systemd[1]: cri-containerd-f349857f71fa9721de0632e5cbaa57134954ef85b10322ff996a5bc596023869.scope: Consumed 456ms CPU time, 177.4M memory peak, 2.7M read from disk, 165.9M written to disk. 
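The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that same span with the image-pull window subtracted. A quick check of the logged values (plain arithmetic, no kubelet internals assumed):

package main

import "fmt"

func main() {
	// Offsets in seconds after 17:46:00, taken from the log line above.
	created := 17.0              // podCreationTimestamp 17:46:17
	watchObserved := 21.80827716 // watchObservedRunningTime
	firstPull := 18.10951025     // firstStartedPulling
	lastPull := 19.673336678     // lastFinishedPulling

	e2e := watchObserved - created      // 4.80827716s  == podStartE2EDuration
	slo := e2e - (lastPull - firstPull) // 3.244450732s == podStartSLOduration
	fmt.Printf("e2e=%.9fs slo=%.9fs\n", e2e, slo)
}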
Dec 12 17:46:25.917303 containerd[1500]: time="2025-12-12T17:46:25.917270530Z" level=info msg="received container exit event container_id:\"f349857f71fa9721de0632e5cbaa57134954ef85b10322ff996a5bc596023869\" id:\"f349857f71fa9721de0632e5cbaa57134954ef85b10322ff996a5bc596023869\" pid:3427 exited_at:{seconds:1765561585 nanos:916461214}" Dec 12 17:46:25.938074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f349857f71fa9721de0632e5cbaa57134954ef85b10322ff996a5bc596023869-rootfs.mount: Deactivated successfully. Dec 12 17:46:25.964416 kubelet[2643]: I1212 17:46:25.964326 2643 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 12 17:46:26.029706 systemd[1]: Created slice kubepods-burstable-pod3b481ea6_5954_4cd3_8bf4_dd4b768ccfaa.slice - libcontainer container kubepods-burstable-pod3b481ea6_5954_4cd3_8bf4_dd4b768ccfaa.slice. Dec 12 17:46:26.041363 systemd[1]: Created slice kubepods-besteffort-pod62a80cbc_f140_42e5_895b_a84f9cbc2fab.slice - libcontainer container kubepods-besteffort-pod62a80cbc_f140_42e5_895b_a84f9cbc2fab.slice. Dec 12 17:46:26.048591 systemd[1]: Created slice kubepods-besteffort-pod904021b1_af08_4d93_b11e_a0cb970db885.slice - libcontainer container kubepods-besteffort-pod904021b1_af08_4d93_b11e_a0cb970db885.slice. Dec 12 17:46:26.056252 systemd[1]: Created slice kubepods-burstable-pod0cf10505_a996_4f57_b08c_e0c0817270b9.slice - libcontainer container kubepods-burstable-pod0cf10505_a996_4f57_b08c_e0c0817270b9.slice. Dec 12 17:46:26.073940 systemd[1]: Created slice kubepods-besteffort-pod371f3370_3973_43fa_a344_e5a8edf40083.slice - libcontainer container kubepods-besteffort-pod371f3370_3973_43fa_a344_e5a8edf40083.slice. Dec 12 17:46:26.081298 systemd[1]: Created slice kubepods-besteffort-pod34789b1f_3cfd_4180_8a83_14eb6215f63c.slice - libcontainer container kubepods-besteffort-pod34789b1f_3cfd_4180_8a83_14eb6215f63c.slice. Dec 12 17:46:26.087818 systemd[1]: Created slice kubepods-besteffort-pod4ea33c98_ef15_472e_9c45_f0753d746e85.slice - libcontainer container kubepods-besteffort-pod4ea33c98_ef15_472e_9c45_f0753d746e85.slice. 
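The "Created slice" lines show the systemd cgroup driver's naming scheme: each pod gets kubepods-<qos>-pod<uid>.slice, with the dashes in the pod UID escaped to underscores because systemd treats "-" in a slice name as a hierarchy separator. A small sketch of that mapping (hypothetical helper, mirroring the unit names visible above):

package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the unit names seen above: QoS class plus the pod
// UID with "-" rewritten to "_" to survive systemd's unit-name encoding.
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("burstable", "3b481ea6-5954-4cd3-8bf4-dd4b768ccfaa"))
	// kubepods-burstable-pod3b481ea6_5954_4cd3_8bf4_dd4b768ccfaa.slice
}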
Dec 12 17:46:26.127687 kubelet[2643]: I1212 17:46:26.127632 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4ea33c98-ef15-472e-9c45-f0753d746e85-whisker-backend-key-pair\") pod \"whisker-6d7dc778-64zq5\" (UID: \"4ea33c98-ef15-472e-9c45-f0753d746e85\") " pod="calico-system/whisker-6d7dc778-64zq5" Dec 12 17:46:26.127687 kubelet[2643]: I1212 17:46:26.127680 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/904021b1-af08-4d93-b11e-a0cb970db885-calico-apiserver-certs\") pod \"calico-apiserver-67b7547c5c-fcbnc\" (UID: \"904021b1-af08-4d93-b11e-a0cb970db885\") " pod="calico-apiserver/calico-apiserver-67b7547c5c-fcbnc" Dec 12 17:46:26.127873 kubelet[2643]: I1212 17:46:26.127703 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrqfh\" (UniqueName: \"kubernetes.io/projected/371f3370-3973-43fa-a344-e5a8edf40083-kube-api-access-wrqfh\") pod \"goldmane-666569f655-g9s5b\" (UID: \"371f3370-3973-43fa-a344-e5a8edf40083\") " pod="calico-system/goldmane-666569f655-g9s5b" Dec 12 17:46:26.127873 kubelet[2643]: I1212 17:46:26.127723 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cf10505-a996-4f57-b08c-e0c0817270b9-config-volume\") pod \"coredns-668d6bf9bc-k5kkz\" (UID: \"0cf10505-a996-4f57-b08c-e0c0817270b9\") " pod="kube-system/coredns-668d6bf9bc-k5kkz" Dec 12 17:46:26.127873 kubelet[2643]: I1212 17:46:26.127741 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b481ea6-5954-4cd3-8bf4-dd4b768ccfaa-config-volume\") pod \"coredns-668d6bf9bc-7rtzv\" (UID: \"3b481ea6-5954-4cd3-8bf4-dd4b768ccfaa\") " pod="kube-system/coredns-668d6bf9bc-7rtzv" Dec 12 17:46:26.127873 kubelet[2643]: I1212 17:46:26.127777 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2dth\" (UniqueName: \"kubernetes.io/projected/0cf10505-a996-4f57-b08c-e0c0817270b9-kube-api-access-p2dth\") pod \"coredns-668d6bf9bc-k5kkz\" (UID: \"0cf10505-a996-4f57-b08c-e0c0817270b9\") " pod="kube-system/coredns-668d6bf9bc-k5kkz" Dec 12 17:46:26.127873 kubelet[2643]: I1212 17:46:26.127794 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr8z6\" (UniqueName: \"kubernetes.io/projected/3b481ea6-5954-4cd3-8bf4-dd4b768ccfaa-kube-api-access-gr8z6\") pod \"coredns-668d6bf9bc-7rtzv\" (UID: \"3b481ea6-5954-4cd3-8bf4-dd4b768ccfaa\") " pod="kube-system/coredns-668d6bf9bc-7rtzv" Dec 12 17:46:26.128185 kubelet[2643]: I1212 17:46:26.127810 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdf99\" (UniqueName: \"kubernetes.io/projected/34789b1f-3cfd-4180-8a83-14eb6215f63c-kube-api-access-vdf99\") pod \"calico-apiserver-67b7547c5c-sb784\" (UID: \"34789b1f-3cfd-4180-8a83-14eb6215f63c\") " pod="calico-apiserver/calico-apiserver-67b7547c5c-sb784" Dec 12 17:46:26.128185 kubelet[2643]: I1212 17:46:26.127828 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/4ea33c98-ef15-472e-9c45-f0753d746e85-whisker-ca-bundle\") pod \"whisker-6d7dc778-64zq5\" (UID: \"4ea33c98-ef15-472e-9c45-f0753d746e85\") " pod="calico-system/whisker-6d7dc778-64zq5" Dec 12 17:46:26.128185 kubelet[2643]: I1212 17:46:26.127845 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8qct\" (UniqueName: \"kubernetes.io/projected/4ea33c98-ef15-472e-9c45-f0753d746e85-kube-api-access-v8qct\") pod \"whisker-6d7dc778-64zq5\" (UID: \"4ea33c98-ef15-472e-9c45-f0753d746e85\") " pod="calico-system/whisker-6d7dc778-64zq5" Dec 12 17:46:26.128185 kubelet[2643]: I1212 17:46:26.127864 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62a80cbc-f140-42e5-895b-a84f9cbc2fab-tigera-ca-bundle\") pod \"calico-kube-controllers-6bc564446d-p8ff5\" (UID: \"62a80cbc-f140-42e5-895b-a84f9cbc2fab\") " pod="calico-system/calico-kube-controllers-6bc564446d-p8ff5" Dec 12 17:46:26.128185 kubelet[2643]: I1212 17:46:26.127880 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47qrp\" (UniqueName: \"kubernetes.io/projected/62a80cbc-f140-42e5-895b-a84f9cbc2fab-kube-api-access-47qrp\") pod \"calico-kube-controllers-6bc564446d-p8ff5\" (UID: \"62a80cbc-f140-42e5-895b-a84f9cbc2fab\") " pod="calico-system/calico-kube-controllers-6bc564446d-p8ff5" Dec 12 17:46:26.128299 kubelet[2643]: I1212 17:46:26.127902 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/34789b1f-3cfd-4180-8a83-14eb6215f63c-calico-apiserver-certs\") pod \"calico-apiserver-67b7547c5c-sb784\" (UID: \"34789b1f-3cfd-4180-8a83-14eb6215f63c\") " pod="calico-apiserver/calico-apiserver-67b7547c5c-sb784" Dec 12 17:46:26.128299 kubelet[2643]: I1212 17:46:26.127922 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/371f3370-3973-43fa-a344-e5a8edf40083-config\") pod \"goldmane-666569f655-g9s5b\" (UID: \"371f3370-3973-43fa-a344-e5a8edf40083\") " pod="calico-system/goldmane-666569f655-g9s5b" Dec 12 17:46:26.128299 kubelet[2643]: I1212 17:46:26.127979 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/371f3370-3973-43fa-a344-e5a8edf40083-goldmane-key-pair\") pod \"goldmane-666569f655-g9s5b\" (UID: \"371f3370-3973-43fa-a344-e5a8edf40083\") " pod="calico-system/goldmane-666569f655-g9s5b" Dec 12 17:46:26.128299 kubelet[2643]: I1212 17:46:26.128036 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckqtn\" (UniqueName: \"kubernetes.io/projected/904021b1-af08-4d93-b11e-a0cb970db885-kube-api-access-ckqtn\") pod \"calico-apiserver-67b7547c5c-fcbnc\" (UID: \"904021b1-af08-4d93-b11e-a0cb970db885\") " pod="calico-apiserver/calico-apiserver-67b7547c5c-fcbnc" Dec 12 17:46:26.128299 kubelet[2643]: I1212 17:46:26.128080 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/371f3370-3973-43fa-a344-e5a8edf40083-goldmane-ca-bundle\") pod \"goldmane-666569f655-g9s5b\" (UID: \"371f3370-3973-43fa-a344-e5a8edf40083\") " 
pod="calico-system/goldmane-666569f655-g9s5b" Dec 12 17:46:26.335621 containerd[1500]: time="2025-12-12T17:46:26.335585216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7rtzv,Uid:3b481ea6-5954-4cd3-8bf4-dd4b768ccfaa,Namespace:kube-system,Attempt:0,}" Dec 12 17:46:26.348657 containerd[1500]: time="2025-12-12T17:46:26.348527786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bc564446d-p8ff5,Uid:62a80cbc-f140-42e5-895b-a84f9cbc2fab,Namespace:calico-system,Attempt:0,}" Dec 12 17:46:26.352805 containerd[1500]: time="2025-12-12T17:46:26.352773923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b7547c5c-fcbnc,Uid:904021b1-af08-4d93-b11e-a0cb970db885,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:46:26.370420 containerd[1500]: time="2025-12-12T17:46:26.370330829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k5kkz,Uid:0cf10505-a996-4f57-b08c-e0c0817270b9,Namespace:kube-system,Attempt:0,}" Dec 12 17:46:26.377149 containerd[1500]: time="2025-12-12T17:46:26.377058232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g9s5b,Uid:371f3370-3973-43fa-a344-e5a8edf40083,Namespace:calico-system,Attempt:0,}" Dec 12 17:46:26.388117 containerd[1500]: time="2025-12-12T17:46:26.387602975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b7547c5c-sb784,Uid:34789b1f-3cfd-4180-8a83-14eb6215f63c,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:46:26.390953 containerd[1500]: time="2025-12-12T17:46:26.390916998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d7dc778-64zq5,Uid:4ea33c98-ef15-472e-9c45-f0753d746e85,Namespace:calico-system,Attempt:0,}" Dec 12 17:46:26.443931 containerd[1500]: time="2025-12-12T17:46:26.443852392Z" level=error msg="Failed to destroy network for sandbox \"c9f99089fd8716dc254a124d7d20357bcfeff1166fc1f958dc5db28202e6176f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:26.446813 containerd[1500]: time="2025-12-12T17:46:26.446726456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b7547c5c-fcbnc,Uid:904021b1-af08-4d93-b11e-a0cb970db885,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9f99089fd8716dc254a124d7d20357bcfeff1166fc1f958dc5db28202e6176f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:26.450322 containerd[1500]: time="2025-12-12T17:46:26.450222758Z" level=error msg="Failed to destroy network for sandbox \"e58f54a44ca65fad0833be3ea94d761ef0c17511f78bf6ab8b0ad79fd0cd3bae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:26.450689 kubelet[2643]: E1212 17:46:26.450608 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9f99089fd8716dc254a124d7d20357bcfeff1166fc1f958dc5db28202e6176f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Dec 12 17:46:26.451443 containerd[1500]: time="2025-12-12T17:46:26.451397031Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bc564446d-p8ff5,Uid:62a80cbc-f140-42e5-895b-a84f9cbc2fab,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58f54a44ca65fad0833be3ea94d761ef0c17511f78bf6ab8b0ad79fd0cd3bae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:26.453590 kubelet[2643]: E1212 17:46:26.453432 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58f54a44ca65fad0833be3ea94d761ef0c17511f78bf6ab8b0ad79fd0cd3bae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:26.453590 kubelet[2643]: E1212 17:46:26.453548 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58f54a44ca65fad0833be3ea94d761ef0c17511f78bf6ab8b0ad79fd0cd3bae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bc564446d-p8ff5" Dec 12 17:46:26.453590 kubelet[2643]: E1212 17:46:26.453571 2643 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58f54a44ca65fad0833be3ea94d761ef0c17511f78bf6ab8b0ad79fd0cd3bae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bc564446d-p8ff5" Dec 12 17:46:26.453728 kubelet[2643]: E1212 17:46:26.453634 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bc564446d-p8ff5_calico-system(62a80cbc-f140-42e5-895b-a84f9cbc2fab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6bc564446d-p8ff5_calico-system(62a80cbc-f140-42e5-895b-a84f9cbc2fab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e58f54a44ca65fad0833be3ea94d761ef0c17511f78bf6ab8b0ad79fd0cd3bae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bc564446d-p8ff5" podUID="62a80cbc-f140-42e5-895b-a84f9cbc2fab" Dec 12 17:46:26.453728 kubelet[2643]: E1212 17:46:26.453649 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9f99089fd8716dc254a124d7d20357bcfeff1166fc1f958dc5db28202e6176f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67b7547c5c-fcbnc" Dec 12 17:46:26.453728 kubelet[2643]: E1212 17:46:26.453692 2643 kuberuntime_manager.go:1237] "CreatePodSandbox 
for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9f99089fd8716dc254a124d7d20357bcfeff1166fc1f958dc5db28202e6176f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67b7547c5c-fcbnc" Dec 12 17:46:26.453871 kubelet[2643]: E1212 17:46:26.453736 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67b7547c5c-fcbnc_calico-apiserver(904021b1-af08-4d93-b11e-a0cb970db885)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67b7547c5c-fcbnc_calico-apiserver(904021b1-af08-4d93-b11e-a0cb970db885)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9f99089fd8716dc254a124d7d20357bcfeff1166fc1f958dc5db28202e6176f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67b7547c5c-fcbnc" podUID="904021b1-af08-4d93-b11e-a0cb970db885" Dec 12 17:46:26.465938 containerd[1500]: time="2025-12-12T17:46:26.465891393Z" level=error msg="Failed to destroy network for sandbox \"a9d5ba8432dd0558d6e91d78f5bd099086c198e0e5f71d15e200b33d33ca75c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:26.467190 containerd[1500]: time="2025-12-12T17:46:26.467086507Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7rtzv,Uid:3b481ea6-5954-4cd3-8bf4-dd4b768ccfaa,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9d5ba8432dd0558d6e91d78f5bd099086c198e0e5f71d15e200b33d33ca75c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:26.467385 kubelet[2643]: E1212 17:46:26.467343 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9d5ba8432dd0558d6e91d78f5bd099086c198e0e5f71d15e200b33d33ca75c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:26.467436 kubelet[2643]: E1212 17:46:26.467397 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9d5ba8432dd0558d6e91d78f5bd099086c198e0e5f71d15e200b33d33ca75c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7rtzv" Dec 12 17:46:26.467436 kubelet[2643]: E1212 17:46:26.467417 2643 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9d5ba8432dd0558d6e91d78f5bd099086c198e0e5f71d15e200b33d33ca75c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7rtzv" Dec 12 17:46:26.467555 kubelet[2643]: E1212 17:46:26.467465 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7rtzv_kube-system(3b481ea6-5954-4cd3-8bf4-dd4b768ccfaa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7rtzv_kube-system(3b481ea6-5954-4cd3-8bf4-dd4b768ccfaa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9d5ba8432dd0558d6e91d78f5bd099086c198e0e5f71d15e200b33d33ca75c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7rtzv" podUID="3b481ea6-5954-4cd3-8bf4-dd4b768ccfaa" Dec 12 17:46:26.498350 containerd[1500]: time="2025-12-12T17:46:26.498268938Z" level=error msg="Failed to destroy network for sandbox \"fdcce8b6a49140382674f58e88b10987ba02066b7dfc921d342bd7db2c66f284\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:26.499296 containerd[1500]: time="2025-12-12T17:46:26.499255853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k5kkz,Uid:0cf10505-a996-4f57-b08c-e0c0817270b9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdcce8b6a49140382674f58e88b10987ba02066b7dfc921d342bd7db2c66f284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:26.499907 kubelet[2643]: E1212 17:46:26.499862 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdcce8b6a49140382674f58e88b10987ba02066b7dfc921d342bd7db2c66f284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:26.500095 kubelet[2643]: E1212 17:46:26.499928 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdcce8b6a49140382674f58e88b10987ba02066b7dfc921d342bd7db2c66f284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k5kkz" Dec 12 17:46:26.500095 kubelet[2643]: E1212 17:46:26.499949 2643 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdcce8b6a49140382674f58e88b10987ba02066b7dfc921d342bd7db2c66f284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k5kkz" Dec 12 17:46:26.500095 kubelet[2643]: E1212 17:46:26.500004 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k5kkz_kube-system(0cf10505-a996-4f57-b08c-e0c0817270b9)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"coredns-668d6bf9bc-k5kkz_kube-system(0cf10505-a996-4f57-b08c-e0c0817270b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fdcce8b6a49140382674f58e88b10987ba02066b7dfc921d342bd7db2c66f284\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k5kkz" podUID="0cf10505-a996-4f57-b08c-e0c0817270b9" Dec 12 17:46:26.504103 containerd[1500]: time="2025-12-12T17:46:26.504067187Z" level=error msg="Failed to destroy network for sandbox \"1e766950da4b5430ee44c1eefda4725c19f0121d3c14f95aac19bb34b295956b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:26.504376 containerd[1500]: time="2025-12-12T17:46:26.504350945Z" level=error msg="Failed to destroy network for sandbox \"381b844d77e45ef5e146b1c9db7de4e016216658047efc7619a5b2a2cad4eb52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:26.505343 containerd[1500]: time="2025-12-12T17:46:26.505203261Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g9s5b,Uid:371f3370-3973-43fa-a344-e5a8edf40083,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e766950da4b5430ee44c1eefda4725c19f0121d3c14f95aac19bb34b295956b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:26.505612 kubelet[2643]: E1212 17:46:26.505581 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e766950da4b5430ee44c1eefda4725c19f0121d3c14f95aac19bb34b295956b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:26.505656 kubelet[2643]: E1212 17:46:26.505638 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e766950da4b5430ee44c1eefda4725c19f0121d3c14f95aac19bb34b295956b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g9s5b" Dec 12 17:46:26.505706 kubelet[2643]: E1212 17:46:26.505656 2643 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e766950da4b5430ee44c1eefda4725c19f0121d3c14f95aac19bb34b295956b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-g9s5b" Dec 12 17:46:26.505740 kubelet[2643]: E1212 17:46:26.505711 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-g9s5b_calico-system(371f3370-3973-43fa-a344-e5a8edf40083)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-g9s5b_calico-system(371f3370-3973-43fa-a344-e5a8edf40083)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e766950da4b5430ee44c1eefda4725c19f0121d3c14f95aac19bb34b295956b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-g9s5b" podUID="371f3370-3973-43fa-a344-e5a8edf40083" Dec 12 17:46:26.506270 containerd[1500]: time="2025-12-12T17:46:26.506226295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b7547c5c-sb784,Uid:34789b1f-3cfd-4180-8a83-14eb6215f63c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"381b844d77e45ef5e146b1c9db7de4e016216658047efc7619a5b2a2cad4eb52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:26.507158 kubelet[2643]: E1212 17:46:26.506381 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"381b844d77e45ef5e146b1c9db7de4e016216658047efc7619a5b2a2cad4eb52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:26.507158 kubelet[2643]: E1212 17:46:26.506428 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"381b844d77e45ef5e146b1c9db7de4e016216658047efc7619a5b2a2cad4eb52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67b7547c5c-sb784" Dec 12 17:46:26.507158 kubelet[2643]: E1212 17:46:26.506443 2643 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"381b844d77e45ef5e146b1c9db7de4e016216658047efc7619a5b2a2cad4eb52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67b7547c5c-sb784" Dec 12 17:46:26.507286 kubelet[2643]: E1212 17:46:26.506471 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67b7547c5c-sb784_calico-apiserver(34789b1f-3cfd-4180-8a83-14eb6215f63c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67b7547c5c-sb784_calico-apiserver(34789b1f-3cfd-4180-8a83-14eb6215f63c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"381b844d77e45ef5e146b1c9db7de4e016216658047efc7619a5b2a2cad4eb52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67b7547c5c-sb784" podUID="34789b1f-3cfd-4180-8a83-14eb6215f63c" Dec 12 17:46:26.515170 containerd[1500]: time="2025-12-12T17:46:26.515134527Z" level=error msg="Failed to 
destroy network for sandbox \"7b3a40ab9f99b2f13b0ab4c64c3ff1c86921ccd0ec9bf901a1158fb13382efb9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:26.516053 containerd[1500]: time="2025-12-12T17:46:26.516019882Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d7dc778-64zq5,Uid:4ea33c98-ef15-472e-9c45-f0753d746e85,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b3a40ab9f99b2f13b0ab4c64c3ff1c86921ccd0ec9bf901a1158fb13382efb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:26.516219 kubelet[2643]: E1212 17:46:26.516178 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b3a40ab9f99b2f13b0ab4c64c3ff1c86921ccd0ec9bf901a1158fb13382efb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:26.516274 kubelet[2643]: E1212 17:46:26.516229 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b3a40ab9f99b2f13b0ab4c64c3ff1c86921ccd0ec9bf901a1158fb13382efb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d7dc778-64zq5" Dec 12 17:46:26.516274 kubelet[2643]: E1212 17:46:26.516247 2643 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b3a40ab9f99b2f13b0ab4c64c3ff1c86921ccd0ec9bf901a1158fb13382efb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d7dc778-64zq5" Dec 12 17:46:26.516316 kubelet[2643]: E1212 17:46:26.516296 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d7dc778-64zq5_calico-system(4ea33c98-ef15-472e-9c45-f0753d746e85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d7dc778-64zq5_calico-system(4ea33c98-ef15-472e-9c45-f0753d746e85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b3a40ab9f99b2f13b0ab4c64c3ff1c86921ccd0ec9bf901a1158fb13382efb9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d7dc778-64zq5" podUID="4ea33c98-ef15-472e-9c45-f0753d746e85" Dec 12 17:46:26.818031 containerd[1500]: time="2025-12-12T17:46:26.817934613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 17:46:27.713731 systemd[1]: Created slice kubepods-besteffort-podd7faeb01_da6d_4b13_a976_32d40fd38bd7.slice - libcontainer container kubepods-besteffort-podd7faeb01_da6d_4b13_a976_32d40fd38bd7.slice. 
Dec 12 17:46:27.716173 containerd[1500]: time="2025-12-12T17:46:27.716142255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bpcft,Uid:d7faeb01-da6d-4b13-a976-32d40fd38bd7,Namespace:calico-system,Attempt:0,}" Dec 12 17:46:27.768072 containerd[1500]: time="2025-12-12T17:46:27.768025425Z" level=error msg="Failed to destroy network for sandbox \"cd300a6a0b1e447168152609b29d70db8f939ad47e225e32ae0181629f45ee51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:27.772270 systemd[1]: run-netns-cni\x2dba06d70a\x2d0872\x2d70c4\x2df7e6\x2d31ce41d18e85.mount: Deactivated successfully. Dec 12 17:46:27.775272 containerd[1500]: time="2025-12-12T17:46:27.775217867Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bpcft,Uid:d7faeb01-da6d-4b13-a976-32d40fd38bd7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd300a6a0b1e447168152609b29d70db8f939ad47e225e32ae0181629f45ee51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:27.775542 kubelet[2643]: E1212 17:46:27.775502 2643 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd300a6a0b1e447168152609b29d70db8f939ad47e225e32ae0181629f45ee51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:46:27.776299 kubelet[2643]: E1212 17:46:27.775868 2643 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd300a6a0b1e447168152609b29d70db8f939ad47e225e32ae0181629f45ee51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bpcft" Dec 12 17:46:27.776299 kubelet[2643]: E1212 17:46:27.775898 2643 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd300a6a0b1e447168152609b29d70db8f939ad47e225e32ae0181629f45ee51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bpcft" Dec 12 17:46:27.776299 kubelet[2643]: E1212 17:46:27.775945 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bpcft_calico-system(d7faeb01-da6d-4b13-a976-32d40fd38bd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bpcft_calico-system(d7faeb01-da6d-4b13-a976-32d40fd38bd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd300a6a0b1e447168152609b29d70db8f939ad47e225e32ae0181629f45ee51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bpcft" podUID="d7faeb01-da6d-4b13-a976-32d40fd38bd7" 
Dec 12 17:46:30.817315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount488050945.mount: Deactivated successfully. Dec 12 17:46:30.849422 containerd[1500]: time="2025-12-12T17:46:30.849380536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:46:30.850486 containerd[1500]: time="2025-12-12T17:46:30.850418011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Dec 12 17:46:30.851583 containerd[1500]: time="2025-12-12T17:46:30.851543365Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:46:30.853807 containerd[1500]: time="2025-12-12T17:46:30.853050798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:46:30.853807 containerd[1500]: time="2025-12-12T17:46:30.853680875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.035657022s" Dec 12 17:46:30.853807 containerd[1500]: time="2025-12-12T17:46:30.853702675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 12 17:46:30.863174 containerd[1500]: time="2025-12-12T17:46:30.862103595Z" level=info msg="CreateContainer within sandbox \"128b726b01e1e030566b6349e645e557f357fa5a01436e6da01c031eba4a3c83\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 17:46:30.879798 containerd[1500]: time="2025-12-12T17:46:30.878985075Z" level=info msg="Container 4436d4b389ab7832d7b2aeccde1f9e57ebc2067075d7ffe98f04e900a8d3b3ce: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:46:30.888836 containerd[1500]: time="2025-12-12T17:46:30.888793469Z" level=info msg="CreateContainer within sandbox \"128b726b01e1e030566b6349e645e557f357fa5a01436e6da01c031eba4a3c83\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4436d4b389ab7832d7b2aeccde1f9e57ebc2067075d7ffe98f04e900a8d3b3ce\"" Dec 12 17:46:30.889344 containerd[1500]: time="2025-12-12T17:46:30.889302146Z" level=info msg="StartContainer for \"4436d4b389ab7832d7b2aeccde1f9e57ebc2067075d7ffe98f04e900a8d3b3ce\"" Dec 12 17:46:30.892821 containerd[1500]: time="2025-12-12T17:46:30.892731130Z" level=info msg="connecting to shim 4436d4b389ab7832d7b2aeccde1f9e57ebc2067075d7ffe98f04e900a8d3b3ce" address="unix:///run/containerd/s/bc85701b37c06f40005d8f94ce48abed634bf87898b11515aa5fd757538a5d90" protocol=ttrpc version=3 Dec 12 17:46:30.913935 systemd[1]: Started cri-containerd-4436d4b389ab7832d7b2aeccde1f9e57ebc2067075d7ffe98f04e900a8d3b3ce.scope - libcontainer container 4436d4b389ab7832d7b2aeccde1f9e57ebc2067075d7ffe98f04e900a8d3b3ce. Dec 12 17:46:30.991153 containerd[1500]: time="2025-12-12T17:46:30.991118664Z" level=info msg="StartContainer for \"4436d4b389ab7832d7b2aeccde1f9e57ebc2067075d7ffe98f04e900a8d3b3ce\" returns successfully" Dec 12 17:46:31.116782 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Dec 12 17:46:31.117008 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 12 17:46:31.364095 kubelet[2643]: I1212 17:46:31.364042 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ea33c98-ef15-472e-9c45-f0753d746e85-whisker-ca-bundle\") pod \"4ea33c98-ef15-472e-9c45-f0753d746e85\" (UID: \"4ea33c98-ef15-472e-9c45-f0753d746e85\") " Dec 12 17:46:31.364095 kubelet[2643]: I1212 17:46:31.364101 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4ea33c98-ef15-472e-9c45-f0753d746e85-whisker-backend-key-pair\") pod \"4ea33c98-ef15-472e-9c45-f0753d746e85\" (UID: \"4ea33c98-ef15-472e-9c45-f0753d746e85\") " Dec 12 17:46:31.364530 kubelet[2643]: I1212 17:46:31.364127 2643 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8qct\" (UniqueName: \"kubernetes.io/projected/4ea33c98-ef15-472e-9c45-f0753d746e85-kube-api-access-v8qct\") pod \"4ea33c98-ef15-472e-9c45-f0753d746e85\" (UID: \"4ea33c98-ef15-472e-9c45-f0753d746e85\") " Dec 12 17:46:31.365662 kubelet[2643]: I1212 17:46:31.365591 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ea33c98-ef15-472e-9c45-f0753d746e85-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4ea33c98-ef15-472e-9c45-f0753d746e85" (UID: "4ea33c98-ef15-472e-9c45-f0753d746e85"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:46:31.368292 kubelet[2643]: I1212 17:46:31.368254 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea33c98-ef15-472e-9c45-f0753d746e85-kube-api-access-v8qct" (OuterVolumeSpecName: "kube-api-access-v8qct") pod "4ea33c98-ef15-472e-9c45-f0753d746e85" (UID: "4ea33c98-ef15-472e-9c45-f0753d746e85"). InnerVolumeSpecName "kube-api-access-v8qct". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:46:31.368366 kubelet[2643]: I1212 17:46:31.368348 2643 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea33c98-ef15-472e-9c45-f0753d746e85-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4ea33c98-ef15-472e-9c45-f0753d746e85" (UID: "4ea33c98-ef15-472e-9c45-f0753d746e85"). InnerVolumeSpecName "whisker-backend-key-pair".
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:46:31.465099 kubelet[2643]: I1212 17:46:31.464690 2643 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4ea33c98-ef15-472e-9c45-f0753d746e85-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 12 17:46:31.465099 kubelet[2643]: I1212 17:46:31.464723 2643 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v8qct\" (UniqueName: \"kubernetes.io/projected/4ea33c98-ef15-472e-9c45-f0753d746e85-kube-api-access-v8qct\") on node \"localhost\" DevicePath \"\"" Dec 12 17:46:31.466042 kubelet[2643]: I1212 17:46:31.466021 2643 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ea33c98-ef15-472e-9c45-f0753d746e85-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 12 17:46:31.818203 systemd[1]: var-lib-kubelet-pods-4ea33c98\x2def15\x2d472e\x2d9c45\x2df0753d746e85-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv8qct.mount: Deactivated successfully. Dec 12 17:46:31.818295 systemd[1]: var-lib-kubelet-pods-4ea33c98\x2def15\x2d472e\x2d9c45\x2df0753d746e85-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 17:46:31.833479 systemd[1]: Removed slice kubepods-besteffort-pod4ea33c98_ef15_472e_9c45_f0753d746e85.slice - libcontainer container kubepods-besteffort-pod4ea33c98_ef15_472e_9c45_f0753d746e85.slice. Dec 12 17:46:31.847119 kubelet[2643]: I1212 17:46:31.847013 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9vdft" podStartSLOduration=2.215939627 podStartE2EDuration="14.846993566s" podCreationTimestamp="2025-12-12 17:46:17 +0000 UTC" firstStartedPulling="2025-12-12 17:46:18.223432892 +0000 UTC m=+23.595455698" lastFinishedPulling="2025-12-12 17:46:30.854486831 +0000 UTC m=+36.226509637" observedRunningTime="2025-12-12 17:46:31.845802772 +0000 UTC m=+37.217825658" watchObservedRunningTime="2025-12-12 17:46:31.846993566 +0000 UTC m=+37.219016412" Dec 12 17:46:31.904791 systemd[1]: Created slice kubepods-besteffort-poda0e276bf_1b01_48a0_beed_8d097ca9c5c7.slice - libcontainer container kubepods-besteffort-poda0e276bf_1b01_48a0_beed_8d097ca9c5c7.slice. 
Dec 12 17:46:31.969448 kubelet[2643]: I1212 17:46:31.969406 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a0e276bf-1b01-48a0-beed-8d097ca9c5c7-whisker-backend-key-pair\") pod \"whisker-79876b7468-t2596\" (UID: \"a0e276bf-1b01-48a0-beed-8d097ca9c5c7\") " pod="calico-system/whisker-79876b7468-t2596" Dec 12 17:46:31.969579 kubelet[2643]: I1212 17:46:31.969473 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0e276bf-1b01-48a0-beed-8d097ca9c5c7-whisker-ca-bundle\") pod \"whisker-79876b7468-t2596\" (UID: \"a0e276bf-1b01-48a0-beed-8d097ca9c5c7\") " pod="calico-system/whisker-79876b7468-t2596" Dec 12 17:46:31.969579 kubelet[2643]: I1212 17:46:31.969533 2643 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhvd2\" (UniqueName: \"kubernetes.io/projected/a0e276bf-1b01-48a0-beed-8d097ca9c5c7-kube-api-access-xhvd2\") pod \"whisker-79876b7468-t2596\" (UID: \"a0e276bf-1b01-48a0-beed-8d097ca9c5c7\") " pod="calico-system/whisker-79876b7468-t2596" Dec 12 17:46:32.209821 containerd[1500]: time="2025-12-12T17:46:32.209430046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79876b7468-t2596,Uid:a0e276bf-1b01-48a0-beed-8d097ca9c5c7,Namespace:calico-system,Attempt:0,}" Dec 12 17:46:32.372967 systemd-networkd[1415]: calieaf9e11c160: Link UP Dec 12 17:46:32.373130 systemd-networkd[1415]: calieaf9e11c160: Gained carrier Dec 12 17:46:32.386448 containerd[1500]: 2025-12-12 17:46:32.233 [INFO][3818] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:46:32.386448 containerd[1500]: 2025-12-12 17:46:32.264 [INFO][3818] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--79876b7468--t2596-eth0 whisker-79876b7468- calico-system a0e276bf-1b01-48a0-beed-8d097ca9c5c7 853 0 2025-12-12 17:46:31 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79876b7468 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-79876b7468-t2596 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calieaf9e11c160 [] [] <nil>}} ContainerID="007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc" Namespace="calico-system" Pod="whisker-79876b7468-t2596" WorkloadEndpoint="localhost-k8s-whisker--79876b7468--t2596-" Dec 12 17:46:32.386448 containerd[1500]: 2025-12-12 17:46:32.264 [INFO][3818] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc" Namespace="calico-system" Pod="whisker-79876b7468-t2596" WorkloadEndpoint="localhost-k8s-whisker--79876b7468--t2596-eth0" Dec 12 17:46:32.386448 containerd[1500]: 2025-12-12 17:46:32.327 [INFO][3834] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc" HandleID="k8s-pod-network.007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc" Workload="localhost-k8s-whisker--79876b7468--t2596-eth0" Dec 12 17:46:32.386968 containerd[1500]: 2025-12-12 17:46:32.327 [INFO][3834] ipam/ipam_plugin.go 275: Auto assigning IP
ContainerID="007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc" HandleID="k8s-pod-network.007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc" Workload="localhost-k8s-whisker--79876b7468--t2596-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400058b450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-79876b7468-t2596", "timestamp":"2025-12-12 17:46:32.327014761 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:46:32.386968 containerd[1500]: 2025-12-12 17:46:32.327 [INFO][3834] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:46:32.386968 containerd[1500]: 2025-12-12 17:46:32.327 [INFO][3834] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:46:32.386968 containerd[1500]: 2025-12-12 17:46:32.327 [INFO][3834] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:46:32.386968 containerd[1500]: 2025-12-12 17:46:32.337 [INFO][3834] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc" host="localhost" Dec 12 17:46:32.386968 containerd[1500]: 2025-12-12 17:46:32.344 [INFO][3834] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:46:32.386968 containerd[1500]: 2025-12-12 17:46:32.348 [INFO][3834] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:46:32.386968 containerd[1500]: 2025-12-12 17:46:32.351 [INFO][3834] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:46:32.386968 containerd[1500]: 2025-12-12 17:46:32.353 [INFO][3834] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:46:32.386968 containerd[1500]: 2025-12-12 17:46:32.353 [INFO][3834] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc" host="localhost" Dec 12 17:46:32.388428 containerd[1500]: 2025-12-12 17:46:32.355 [INFO][3834] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc Dec 12 17:46:32.388428 containerd[1500]: 2025-12-12 17:46:32.358 [INFO][3834] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc" host="localhost" Dec 12 17:46:32.388428 containerd[1500]: 2025-12-12 17:46:32.364 [INFO][3834] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc" host="localhost" Dec 12 17:46:32.388428 containerd[1500]: 2025-12-12 17:46:32.364 [INFO][3834] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc" host="localhost" Dec 12 17:46:32.388428 containerd[1500]: 2025-12-12 17:46:32.364 [INFO][3834] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:46:32.388428 containerd[1500]: 2025-12-12 17:46:32.364 [INFO][3834] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc" HandleID="k8s-pod-network.007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc" Workload="localhost-k8s-whisker--79876b7468--t2596-eth0" Dec 12 17:46:32.388545 containerd[1500]: 2025-12-12 17:46:32.367 [INFO][3818] cni-plugin/k8s.go 418: Populated endpoint ContainerID="007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc" Namespace="calico-system" Pod="whisker-79876b7468-t2596" WorkloadEndpoint="localhost-k8s-whisker--79876b7468--t2596-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79876b7468--t2596-eth0", GenerateName:"whisker-79876b7468-", Namespace:"calico-system", SelfLink:"", UID:"a0e276bf-1b01-48a0-beed-8d097ca9c5c7", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 46, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79876b7468", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-79876b7468-t2596", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calieaf9e11c160", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:46:32.388545 containerd[1500]: 2025-12-12 17:46:32.367 [INFO][3818] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc" Namespace="calico-system" Pod="whisker-79876b7468-t2596" WorkloadEndpoint="localhost-k8s-whisker--79876b7468--t2596-eth0" Dec 12 17:46:32.388622 containerd[1500]: 2025-12-12 17:46:32.367 [INFO][3818] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieaf9e11c160 ContainerID="007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc" Namespace="calico-system" Pod="whisker-79876b7468-t2596" WorkloadEndpoint="localhost-k8s-whisker--79876b7468--t2596-eth0" Dec 12 17:46:32.388622 containerd[1500]: 2025-12-12 17:46:32.373 [INFO][3818] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc" Namespace="calico-system" Pod="whisker-79876b7468-t2596" WorkloadEndpoint="localhost-k8s-whisker--79876b7468--t2596-eth0" Dec 12 17:46:32.388661 containerd[1500]: 2025-12-12 17:46:32.373 [INFO][3818] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc" Namespace="calico-system" Pod="whisker-79876b7468-t2596" WorkloadEndpoint="localhost-k8s-whisker--79876b7468--t2596-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79876b7468--t2596-eth0", GenerateName:"whisker-79876b7468-", Namespace:"calico-system", SelfLink:"", UID:"a0e276bf-1b01-48a0-beed-8d097ca9c5c7", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 46, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79876b7468", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc", Pod:"whisker-79876b7468-t2596", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calieaf9e11c160", MAC:"de:c8:39:58:c4:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:46:32.388712 containerd[1500]: 2025-12-12 17:46:32.383 [INFO][3818] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc" Namespace="calico-system" Pod="whisker-79876b7468-t2596" WorkloadEndpoint="localhost-k8s-whisker--79876b7468--t2596-eth0" Dec 12 17:46:32.448256 containerd[1500]: time="2025-12-12T17:46:32.448212860Z" level=info msg="connecting to shim 007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc" address="unix:///run/containerd/s/5d518052296d42465e6e6cda68f4b6c789ea9a523057f3c687b30f06552b5498" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:46:32.478982 systemd[1]: Started cri-containerd-007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc.scope - libcontainer container 007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc.
Dec 12 17:46:32.507914 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:46:32.558944 containerd[1500]: time="2025-12-12T17:46:32.558819485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79876b7468-t2596,Uid:a0e276bf-1b01-48a0-beed-8d097ca9c5c7,Namespace:calico-system,Attempt:0,} returns sandbox id \"007cab51284e33df798b6adf5593f806ab8cb610a11bb7d2abece431e8811ccc\"" Dec 12 17:46:32.565343 containerd[1500]: time="2025-12-12T17:46:32.565282576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:46:32.708211 kubelet[2643]: I1212 17:46:32.708171 2643 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ea33c98-ef15-472e-9c45-f0753d746e85" path="/var/lib/kubelet/pods/4ea33c98-ef15-472e-9c45-f0753d746e85/volumes" Dec 12 17:46:32.776990 containerd[1500]: time="2025-12-12T17:46:32.776906031Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:46:32.777789 containerd[1500]: time="2025-12-12T17:46:32.777719187Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:46:32.777949 containerd[1500]: time="2025-12-12T17:46:32.777776347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 17:46:32.778023 kubelet[2643]: E1212 17:46:32.777986 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:46:32.778064 kubelet[2643]: E1212 17:46:32.778037 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:46:32.782582 kubelet[2643]: E1212 17:46:32.782029 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9b543fe8bed04a25a90bb20d8dd34946,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xhvd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79876b7468-t2596_calico-system(a0e276bf-1b01-48a0-beed-8d097ca9c5c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:46:32.783968 containerd[1500]: time="2025-12-12T17:46:32.783938879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:46:32.983323 containerd[1500]: time="2025-12-12T17:46:32.983271109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:46:32.984241 containerd[1500]: time="2025-12-12T17:46:32.984184505Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:46:32.984276 containerd[1500]: time="2025-12-12T17:46:32.984265864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 17:46:32.984466 kubelet[2643]: E1212 17:46:32.984418 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:46:32.984466 kubelet[2643]: E1212 17:46:32.984465 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:46:32.984612 kubelet[2643]: E1212 17:46:32.984563 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xhvd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79876b7468-t2596_calico-system(a0e276bf-1b01-48a0-beed-8d097ca9c5c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:46:32.985991 kubelet[2643]: E1212 17:46:32.985929 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79876b7468-t2596" podUID="a0e276bf-1b01-48a0-beed-8d097ca9c5c7" Dec 12 17:46:33.837640 kubelet[2643]: E1212 17:46:33.837583 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79876b7468-t2596" podUID="a0e276bf-1b01-48a0-beed-8d097ca9c5c7" Dec 12 17:46:34.114929 systemd-networkd[1415]: calieaf9e11c160: Gained IPv6LL Dec 12 17:46:35.703815 systemd[1]: Started sshd@7-10.0.0.124:22-10.0.0.1:33694.service - OpenSSH per-connection server daemon (10.0.0.1:33694). Dec 12 17:46:35.772281 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 33694 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:46:35.773592 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:46:35.778350 systemd-logind[1483]: New session 8 of user core. Dec 12 17:46:35.785927 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 17:46:35.941703 sshd[4109]: Connection closed by 10.0.0.1 port 33694 Dec 12 17:46:35.942242 sshd-session[4105]: pam_unix(sshd:session): session closed for user core Dec 12 17:46:35.945584 systemd[1]: sshd@7-10.0.0.124:22-10.0.0.1:33694.service: Deactivated successfully. Dec 12 17:46:35.947236 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 17:46:35.949339 systemd-logind[1483]: Session 8 logged out. Waiting for processes to exit. Dec 12 17:46:35.950486 systemd-logind[1483]: Removed session 8. 
Dec 12 17:46:38.706296 containerd[1500]: time="2025-12-12T17:46:38.706239236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g9s5b,Uid:371f3370-3973-43fa-a344-e5a8edf40083,Namespace:calico-system,Attempt:0,}" Dec 12 17:46:38.806316 systemd-networkd[1415]: cali17a920beebe: Link UP Dec 12 17:46:38.806523 systemd-networkd[1415]: cali17a920beebe: Gained carrier Dec 12 17:46:38.821464 containerd[1500]: 2025-12-12 17:46:38.734 [INFO][4197] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:46:38.821464 containerd[1500]: 2025-12-12 17:46:38.748 [INFO][4197] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--g9s5b-eth0 goldmane-666569f655- calico-system 371f3370-3973-43fa-a344-e5a8edf40083 796 0 2025-12-12 17:46:15 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-g9s5b eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali17a920beebe [] [] <nil>}} ContainerID="9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80" Namespace="calico-system" Pod="goldmane-666569f655-g9s5b" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g9s5b-" Dec 12 17:46:38.821464 containerd[1500]: 2025-12-12 17:46:38.748 [INFO][4197] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80" Namespace="calico-system" Pod="goldmane-666569f655-g9s5b" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g9s5b-eth0" Dec 12 17:46:38.821464 containerd[1500]: 2025-12-12 17:46:38.772 [INFO][4211] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80" HandleID="k8s-pod-network.9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80" Workload="localhost-k8s-goldmane--666569f655--g9s5b-eth0" Dec 12 17:46:38.821665 containerd[1500]: 2025-12-12 17:46:38.773 [INFO][4211] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80" HandleID="k8s-pod-network.9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80" Workload="localhost-k8s-goldmane--666569f655--g9s5b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001375f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-g9s5b", "timestamp":"2025-12-12 17:46:38.772722741 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:46:38.821665 containerd[1500]: 2025-12-12 17:46:38.773 [INFO][4211] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:46:38.821665 containerd[1500]: 2025-12-12 17:46:38.773 [INFO][4211] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Dec 12 17:46:38.821665 containerd[1500]: 2025-12-12 17:46:38.773 [INFO][4211] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:46:38.821665 containerd[1500]: 2025-12-12 17:46:38.782 [INFO][4211] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80" host="localhost" Dec 12 17:46:38.821665 containerd[1500]: 2025-12-12 17:46:38.785 [INFO][4211] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:46:38.821665 containerd[1500]: 2025-12-12 17:46:38.789 [INFO][4211] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:46:38.821665 containerd[1500]: 2025-12-12 17:46:38.790 [INFO][4211] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:46:38.821665 containerd[1500]: 2025-12-12 17:46:38.792 [INFO][4211] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:46:38.821665 containerd[1500]: 2025-12-12 17:46:38.792 [INFO][4211] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80" host="localhost" Dec 12 17:46:38.821979 containerd[1500]: 2025-12-12 17:46:38.794 [INFO][4211] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80 Dec 12 17:46:38.821979 containerd[1500]: 2025-12-12 17:46:38.797 [INFO][4211] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80" host="localhost" Dec 12 17:46:38.821979 containerd[1500]: 2025-12-12 17:46:38.802 [INFO][4211] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80" host="localhost" Dec 12 17:46:38.821979 containerd[1500]: 2025-12-12 17:46:38.802 [INFO][4211] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80" host="localhost" Dec 12 17:46:38.821979 containerd[1500]: 2025-12-12 17:46:38.802 [INFO][4211] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:46:38.821979 containerd[1500]: 2025-12-12 17:46:38.802 [INFO][4211] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80" HandleID="k8s-pod-network.9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80" Workload="localhost-k8s-goldmane--666569f655--g9s5b-eth0" Dec 12 17:46:38.822089 containerd[1500]: 2025-12-12 17:46:38.804 [INFO][4197] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80" Namespace="calico-system" Pod="goldmane-666569f655-g9s5b" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g9s5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--g9s5b-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"371f3370-3973-43fa-a344-e5a8edf40083", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 46, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-g9s5b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali17a920beebe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:46:38.822089 containerd[1500]: 2025-12-12 17:46:38.805 [INFO][4197] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80" Namespace="calico-system" Pod="goldmane-666569f655-g9s5b" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g9s5b-eth0" Dec 12 17:46:38.822163 containerd[1500]: 2025-12-12 17:46:38.805 [INFO][4197] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17a920beebe ContainerID="9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80" Namespace="calico-system" Pod="goldmane-666569f655-g9s5b" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g9s5b-eth0" Dec 12 17:46:38.822163 containerd[1500]: 2025-12-12 17:46:38.806 [INFO][4197] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80" Namespace="calico-system" Pod="goldmane-666569f655-g9s5b" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g9s5b-eth0" Dec 12 17:46:38.822200 containerd[1500]: 2025-12-12 17:46:38.808 [INFO][4197] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80" Namespace="calico-system" Pod="goldmane-666569f655-g9s5b" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g9s5b-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--g9s5b-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"371f3370-3973-43fa-a344-e5a8edf40083", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 46, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80", Pod:"goldmane-666569f655-g9s5b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali17a920beebe", MAC:"2e:18:76:8d:69:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:46:38.822248 containerd[1500]: 2025-12-12 17:46:38.819 [INFO][4197] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80" Namespace="calico-system" Pod="goldmane-666569f655-g9s5b" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--g9s5b-eth0" Dec 12 17:46:38.848154 containerd[1500]: time="2025-12-12T17:46:38.848100691Z" level=info msg="connecting to shim 9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80" address="unix:///run/containerd/s/af28fd8e92237875f197fb9c7f6d914dc34bc91195828713a1e30f2ff5c3cce6" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:46:38.871932 systemd[1]: Started cri-containerd-9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80.scope - libcontainer container 9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80.
Dec 12 17:46:38.883861 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:46:38.903866 containerd[1500]: time="2025-12-12T17:46:38.903813438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-g9s5b,Uid:371f3370-3973-43fa-a344-e5a8edf40083,Namespace:calico-system,Attempt:0,} returns sandbox id \"9dcebebe3e1f2e14b37703bfaf48562374ac85b0adc53c6afea95f24aab61d80\"" Dec 12 17:46:38.905513 containerd[1500]: time="2025-12-12T17:46:38.905479951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:46:39.115104 containerd[1500]: time="2025-12-12T17:46:39.115049516Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:46:39.115985 containerd[1500]: time="2025-12-12T17:46:39.115947713Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:46:39.116046 containerd[1500]: time="2025-12-12T17:46:39.115981433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 17:46:39.117087 kubelet[2643]: E1212 17:46:39.116906 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:46:39.117087 kubelet[2643]: E1212 17:46:39.116962 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:46:39.117541 kubelet[2643]: E1212 17:46:39.117461 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrqfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g9s5b_calico-system(371f3370-3973-43fa-a344-e5a8edf40083): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:46:39.118682 kubelet[2643]: E1212 17:46:39.118649 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g9s5b" podUID="371f3370-3973-43fa-a344-e5a8edf40083" Dec 12 17:46:39.701147 kubelet[2643]: I1212 
17:46:39.701017 2643 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:46:39.706726 containerd[1500]: time="2025-12-12T17:46:39.706397936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b7547c5c-sb784,Uid:34789b1f-3cfd-4180-8a83-14eb6215f63c,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:46:39.832437 systemd-networkd[1415]: cali9230c895d56: Link UP Dec 12 17:46:39.832924 systemd-networkd[1415]: cali9230c895d56: Gained carrier Dec 12 17:46:39.847103 containerd[1500]: 2025-12-12 17:46:39.738 [INFO][4295] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:46:39.847103 containerd[1500]: 2025-12-12 17:46:39.757 [INFO][4295] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67b7547c5c--sb784-eth0 calico-apiserver-67b7547c5c- calico-apiserver 34789b1f-3cfd-4180-8a83-14eb6215f63c 791 0 2025-12-12 17:46:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67b7547c5c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67b7547c5c-sb784 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9230c895d56 [] [] }} ContainerID="5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10" Namespace="calico-apiserver" Pod="calico-apiserver-67b7547c5c-sb784" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7547c5c--sb784-" Dec 12 17:46:39.847103 containerd[1500]: 2025-12-12 17:46:39.757 [INFO][4295] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10" Namespace="calico-apiserver" Pod="calico-apiserver-67b7547c5c-sb784" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7547c5c--sb784-eth0" Dec 12 17:46:39.847103 containerd[1500]: 2025-12-12 17:46:39.784 [INFO][4312] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10" HandleID="k8s-pod-network.5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10" Workload="localhost-k8s-calico--apiserver--67b7547c5c--sb784-eth0" Dec 12 17:46:39.847629 containerd[1500]: 2025-12-12 17:46:39.784 [INFO][4312] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10" HandleID="k8s-pod-network.5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10" Workload="localhost-k8s-calico--apiserver--67b7547c5c--sb784-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd6c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67b7547c5c-sb784", "timestamp":"2025-12-12 17:46:39.784131324 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:46:39.847629 containerd[1500]: 2025-12-12 17:46:39.784 [INFO][4312] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:46:39.847629 containerd[1500]: 2025-12-12 17:46:39.784 [INFO][4312] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:46:39.847629 containerd[1500]: 2025-12-12 17:46:39.784 [INFO][4312] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:46:39.847629 containerd[1500]: 2025-12-12 17:46:39.794 [INFO][4312] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10" host="localhost" Dec 12 17:46:39.847629 containerd[1500]: 2025-12-12 17:46:39.798 [INFO][4312] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:46:39.847629 containerd[1500]: 2025-12-12 17:46:39.809 [INFO][4312] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:46:39.847629 containerd[1500]: 2025-12-12 17:46:39.811 [INFO][4312] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:46:39.847629 containerd[1500]: 2025-12-12 17:46:39.813 [INFO][4312] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:46:39.847629 containerd[1500]: 2025-12-12 17:46:39.813 [INFO][4312] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10" host="localhost" Dec 12 17:46:39.848681 containerd[1500]: 2025-12-12 17:46:39.815 [INFO][4312] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10 Dec 12 17:46:39.848681 containerd[1500]: 2025-12-12 17:46:39.820 [INFO][4312] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10" host="localhost" Dec 12 17:46:39.848681 containerd[1500]: 2025-12-12 17:46:39.827 [INFO][4312] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10" host="localhost" Dec 12 17:46:39.848681 containerd[1500]: 2025-12-12 17:46:39.827 [INFO][4312] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10" host="localhost" Dec 12 17:46:39.848681 containerd[1500]: 2025-12-12 17:46:39.827 [INFO][4312] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:46:39.848681 containerd[1500]: 2025-12-12 17:46:39.827 [INFO][4312] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10" HandleID="k8s-pod-network.5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10" Workload="localhost-k8s-calico--apiserver--67b7547c5c--sb784-eth0" Dec 12 17:46:39.849480 containerd[1500]: 2025-12-12 17:46:39.830 [INFO][4295] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10" Namespace="calico-apiserver" Pod="calico-apiserver-67b7547c5c-sb784" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7547c5c--sb784-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b7547c5c--sb784-eth0", GenerateName:"calico-apiserver-67b7547c5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"34789b1f-3cfd-4180-8a83-14eb6215f63c", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 46, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b7547c5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67b7547c5c-sb784", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9230c895d56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:46:39.849664 containerd[1500]: 2025-12-12 17:46:39.830 [INFO][4295] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10" Namespace="calico-apiserver" Pod="calico-apiserver-67b7547c5c-sb784" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7547c5c--sb784-eth0" Dec 12 17:46:39.849664 containerd[1500]: 2025-12-12 17:46:39.830 [INFO][4295] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9230c895d56 ContainerID="5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10" Namespace="calico-apiserver" Pod="calico-apiserver-67b7547c5c-sb784" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7547c5c--sb784-eth0" Dec 12 17:46:39.849664 containerd[1500]: 2025-12-12 17:46:39.832 [INFO][4295] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10" Namespace="calico-apiserver" Pod="calico-apiserver-67b7547c5c-sb784" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7547c5c--sb784-eth0" Dec 12 17:46:39.849804 containerd[1500]: 2025-12-12 17:46:39.833 [INFO][4295] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10" Namespace="calico-apiserver" Pod="calico-apiserver-67b7547c5c-sb784" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7547c5c--sb784-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b7547c5c--sb784-eth0", GenerateName:"calico-apiserver-67b7547c5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"34789b1f-3cfd-4180-8a83-14eb6215f63c", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 46, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b7547c5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10", Pod:"calico-apiserver-67b7547c5c-sb784", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9230c895d56", MAC:"8e:21:98:a8:50:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:46:39.849956 containerd[1500]: 2025-12-12 17:46:39.844 [INFO][4295] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10" Namespace="calico-apiserver" Pod="calico-apiserver-67b7547c5c-sb784" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7547c5c--sb784-eth0" Dec 12 17:46:39.852050 kubelet[2643]: E1212 17:46:39.851893 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g9s5b" podUID="371f3370-3973-43fa-a344-e5a8edf40083" Dec 12 17:46:39.881329 containerd[1500]: time="2025-12-12T17:46:39.880858041Z" level=info msg="connecting to shim 5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10" address="unix:///run/containerd/s/7d32ea7dadfac55456d75bcaad14c24b25e94627be9f152e63798c658e6d5a83" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:46:39.902064 systemd[1]: Started cri-containerd-5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10.scope - libcontainer container 5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10. 
Dec 12 17:46:39.916225 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:46:39.941593 containerd[1500]: time="2025-12-12T17:46:39.941553973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b7547c5c-sb784,Uid:34789b1f-3cfd-4180-8a83-14eb6215f63c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5ebf7a70733b8b0a25b9f4111642d80fe571c099ac9074494086f0528feb9a10\"" Dec 12 17:46:39.943185 containerd[1500]: time="2025-12-12T17:46:39.943160087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:46:40.088082 systemd-networkd[1415]: vxlan.calico: Link UP Dec 12 17:46:40.088089 systemd-networkd[1415]: vxlan.calico: Gained carrier Dec 12 17:46:40.140211 containerd[1500]: time="2025-12-12T17:46:40.140134598Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:46:40.141392 containerd[1500]: time="2025-12-12T17:46:40.141349634Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:46:40.141443 containerd[1500]: time="2025-12-12T17:46:40.141407674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:46:40.141622 kubelet[2643]: E1212 17:46:40.141570 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:46:40.142037 kubelet[2643]: E1212 17:46:40.141634 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:46:40.142194 kubelet[2643]: E1212 17:46:40.141800 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vdf99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67b7547c5c-sb784_calico-apiserver(34789b1f-3cfd-4180-8a83-14eb6215f63c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:46:40.143819 kubelet[2643]: E1212 17:46:40.143773 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7547c5c-sb784" podUID="34789b1f-3cfd-4180-8a83-14eb6215f63c" Dec 12 17:46:40.194885 systemd-networkd[1415]: cali17a920beebe: Gained IPv6LL Dec 12 17:46:40.709337 containerd[1500]: time="2025-12-12T17:46:40.709289946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bc564446d-p8ff5,Uid:62a80cbc-f140-42e5-895b-a84f9cbc2fab,Namespace:calico-system,Attempt:0,}" Dec 12 17:46:40.710017 containerd[1500]: time="2025-12-12T17:46:40.709830744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k5kkz,Uid:0cf10505-a996-4f57-b08c-e0c0817270b9,Namespace:kube-system,Attempt:0,}" Dec 12 17:46:40.710017 containerd[1500]: time="2025-12-12T17:46:40.709878864Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-67b7547c5c-fcbnc,Uid:904021b1-af08-4d93-b11e-a0cb970db885,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:46:40.828338 systemd-networkd[1415]: cali29c498bdaed: Link UP Dec 12 17:46:40.829342 systemd-networkd[1415]: cali29c498bdaed: Gained carrier Dec 12 17:46:40.843366 containerd[1500]: 2025-12-12 17:46:40.753 [INFO][4509] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67b7547c5c--fcbnc-eth0 calico-apiserver-67b7547c5c- calico-apiserver 904021b1-af08-4d93-b11e-a0cb970db885 794 0 2025-12-12 17:46:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67b7547c5c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67b7547c5c-fcbnc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali29c498bdaed [] [] }} ContainerID="60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a" Namespace="calico-apiserver" Pod="calico-apiserver-67b7547c5c-fcbnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7547c5c--fcbnc-" Dec 12 17:46:40.843366 containerd[1500]: 2025-12-12 17:46:40.753 [INFO][4509] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a" Namespace="calico-apiserver" Pod="calico-apiserver-67b7547c5c-fcbnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7547c5c--fcbnc-eth0" Dec 12 17:46:40.843366 containerd[1500]: 2025-12-12 17:46:40.789 [INFO][4553] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a" HandleID="k8s-pod-network.60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a" Workload="localhost-k8s-calico--apiserver--67b7547c5c--fcbnc-eth0" Dec 12 17:46:40.843732 containerd[1500]: 2025-12-12 17:46:40.790 [INFO][4553] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a" HandleID="k8s-pod-network.60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a" Workload="localhost-k8s-calico--apiserver--67b7547c5c--fcbnc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030e9e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67b7547c5c-fcbnc", "timestamp":"2025-12-12 17:46:40.78981901 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:46:40.843732 containerd[1500]: 2025-12-12 17:46:40.790 [INFO][4553] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:46:40.843732 containerd[1500]: 2025-12-12 17:46:40.790 [INFO][4553] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:46:40.843732 containerd[1500]: 2025-12-12 17:46:40.790 [INFO][4553] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:46:40.843732 containerd[1500]: 2025-12-12 17:46:40.799 [INFO][4553] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a" host="localhost" Dec 12 17:46:40.843732 containerd[1500]: 2025-12-12 17:46:40.803 [INFO][4553] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:46:40.843732 containerd[1500]: 2025-12-12 17:46:40.806 [INFO][4553] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:46:40.843732 containerd[1500]: 2025-12-12 17:46:40.808 [INFO][4553] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:46:40.843732 containerd[1500]: 2025-12-12 17:46:40.812 [INFO][4553] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:46:40.843732 containerd[1500]: 2025-12-12 17:46:40.812 [INFO][4553] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a" host="localhost" Dec 12 17:46:40.844018 containerd[1500]: 2025-12-12 17:46:40.813 [INFO][4553] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a Dec 12 17:46:40.844018 containerd[1500]: 2025-12-12 17:46:40.817 [INFO][4553] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a" host="localhost" Dec 12 17:46:40.844018 containerd[1500]: 2025-12-12 17:46:40.823 [INFO][4553] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a" host="localhost" Dec 12 17:46:40.844018 containerd[1500]: 2025-12-12 17:46:40.823 [INFO][4553] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a" host="localhost" Dec 12 17:46:40.844018 containerd[1500]: 2025-12-12 17:46:40.823 [INFO][4553] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:46:40.844018 containerd[1500]: 2025-12-12 17:46:40.823 [INFO][4553] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a" HandleID="k8s-pod-network.60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a" Workload="localhost-k8s-calico--apiserver--67b7547c5c--fcbnc-eth0" Dec 12 17:46:40.844134 containerd[1500]: 2025-12-12 17:46:40.826 [INFO][4509] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a" Namespace="calico-apiserver" Pod="calico-apiserver-67b7547c5c-fcbnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7547c5c--fcbnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b7547c5c--fcbnc-eth0", GenerateName:"calico-apiserver-67b7547c5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"904021b1-af08-4d93-b11e-a0cb970db885", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 46, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b7547c5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67b7547c5c-fcbnc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29c498bdaed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:46:40.844187 containerd[1500]: 2025-12-12 17:46:40.826 [INFO][4509] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a" Namespace="calico-apiserver" Pod="calico-apiserver-67b7547c5c-fcbnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7547c5c--fcbnc-eth0" Dec 12 17:46:40.844187 containerd[1500]: 2025-12-12 17:46:40.826 [INFO][4509] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29c498bdaed ContainerID="60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a" Namespace="calico-apiserver" Pod="calico-apiserver-67b7547c5c-fcbnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7547c5c--fcbnc-eth0" Dec 12 17:46:40.844187 containerd[1500]: 2025-12-12 17:46:40.830 [INFO][4509] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a" Namespace="calico-apiserver" Pod="calico-apiserver-67b7547c5c-fcbnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7547c5c--fcbnc-eth0" Dec 12 17:46:40.844250 containerd[1500]: 2025-12-12 17:46:40.830 [INFO][4509] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a" Namespace="calico-apiserver" Pod="calico-apiserver-67b7547c5c-fcbnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7547c5c--fcbnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b7547c5c--fcbnc-eth0", GenerateName:"calico-apiserver-67b7547c5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"904021b1-af08-4d93-b11e-a0cb970db885", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 46, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b7547c5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a", Pod:"calico-apiserver-67b7547c5c-fcbnc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29c498bdaed", MAC:"7a:69:c7:95:4a:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:46:40.844299 containerd[1500]: 2025-12-12 17:46:40.839 [INFO][4509] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a" Namespace="calico-apiserver" Pod="calico-apiserver-67b7547c5c-fcbnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b7547c5c--fcbnc-eth0" Dec 12 17:46:40.854941 kubelet[2643]: E1212 17:46:40.854702 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g9s5b" podUID="371f3370-3973-43fa-a344-e5a8edf40083" Dec 12 17:46:40.854941 kubelet[2643]: E1212 17:46:40.854765 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7547c5c-sb784" podUID="34789b1f-3cfd-4180-8a83-14eb6215f63c" Dec 12 17:46:40.889649 containerd[1500]: time="2025-12-12T17:46:40.889598843Z" level=info msg="connecting to shim 
60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a" address="unix:///run/containerd/s/700b5c311cd7551db139192e34772604365702a1dc8e6339c8820e0d648ef97d" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:46:40.922945 systemd[1]: Started cri-containerd-60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a.scope - libcontainer container 60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a. Dec 12 17:46:40.938326 systemd-networkd[1415]: calida5dd5b6c05: Link UP Dec 12 17:46:40.938459 systemd-networkd[1415]: calida5dd5b6c05: Gained carrier Dec 12 17:46:40.956884 systemd[1]: Started sshd@8-10.0.0.124:22-10.0.0.1:39286.service - OpenSSH per-connection server daemon (10.0.0.1:39286). Dec 12 17:46:40.957695 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:46:40.963453 systemd-networkd[1415]: cali9230c895d56: Gained IPv6LL Dec 12 17:46:40.970017 containerd[1500]: 2025-12-12 17:46:40.763 [INFO][4533] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--k5kkz-eth0 coredns-668d6bf9bc- kube-system 0cf10505-a996-4f57-b08c-e0c0817270b9 795 0 2025-12-12 17:46:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-k5kkz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calida5dd5b6c05 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-k5kkz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k5kkz-" Dec 12 17:46:40.970017 containerd[1500]: 2025-12-12 17:46:40.763 [INFO][4533] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-k5kkz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k5kkz-eth0" Dec 12 17:46:40.970017 containerd[1500]: 2025-12-12 17:46:40.789 [INFO][4561] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8" HandleID="k8s-pod-network.9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8" Workload="localhost-k8s-coredns--668d6bf9bc--k5kkz-eth0" Dec 12 17:46:40.970191 containerd[1500]: 2025-12-12 17:46:40.790 [INFO][4561] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8" HandleID="k8s-pod-network.9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8" Workload="localhost-k8s-coredns--668d6bf9bc--k5kkz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001216e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-k5kkz", "timestamp":"2025-12-12 17:46:40.78981793 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:46:40.970191 containerd[1500]: 2025-12-12 17:46:40.790 [INFO][4561] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 12 17:46:40.970191 containerd[1500]: 2025-12-12 17:46:40.823 [INFO][4561] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:46:40.970191 containerd[1500]: 2025-12-12 17:46:40.823 [INFO][4561] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:46:40.970191 containerd[1500]: 2025-12-12 17:46:40.902 [INFO][4561] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8" host="localhost" Dec 12 17:46:40.970191 containerd[1500]: 2025-12-12 17:46:40.908 [INFO][4561] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:46:40.970191 containerd[1500]: 2025-12-12 17:46:40.913 [INFO][4561] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:46:40.970191 containerd[1500]: 2025-12-12 17:46:40.916 [INFO][4561] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:46:40.970191 containerd[1500]: 2025-12-12 17:46:40.918 [INFO][4561] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:46:40.970191 containerd[1500]: 2025-12-12 17:46:40.918 [INFO][4561] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8" host="localhost" Dec 12 17:46:40.970391 containerd[1500]: 2025-12-12 17:46:40.920 [INFO][4561] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8 Dec 12 17:46:40.970391 containerd[1500]: 2025-12-12 17:46:40.925 [INFO][4561] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8" host="localhost" Dec 12 17:46:40.970391 containerd[1500]: 2025-12-12 17:46:40.932 [INFO][4561] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8" host="localhost" Dec 12 17:46:40.970391 containerd[1500]: 2025-12-12 17:46:40.932 [INFO][4561] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8" host="localhost" Dec 12 17:46:40.970391 containerd[1500]: 2025-12-12 17:46:40.932 [INFO][4561] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:46:40.970391 containerd[1500]: 2025-12-12 17:46:40.932 [INFO][4561] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8" HandleID="k8s-pod-network.9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8" Workload="localhost-k8s-coredns--668d6bf9bc--k5kkz-eth0" Dec 12 17:46:40.970507 containerd[1500]: 2025-12-12 17:46:40.936 [INFO][4533] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-k5kkz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k5kkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--k5kkz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0cf10505-a996-4f57-b08c-e0c0817270b9", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 46, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-k5kkz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida5dd5b6c05", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:46:40.970571 containerd[1500]: 2025-12-12 17:46:40.936 [INFO][4533] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-k5kkz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k5kkz-eth0" Dec 12 17:46:40.970571 containerd[1500]: 2025-12-12 17:46:40.936 [INFO][4533] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida5dd5b6c05 ContainerID="9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-k5kkz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k5kkz-eth0" Dec 12 17:46:40.970571 containerd[1500]: 2025-12-12 17:46:40.938 [INFO][4533] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-k5kkz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k5kkz-eth0" Dec 12 17:46:40.970634 
containerd[1500]: 2025-12-12 17:46:40.938 [INFO][4533] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-k5kkz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k5kkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--k5kkz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0cf10505-a996-4f57-b08c-e0c0817270b9", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 46, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8", Pod:"coredns-668d6bf9bc-k5kkz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida5dd5b6c05", MAC:"02:d6:d3:27:f0:19", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:46:40.970634 containerd[1500]: 2025-12-12 17:46:40.965 [INFO][4533] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8" Namespace="kube-system" Pod="coredns-668d6bf9bc-k5kkz" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--k5kkz-eth0" Dec 12 17:46:40.997480 containerd[1500]: time="2025-12-12T17:46:40.997427926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b7547c5c-fcbnc,Uid:904021b1-af08-4d93-b11e-a0cb970db885,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"60d34d7c09dc36aedf5691ed67f1e17cf341f3f89617f486e6d32d231da9b76a\"" Dec 12 17:46:41.000716 containerd[1500]: time="2025-12-12T17:46:41.000108077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:46:41.012957 containerd[1500]: time="2025-12-12T17:46:41.012907470Z" level=info msg="connecting to shim 9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8" address="unix:///run/containerd/s/8fcfe36ce096715a26e6f06164578846c09afe919d30b3660b8aa8153e413e83" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:46:41.038004 systemd[1]: Started cri-containerd-9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8.scope - libcontainer container 
9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8. Dec 12 17:46:41.039165 sshd[4632]: Accepted publickey for core from 10.0.0.1 port 39286 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:46:41.041139 sshd-session[4632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:46:41.046644 systemd-logind[1483]: New session 9 of user core. Dec 12 17:46:41.049485 systemd-networkd[1415]: califd51aec213f: Link UP Dec 12 17:46:41.050474 systemd-networkd[1415]: califd51aec213f: Gained carrier Dec 12 17:46:41.051072 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 17:46:41.056641 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:46:41.069720 containerd[1500]: 2025-12-12 17:46:40.761 [INFO][4508] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6bc564446d--p8ff5-eth0 calico-kube-controllers-6bc564446d- calico-system 62a80cbc-f140-42e5-895b-a84f9cbc2fab 793 0 2025-12-12 17:46:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bc564446d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6bc564446d-p8ff5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califd51aec213f [] [] }} ContainerID="d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696" Namespace="calico-system" Pod="calico-kube-controllers-6bc564446d-p8ff5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bc564446d--p8ff5-" Dec 12 17:46:41.069720 containerd[1500]: 2025-12-12 17:46:40.761 [INFO][4508] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696" Namespace="calico-system" Pod="calico-kube-controllers-6bc564446d-p8ff5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bc564446d--p8ff5-eth0" Dec 12 17:46:41.069720 containerd[1500]: 2025-12-12 17:46:40.791 [INFO][4560] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696" HandleID="k8s-pod-network.d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696" Workload="localhost-k8s-calico--kube--controllers--6bc564446d--p8ff5-eth0" Dec 12 17:46:41.069720 containerd[1500]: 2025-12-12 17:46:40.791 [INFO][4560] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696" HandleID="k8s-pod-network.d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696" Workload="localhost-k8s-calico--kube--controllers--6bc564446d--p8ff5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb800), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6bc564446d-p8ff5", "timestamp":"2025-12-12 17:46:40.791665163 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:46:41.069720 containerd[1500]: 2025-12-12 17:46:40.791 [INFO][4560] ipam/ipam_plugin.go 377: About to 
acquire host-wide IPAM lock. Dec 12 17:46:41.069720 containerd[1500]: 2025-12-12 17:46:40.932 [INFO][4560] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:46:41.069720 containerd[1500]: 2025-12-12 17:46:40.933 [INFO][4560] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:46:41.069720 containerd[1500]: 2025-12-12 17:46:41.002 [INFO][4560] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696" host="localhost" Dec 12 17:46:41.069720 containerd[1500]: 2025-12-12 17:46:41.008 [INFO][4560] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:46:41.069720 containerd[1500]: 2025-12-12 17:46:41.013 [INFO][4560] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:46:41.069720 containerd[1500]: 2025-12-12 17:46:41.015 [INFO][4560] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:46:41.069720 containerd[1500]: 2025-12-12 17:46:41.019 [INFO][4560] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:46:41.069720 containerd[1500]: 2025-12-12 17:46:41.019 [INFO][4560] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696" host="localhost" Dec 12 17:46:41.069720 containerd[1500]: 2025-12-12 17:46:41.021 [INFO][4560] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696 Dec 12 17:46:41.069720 containerd[1500]: 2025-12-12 17:46:41.028 [INFO][4560] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696" host="localhost" Dec 12 17:46:41.069720 containerd[1500]: 2025-12-12 17:46:41.036 [INFO][4560] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696" host="localhost" Dec 12 17:46:41.069720 containerd[1500]: 2025-12-12 17:46:41.036 [INFO][4560] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696" host="localhost" Dec 12 17:46:41.069720 containerd[1500]: 2025-12-12 17:46:41.036 [INFO][4560] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:46:41.069720 containerd[1500]: 2025-12-12 17:46:41.036 [INFO][4560] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696" HandleID="k8s-pod-network.d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696" Workload="localhost-k8s-calico--kube--controllers--6bc564446d--p8ff5-eth0" Dec 12 17:46:41.070310 containerd[1500]: 2025-12-12 17:46:41.044 [INFO][4508] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696" Namespace="calico-system" Pod="calico-kube-controllers-6bc564446d-p8ff5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bc564446d--p8ff5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bc564446d--p8ff5-eth0", GenerateName:"calico-kube-controllers-6bc564446d-", Namespace:"calico-system", SelfLink:"", UID:"62a80cbc-f140-42e5-895b-a84f9cbc2fab", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 46, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bc564446d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6bc564446d-p8ff5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd51aec213f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:46:41.070310 containerd[1500]: 2025-12-12 17:46:41.045 [INFO][4508] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696" Namespace="calico-system" Pod="calico-kube-controllers-6bc564446d-p8ff5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bc564446d--p8ff5-eth0" Dec 12 17:46:41.070310 containerd[1500]: 2025-12-12 17:46:41.045 [INFO][4508] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd51aec213f ContainerID="d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696" Namespace="calico-system" Pod="calico-kube-controllers-6bc564446d-p8ff5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bc564446d--p8ff5-eth0" Dec 12 17:46:41.070310 containerd[1500]: 2025-12-12 17:46:41.051 [INFO][4508] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696" Namespace="calico-system" Pod="calico-kube-controllers-6bc564446d-p8ff5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bc564446d--p8ff5-eth0" Dec 12 17:46:41.070310 containerd[1500]: 2025-12-12 17:46:41.052 [INFO][4508] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696" Namespace="calico-system" Pod="calico-kube-controllers-6bc564446d-p8ff5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bc564446d--p8ff5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bc564446d--p8ff5-eth0", GenerateName:"calico-kube-controllers-6bc564446d-", Namespace:"calico-system", SelfLink:"", UID:"62a80cbc-f140-42e5-895b-a84f9cbc2fab", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 46, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bc564446d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696", Pod:"calico-kube-controllers-6bc564446d-p8ff5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd51aec213f", MAC:"52:12:6a:e3:02:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:46:41.070310 containerd[1500]: 2025-12-12 17:46:41.064 [INFO][4508] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696" Namespace="calico-system" Pod="calico-kube-controllers-6bc564446d-p8ff5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bc564446d--p8ff5-eth0" Dec 12 17:46:41.084488 containerd[1500]: time="2025-12-12T17:46:41.084454773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k5kkz,Uid:0cf10505-a996-4f57-b08c-e0c0817270b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8\"" Dec 12 17:46:41.089798 containerd[1500]: time="2025-12-12T17:46:41.089479314Z" level=info msg="CreateContainer within sandbox \"9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:46:41.098845 containerd[1500]: time="2025-12-12T17:46:41.098790841Z" level=info msg="connecting to shim d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696" address="unix:///run/containerd/s/57f8968a6f12ec2938361fabad3c5deabeb456f0d1d5a3ad5c76d09bdc7d8705" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:46:41.101250 containerd[1500]: time="2025-12-12T17:46:41.101214872Z" level=info msg="Container 5c4165f5aa755647c2c777843a95418ffde355c923944db06d080d7cccc99389: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:46:41.107005 containerd[1500]: time="2025-12-12T17:46:41.106964291Z" level=info msg="CreateContainer within sandbox 
\"9b14355ea3c6eaa6609976fcd4e4bd51614f99efbffd6ac1845980a9bb77f5f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c4165f5aa755647c2c777843a95418ffde355c923944db06d080d7cccc99389\"" Dec 12 17:46:41.111608 containerd[1500]: time="2025-12-12T17:46:41.111577075Z" level=info msg="StartContainer for \"5c4165f5aa755647c2c777843a95418ffde355c923944db06d080d7cccc99389\"" Dec 12 17:46:41.115779 containerd[1500]: time="2025-12-12T17:46:41.115359821Z" level=info msg="connecting to shim 5c4165f5aa755647c2c777843a95418ffde355c923944db06d080d7cccc99389" address="unix:///run/containerd/s/8fcfe36ce096715a26e6f06164578846c09afe919d30b3660b8aa8153e413e83" protocol=ttrpc version=3 Dec 12 17:46:41.119906 systemd[1]: Started cri-containerd-d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696.scope - libcontainer container d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696. Dec 12 17:46:41.140049 systemd[1]: Started cri-containerd-5c4165f5aa755647c2c777843a95418ffde355c923944db06d080d7cccc99389.scope - libcontainer container 5c4165f5aa755647c2c777843a95418ffde355c923944db06d080d7cccc99389. Dec 12 17:46:41.146938 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:46:41.174162 containerd[1500]: time="2025-12-12T17:46:41.174118090Z" level=info msg="StartContainer for \"5c4165f5aa755647c2c777843a95418ffde355c923944db06d080d7cccc99389\" returns successfully" Dec 12 17:46:41.183846 containerd[1500]: time="2025-12-12T17:46:41.183783855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bc564446d-p8ff5,Uid:62a80cbc-f140-42e5-895b-a84f9cbc2fab,Namespace:calico-system,Attempt:0,} returns sandbox id \"d7caa080a6abcea40349432396417e7bf57eeba18f19cadc3397827ef74a9696\"" Dec 12 17:46:41.185495 containerd[1500]: time="2025-12-12T17:46:41.185374409Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:46:41.186997 containerd[1500]: time="2025-12-12T17:46:41.186787244Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:46:41.187652 containerd[1500]: time="2025-12-12T17:46:41.186957123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:46:41.187902 kubelet[2643]: E1212 17:46:41.187856 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:46:41.189292 kubelet[2643]: E1212 17:46:41.187904 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:46:41.189292 kubelet[2643]: E1212 17:46:41.188505 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ckqtn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67b7547c5c-fcbnc_calico-apiserver(904021b1-af08-4d93-b11e-a0cb970db885): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:46:41.189462 containerd[1500]: time="2025-12-12T17:46:41.188757117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:46:41.191011 kubelet[2643]: E1212 17:46:41.190933 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7547c5c-fcbnc" podUID="904021b1-af08-4d93-b11e-a0cb970db885" Dec 12 17:46:41.246913 sshd[4691]: Connection closed by 10.0.0.1 port 39286 Dec 12 17:46:41.248336 sshd-session[4632]: pam_unix(sshd:session): session closed for user core Dec 12 17:46:41.253535 systemd[1]: sshd@8-10.0.0.124:22-10.0.0.1:39286.service: Deactivated successfully. Dec 12 17:46:41.255365 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 17:46:41.256861 systemd-logind[1483]: Session 9 logged out. 
Waiting for processes to exit. Dec 12 17:46:41.258595 systemd-logind[1483]: Removed session 9. Dec 12 17:46:41.346904 systemd-networkd[1415]: vxlan.calico: Gained IPv6LL Dec 12 17:46:41.379002 containerd[1500]: time="2025-12-12T17:46:41.378936271Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:46:41.379822 containerd[1500]: time="2025-12-12T17:46:41.379737789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:46:41.379883 containerd[1500]: time="2025-12-12T17:46:41.379825188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 17:46:41.380266 kubelet[2643]: E1212 17:46:41.380023 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:46:41.380266 kubelet[2643]: E1212 17:46:41.380086 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:46:41.380266 kubelet[2643]: E1212 17:46:41.380212 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47qrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6bc564446d-p8ff5_calico-system(62a80cbc-f140-42e5-895b-a84f9cbc2fab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:46:41.381555 kubelet[2643]: E1212 17:46:41.381511 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bc564446d-p8ff5" podUID="62a80cbc-f140-42e5-895b-a84f9cbc2fab" Dec 12 17:46:41.706562 containerd[1500]: time="2025-12-12T17:46:41.706515211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7rtzv,Uid:3b481ea6-5954-4cd3-8bf4-dd4b768ccfaa,Namespace:kube-system,Attempt:0,}" Dec 12 17:46:41.806192 systemd-networkd[1415]: cali5eb0ab51e96: Link UP Dec 12 17:46:41.806684 systemd-networkd[1415]: cali5eb0ab51e96: Gained carrier Dec 12 17:46:41.822348 containerd[1500]: 2025-12-12 17:46:41.741 [INFO][4802] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--7rtzv-eth0 coredns-668d6bf9bc- kube-system 3b481ea6-5954-4cd3-8bf4-dd4b768ccfaa 790 0 2025-12-12 17:46:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-7rtzv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5eb0ab51e96 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394" Namespace="kube-system" Pod="coredns-668d6bf9bc-7rtzv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7rtzv-" Dec 12 17:46:41.822348 containerd[1500]: 2025-12-12 17:46:41.742 [INFO][4802] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394" Namespace="kube-system" Pod="coredns-668d6bf9bc-7rtzv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7rtzv-eth0" Dec 12 17:46:41.822348 containerd[1500]: 2025-12-12 17:46:41.765 [INFO][4815] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394" HandleID="k8s-pod-network.c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394" Workload="localhost-k8s-coredns--668d6bf9bc--7rtzv-eth0" Dec 12 17:46:41.822348 containerd[1500]: 2025-12-12 17:46:41.766 [INFO][4815] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394" HandleID="k8s-pod-network.c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394" Workload="localhost-k8s-coredns--668d6bf9bc--7rtzv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-7rtzv", "timestamp":"2025-12-12 17:46:41.765958677 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:46:41.822348 containerd[1500]: 2025-12-12 17:46:41.766 [INFO][4815] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:46:41.822348 containerd[1500]: 2025-12-12 17:46:41.766 [INFO][4815] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:46:41.822348 containerd[1500]: 2025-12-12 17:46:41.766 [INFO][4815] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:46:41.822348 containerd[1500]: 2025-12-12 17:46:41.775 [INFO][4815] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394" host="localhost" Dec 12 17:46:41.822348 containerd[1500]: 2025-12-12 17:46:41.782 [INFO][4815] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:46:41.822348 containerd[1500]: 2025-12-12 17:46:41.786 [INFO][4815] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:46:41.822348 containerd[1500]: 2025-12-12 17:46:41.788 [INFO][4815] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:46:41.822348 containerd[1500]: 2025-12-12 17:46:41.790 [INFO][4815] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:46:41.822348 containerd[1500]: 2025-12-12 17:46:41.790 [INFO][4815] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394" host="localhost" Dec 12 17:46:41.822348 containerd[1500]: 2025-12-12 17:46:41.791 [INFO][4815] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394 Dec 12 17:46:41.822348 containerd[1500]: 2025-12-12 17:46:41.795 [INFO][4815] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394" host="localhost" Dec 12 17:46:41.822348 containerd[1500]: 2025-12-12 17:46:41.801 [INFO][4815] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394" host="localhost" Dec 12 17:46:41.822348 containerd[1500]: 2025-12-12 17:46:41.801 [INFO][4815] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394" host="localhost" Dec 12 17:46:41.822348 containerd[1500]: 2025-12-12 17:46:41.801 [INFO][4815] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:46:41.822348 containerd[1500]: 2025-12-12 17:46:41.801 [INFO][4815] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394" HandleID="k8s-pod-network.c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394" Workload="localhost-k8s-coredns--668d6bf9bc--7rtzv-eth0" Dec 12 17:46:41.823227 containerd[1500]: 2025-12-12 17:46:41.803 [INFO][4802] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394" Namespace="kube-system" Pod="coredns-668d6bf9bc-7rtzv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7rtzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7rtzv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3b481ea6-5954-4cd3-8bf4-dd4b768ccfaa", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 46, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-7rtzv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5eb0ab51e96", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:46:41.823227 containerd[1500]: 2025-12-12 17:46:41.803 [INFO][4802] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394" Namespace="kube-system" Pod="coredns-668d6bf9bc-7rtzv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7rtzv-eth0" Dec 12 17:46:41.823227 containerd[1500]: 2025-12-12 17:46:41.803 [INFO][4802] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5eb0ab51e96 
ContainerID="c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394" Namespace="kube-system" Pod="coredns-668d6bf9bc-7rtzv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7rtzv-eth0" Dec 12 17:46:41.823227 containerd[1500]: 2025-12-12 17:46:41.806 [INFO][4802] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394" Namespace="kube-system" Pod="coredns-668d6bf9bc-7rtzv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7rtzv-eth0" Dec 12 17:46:41.823227 containerd[1500]: 2025-12-12 17:46:41.806 [INFO][4802] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394" Namespace="kube-system" Pod="coredns-668d6bf9bc-7rtzv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7rtzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--7rtzv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3b481ea6-5954-4cd3-8bf4-dd4b768ccfaa", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 46, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394", Pod:"coredns-668d6bf9bc-7rtzv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5eb0ab51e96", MAC:"1a:20:30:2c:da:36", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:46:41.823227 containerd[1500]: 2025-12-12 17:46:41.818 [INFO][4802] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394" Namespace="kube-system" Pod="coredns-668d6bf9bc-7rtzv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--7rtzv-eth0" Dec 12 17:46:41.858185 kubelet[2643]: E1212 17:46:41.858109 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bc564446d-p8ff5" podUID="62a80cbc-f140-42e5-895b-a84f9cbc2fab" Dec 12 17:46:41.877165 kubelet[2643]: E1212 17:46:41.877126 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7547c5c-sb784" podUID="34789b1f-3cfd-4180-8a83-14eb6215f63c" Dec 12 17:46:41.878046 kubelet[2643]: E1212 17:46:41.878013 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7547c5c-fcbnc" podUID="904021b1-af08-4d93-b11e-a0cb970db885" Dec 12 17:46:41.893855 kubelet[2643]: I1212 17:46:41.893788 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k5kkz" podStartSLOduration=39.893763176 podStartE2EDuration="39.893763176s" podCreationTimestamp="2025-12-12 17:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:46:41.891689864 +0000 UTC m=+47.263712710" watchObservedRunningTime="2025-12-12 17:46:41.893763176 +0000 UTC m=+47.265785982" Dec 12 17:46:41.904992 containerd[1500]: time="2025-12-12T17:46:41.904938776Z" level=info msg="connecting to shim c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394" address="unix:///run/containerd/s/c127f041824cb9ebfcf724b01812064f6a64b841e22260b65afff41e3937a687" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:46:41.946397 systemd[1]: Started cri-containerd-c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394.scope - libcontainer container c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394. 
Dec 12 17:46:41.962469 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:46:41.983669 containerd[1500]: time="2025-12-12T17:46:41.983576933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7rtzv,Uid:3b481ea6-5954-4cd3-8bf4-dd4b768ccfaa,Namespace:kube-system,Attempt:0,} returns sandbox id \"c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394\"" Dec 12 17:46:41.987283 containerd[1500]: time="2025-12-12T17:46:41.987244040Z" level=info msg="CreateContainer within sandbox \"c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:46:41.999062 containerd[1500]: time="2025-12-12T17:46:41.999007437Z" level=info msg="Container 2191334f92c394adf5873d07d54b68a94ca799976b2158ec437545431fa99e40: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:46:42.005659 containerd[1500]: time="2025-12-12T17:46:42.005603214Z" level=info msg="CreateContainer within sandbox \"c466175ed288b3a60eb6f822bb2fc41677812866b07c245174d9af93961d4394\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2191334f92c394adf5873d07d54b68a94ca799976b2158ec437545431fa99e40\"" Dec 12 17:46:42.006151 containerd[1500]: time="2025-12-12T17:46:42.006127092Z" level=info msg="StartContainer for \"2191334f92c394adf5873d07d54b68a94ca799976b2158ec437545431fa99e40\"" Dec 12 17:46:42.007160 containerd[1500]: time="2025-12-12T17:46:42.007132088Z" level=info msg="connecting to shim 2191334f92c394adf5873d07d54b68a94ca799976b2158ec437545431fa99e40" address="unix:///run/containerd/s/c127f041824cb9ebfcf724b01812064f6a64b841e22260b65afff41e3937a687" protocol=ttrpc version=3 Dec 12 17:46:42.030945 systemd[1]: Started cri-containerd-2191334f92c394adf5873d07d54b68a94ca799976b2158ec437545431fa99e40.scope - libcontainer container 2191334f92c394adf5873d07d54b68a94ca799976b2158ec437545431fa99e40. 
Dec 12 17:46:42.076112 containerd[1500]: time="2025-12-12T17:46:42.076050565Z" level=info msg="StartContainer for \"2191334f92c394adf5873d07d54b68a94ca799976b2158ec437545431fa99e40\" returns successfully" Dec 12 17:46:42.370881 systemd-networkd[1415]: calida5dd5b6c05: Gained IPv6LL Dec 12 17:46:42.754952 systemd-networkd[1415]: cali29c498bdaed: Gained IPv6LL Dec 12 17:46:42.880700 kubelet[2643]: E1212 17:46:42.880648 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7547c5c-fcbnc" podUID="904021b1-af08-4d93-b11e-a0cb970db885" Dec 12 17:46:42.881911 kubelet[2643]: E1212 17:46:42.881673 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bc564446d-p8ff5" podUID="62a80cbc-f140-42e5-895b-a84f9cbc2fab" Dec 12 17:46:42.922164 kubelet[2643]: I1212 17:46:42.922090 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7rtzv" podStartSLOduration=40.922075654 podStartE2EDuration="40.922075654s" podCreationTimestamp="2025-12-12 17:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:46:42.906917268 +0000 UTC m=+48.278940114" watchObservedRunningTime="2025-12-12 17:46:42.922075654 +0000 UTC m=+48.294098500" Dec 12 17:46:42.946873 systemd-networkd[1415]: califd51aec213f: Gained IPv6LL Dec 12 17:46:43.650918 systemd-networkd[1415]: cali5eb0ab51e96: Gained IPv6LL Dec 12 17:46:43.706702 containerd[1500]: time="2025-12-12T17:46:43.706654806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bpcft,Uid:d7faeb01-da6d-4b13-a976-32d40fd38bd7,Namespace:calico-system,Attempt:0,}" Dec 12 17:46:43.829313 systemd-networkd[1415]: califb023635928: Link UP Dec 12 17:46:43.829504 systemd-networkd[1415]: califb023635928: Gained carrier Dec 12 17:46:43.845079 containerd[1500]: 2025-12-12 17:46:43.753 [INFO][4922] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bpcft-eth0 csi-node-driver- calico-system d7faeb01-da6d-4b13-a976-32d40fd38bd7 698 0 2025-12-12 17:46:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-bpcft eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califb023635928 [] [] }} 
ContainerID="f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e" Namespace="calico-system" Pod="csi-node-driver-bpcft" WorkloadEndpoint="localhost-k8s-csi--node--driver--bpcft-" Dec 12 17:46:43.845079 containerd[1500]: 2025-12-12 17:46:43.753 [INFO][4922] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e" Namespace="calico-system" Pod="csi-node-driver-bpcft" WorkloadEndpoint="localhost-k8s-csi--node--driver--bpcft-eth0" Dec 12 17:46:43.845079 containerd[1500]: 2025-12-12 17:46:43.786 [INFO][4935] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e" HandleID="k8s-pod-network.f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e" Workload="localhost-k8s-csi--node--driver--bpcft-eth0" Dec 12 17:46:43.845079 containerd[1500]: 2025-12-12 17:46:43.786 [INFO][4935] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e" HandleID="k8s-pod-network.f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e" Workload="localhost-k8s-csi--node--driver--bpcft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000119630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bpcft", "timestamp":"2025-12-12 17:46:43.78612765 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:46:43.845079 containerd[1500]: 2025-12-12 17:46:43.786 [INFO][4935] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:46:43.845079 containerd[1500]: 2025-12-12 17:46:43.786 [INFO][4935] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:46:43.845079 containerd[1500]: 2025-12-12 17:46:43.786 [INFO][4935] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:46:43.845079 containerd[1500]: 2025-12-12 17:46:43.797 [INFO][4935] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e" host="localhost" Dec 12 17:46:43.845079 containerd[1500]: 2025-12-12 17:46:43.803 [INFO][4935] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:46:43.845079 containerd[1500]: 2025-12-12 17:46:43.807 [INFO][4935] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:46:43.845079 containerd[1500]: 2025-12-12 17:46:43.809 [INFO][4935] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:46:43.845079 containerd[1500]: 2025-12-12 17:46:43.812 [INFO][4935] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:46:43.845079 containerd[1500]: 2025-12-12 17:46:43.812 [INFO][4935] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e" host="localhost" Dec 12 17:46:43.845079 containerd[1500]: 2025-12-12 17:46:43.815 [INFO][4935] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e Dec 12 17:46:43.845079 containerd[1500]: 2025-12-12 17:46:43.819 [INFO][4935] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e" host="localhost" Dec 12 17:46:43.845079 containerd[1500]: 2025-12-12 17:46:43.825 [INFO][4935] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e" host="localhost" Dec 12 17:46:43.845079 containerd[1500]: 2025-12-12 17:46:43.825 [INFO][4935] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e" host="localhost" Dec 12 17:46:43.845079 containerd[1500]: 2025-12-12 17:46:43.825 [INFO][4935] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:46:43.845079 containerd[1500]: 2025-12-12 17:46:43.825 [INFO][4935] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e" HandleID="k8s-pod-network.f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e" Workload="localhost-k8s-csi--node--driver--bpcft-eth0" Dec 12 17:46:43.845575 containerd[1500]: 2025-12-12 17:46:43.827 [INFO][4922] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e" Namespace="calico-system" Pod="csi-node-driver-bpcft" WorkloadEndpoint="localhost-k8s-csi--node--driver--bpcft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bpcft-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7faeb01-da6d-4b13-a976-32d40fd38bd7", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 46, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bpcft", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califb023635928", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:46:43.845575 containerd[1500]: 2025-12-12 17:46:43.827 [INFO][4922] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e" Namespace="calico-system" Pod="csi-node-driver-bpcft" WorkloadEndpoint="localhost-k8s-csi--node--driver--bpcft-eth0" Dec 12 17:46:43.845575 containerd[1500]: 2025-12-12 17:46:43.827 [INFO][4922] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb023635928 ContainerID="f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e" Namespace="calico-system" Pod="csi-node-driver-bpcft" WorkloadEndpoint="localhost-k8s-csi--node--driver--bpcft-eth0" Dec 12 17:46:43.845575 containerd[1500]: 2025-12-12 17:46:43.830 [INFO][4922] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e" Namespace="calico-system" Pod="csi-node-driver-bpcft" WorkloadEndpoint="localhost-k8s-csi--node--driver--bpcft-eth0" Dec 12 17:46:43.845575 containerd[1500]: 2025-12-12 17:46:43.831 [INFO][4922] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e" Namespace="calico-system" Pod="csi-node-driver-bpcft" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--bpcft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bpcft-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d7faeb01-da6d-4b13-a976-32d40fd38bd7", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 46, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e", Pod:"csi-node-driver-bpcft", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califb023635928", MAC:"0e:f7:99:12:92:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:46:43.845575 containerd[1500]: 2025-12-12 17:46:43.841 [INFO][4922] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e" Namespace="calico-system" Pod="csi-node-driver-bpcft" WorkloadEndpoint="localhost-k8s-csi--node--driver--bpcft-eth0" Dec 12 17:46:43.863439 containerd[1500]: time="2025-12-12T17:46:43.863322743Z" level=info msg="connecting to shim f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e" address="unix:///run/containerd/s/b416b7cee3007488668032e773397bb489f14dea413bc115574c33e8bddf1a35" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:46:43.881916 systemd[1]: Started cri-containerd-f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e.scope - libcontainer container f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e. 
Dec 12 17:46:43.904023 systemd-resolved[1418]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:46:43.923982 containerd[1500]: time="2025-12-12T17:46:43.923941932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bpcft,Uid:d7faeb01-da6d-4b13-a976-32d40fd38bd7,Namespace:calico-system,Attempt:0,} returns sandbox id \"f8b5e024ee9e83ea2ec450839093d26a9204183070ada9652072c228783d007e\"" Dec 12 17:46:43.925525 containerd[1500]: time="2025-12-12T17:46:43.925497127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:46:44.127261 containerd[1500]: time="2025-12-12T17:46:44.127218514Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:46:44.128403 containerd[1500]: time="2025-12-12T17:46:44.128362150Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:46:44.128729 containerd[1500]: time="2025-12-12T17:46:44.128418870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 17:46:44.128874 kubelet[2643]: E1212 17:46:44.128840 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:46:44.129385 kubelet[2643]: E1212 17:46:44.129171 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:46:44.129385 kubelet[2643]: E1212 17:46:44.129322 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qlrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bpcft_calico-system(d7faeb01-da6d-4b13-a976-32d40fd38bd7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:46:44.131334 containerd[1500]: time="2025-12-12T17:46:44.131276340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:46:44.332418 containerd[1500]: time="2025-12-12T17:46:44.332358935Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:46:44.333345 containerd[1500]: time="2025-12-12T17:46:44.333270452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:46:44.333494 containerd[1500]: time="2025-12-12T17:46:44.333356571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 17:46:44.333568 kubelet[2643]: E1212 17:46:44.333515 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:46:44.333617 kubelet[2643]: E1212 17:46:44.333578 2643 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:46:44.333732 kubelet[2643]: E1212 17:46:44.333691 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qlrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bpcft_calico-system(d7faeb01-da6d-4b13-a976-32d40fd38bd7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:46:44.335025 kubelet[2643]: E1212 17:46:44.334966 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-bpcft" podUID="d7faeb01-da6d-4b13-a976-32d40fd38bd7" Dec 12 17:46:44.890471 kubelet[2643]: E1212 17:46:44.887161 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bpcft" podUID="d7faeb01-da6d-4b13-a976-32d40fd38bd7" Dec 12 17:46:45.762988 systemd-networkd[1415]: califb023635928: Gained IPv6LL Dec 12 17:46:45.888592 kubelet[2643]: E1212 17:46:45.888512 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bpcft" podUID="d7faeb01-da6d-4b13-a976-32d40fd38bd7" Dec 12 17:46:46.264186 systemd[1]: Started sshd@9-10.0.0.124:22-10.0.0.1:39296.service - OpenSSH per-connection server daemon (10.0.0.1:39296). Dec 12 17:46:46.329835 sshd[5010]: Accepted publickey for core from 10.0.0.1 port 39296 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:46:46.331236 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:46:46.337136 systemd-logind[1483]: New session 10 of user core. Dec 12 17:46:46.343210 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 17:46:46.511957 sshd[5013]: Connection closed by 10.0.0.1 port 39296 Dec 12 17:46:46.512503 sshd-session[5010]: pam_unix(sshd:session): session closed for user core Dec 12 17:46:46.522140 systemd[1]: sshd@9-10.0.0.124:22-10.0.0.1:39296.service: Deactivated successfully. Dec 12 17:46:46.524343 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 17:46:46.525836 systemd-logind[1483]: Session 10 logged out. Waiting for processes to exit. Dec 12 17:46:46.527287 systemd-logind[1483]: Removed session 10. 
Dec 12 17:46:46.528739 systemd[1]: Started sshd@10-10.0.0.124:22-10.0.0.1:39306.service - OpenSSH per-connection server daemon (10.0.0.1:39306). Dec 12 17:46:46.586620 sshd[5032]: Accepted publickey for core from 10.0.0.1 port 39306 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:46:46.588416 sshd-session[5032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:46:46.592893 systemd-logind[1483]: New session 11 of user core. Dec 12 17:46:46.603970 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 17:46:46.708348 containerd[1500]: time="2025-12-12T17:46:46.707850888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:46:46.776550 sshd[5035]: Connection closed by 10.0.0.1 port 39306 Dec 12 17:46:46.777828 sshd-session[5032]: pam_unix(sshd:session): session closed for user core Dec 12 17:46:46.792501 systemd[1]: sshd@10-10.0.0.124:22-10.0.0.1:39306.service: Deactivated successfully. Dec 12 17:46:46.796044 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 17:46:46.800826 systemd-logind[1483]: Session 11 logged out. Waiting for processes to exit. Dec 12 17:46:46.801983 systemd[1]: Started sshd@11-10.0.0.124:22-10.0.0.1:39318.service - OpenSSH per-connection server daemon (10.0.0.1:39318). Dec 12 17:46:46.804063 systemd-logind[1483]: Removed session 11. Dec 12 17:46:46.860341 sshd[5048]: Accepted publickey for core from 10.0.0.1 port 39318 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:46:46.861526 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:46:46.866874 systemd-logind[1483]: New session 12 of user core. Dec 12 17:46:46.879394 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 12 17:46:46.925733 containerd[1500]: time="2025-12-12T17:46:46.925692209Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:46:46.932128 containerd[1500]: time="2025-12-12T17:46:46.932080588Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:46:46.932228 containerd[1500]: time="2025-12-12T17:46:46.932161908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 17:46:46.932313 kubelet[2643]: E1212 17:46:46.932275 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:46:46.932570 kubelet[2643]: E1212 17:46:46.932321 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:46:46.932570 kubelet[2643]: E1212 17:46:46.932430 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9b543fe8bed04a25a90bb20d8dd34946,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xhvd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79876b7468-t2596_calico-system(a0e276bf-1b01-48a0-beed-8d097ca9c5c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:46:46.936929 containerd[1500]: 
time="2025-12-12T17:46:46.936883852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:46:47.033370 sshd[5051]: Connection closed by 10.0.0.1 port 39318 Dec 12 17:46:47.033874 sshd-session[5048]: pam_unix(sshd:session): session closed for user core Dec 12 17:46:47.037258 systemd[1]: sshd@11-10.0.0.124:22-10.0.0.1:39318.service: Deactivated successfully. Dec 12 17:46:47.038953 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 17:46:47.039648 systemd-logind[1483]: Session 12 logged out. Waiting for processes to exit. Dec 12 17:46:47.040654 systemd-logind[1483]: Removed session 12. Dec 12 17:46:47.158555 containerd[1500]: time="2025-12-12T17:46:47.158510649Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:46:47.159479 containerd[1500]: time="2025-12-12T17:46:47.159442365Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:46:47.159549 containerd[1500]: time="2025-12-12T17:46:47.159482445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 17:46:47.159694 kubelet[2643]: E1212 17:46:47.159658 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:46:47.159784 kubelet[2643]: E1212 17:46:47.159708 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:46:47.159881 kubelet[2643]: E1212 17:46:47.159841 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xhvd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79876b7468-t2596_calico-system(a0e276bf-1b01-48a0-beed-8d097ca9c5c7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:46:47.161027 kubelet[2643]: E1212 17:46:47.160958 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79876b7468-t2596" podUID="a0e276bf-1b01-48a0-beed-8d097ca9c5c7" Dec 12 17:46:52.049882 systemd[1]: Started sshd@12-10.0.0.124:22-10.0.0.1:40794.service - OpenSSH per-connection server daemon (10.0.0.1:40794). Dec 12 17:46:52.107481 sshd[5072]: Accepted publickey for core from 10.0.0.1 port 40794 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:46:52.108871 sshd-session[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:46:52.113626 systemd-logind[1483]: New session 13 of user core. 
Dec 12 17:46:52.119921 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 17:46:52.260324 sshd[5075]: Connection closed by 10.0.0.1 port 40794 Dec 12 17:46:52.261144 sshd-session[5072]: pam_unix(sshd:session): session closed for user core Dec 12 17:46:52.279077 systemd[1]: sshd@12-10.0.0.124:22-10.0.0.1:40794.service: Deactivated successfully. Dec 12 17:46:52.281272 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 17:46:52.282151 systemd-logind[1483]: Session 13 logged out. Waiting for processes to exit. Dec 12 17:46:52.284591 systemd[1]: Started sshd@13-10.0.0.124:22-10.0.0.1:40804.service - OpenSSH per-connection server daemon (10.0.0.1:40804). Dec 12 17:46:52.285381 systemd-logind[1483]: Removed session 13. Dec 12 17:46:52.337991 sshd[5088]: Accepted publickey for core from 10.0.0.1 port 40804 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:46:52.339178 sshd-session[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:46:52.343788 systemd-logind[1483]: New session 14 of user core. Dec 12 17:46:52.354921 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 12 17:46:52.576681 sshd[5091]: Connection closed by 10.0.0.1 port 40804 Dec 12 17:46:52.577030 sshd-session[5088]: pam_unix(sshd:session): session closed for user core Dec 12 17:46:52.589337 systemd[1]: sshd@13-10.0.0.124:22-10.0.0.1:40804.service: Deactivated successfully. Dec 12 17:46:52.592212 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 17:46:52.593598 systemd-logind[1483]: Session 14 logged out. Waiting for processes to exit. Dec 12 17:46:52.597718 systemd-logind[1483]: Removed session 14. Dec 12 17:46:52.600003 systemd[1]: Started sshd@14-10.0.0.124:22-10.0.0.1:40818.service - OpenSSH per-connection server daemon (10.0.0.1:40818). Dec 12 17:46:52.658572 sshd[5103]: Accepted publickey for core from 10.0.0.1 port 40818 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:46:52.659793 sshd-session[5103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:46:52.664047 systemd-logind[1483]: New session 15 of user core. Dec 12 17:46:52.681966 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 12 17:46:53.307482 sshd[5106]: Connection closed by 10.0.0.1 port 40818 Dec 12 17:46:53.308063 sshd-session[5103]: pam_unix(sshd:session): session closed for user core Dec 12 17:46:53.321867 systemd[1]: sshd@14-10.0.0.124:22-10.0.0.1:40818.service: Deactivated successfully. Dec 12 17:46:53.326192 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 17:46:53.327959 systemd-logind[1483]: Session 15 logged out. Waiting for processes to exit. Dec 12 17:46:53.332622 systemd[1]: Started sshd@15-10.0.0.124:22-10.0.0.1:40822.service - OpenSSH per-connection server daemon (10.0.0.1:40822). Dec 12 17:46:53.333905 systemd-logind[1483]: Removed session 15. Dec 12 17:46:53.382565 sshd[5127]: Accepted publickey for core from 10.0.0.1 port 40822 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:46:53.383740 sshd-session[5127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:46:53.387638 systemd-logind[1483]: New session 16 of user core. Dec 12 17:46:53.396900 systemd[1]: Started session-16.scope - Session 16 of User core. 
Dec 12 17:46:53.716934 sshd[5131]: Connection closed by 10.0.0.1 port 40822 Dec 12 17:46:53.717921 sshd-session[5127]: pam_unix(sshd:session): session closed for user core Dec 12 17:46:53.728620 systemd[1]: sshd@15-10.0.0.124:22-10.0.0.1:40822.service: Deactivated successfully. Dec 12 17:46:53.730163 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 17:46:53.732807 systemd-logind[1483]: Session 16 logged out. Waiting for processes to exit. Dec 12 17:46:53.735487 systemd[1]: Started sshd@16-10.0.0.124:22-10.0.0.1:40832.service - OpenSSH per-connection server daemon (10.0.0.1:40832). Dec 12 17:46:53.740206 systemd-logind[1483]: Removed session 16. Dec 12 17:46:53.759134 containerd[1500]: time="2025-12-12T17:46:53.757976620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:46:53.810958 sshd[5142]: Accepted publickey for core from 10.0.0.1 port 40832 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:46:53.812233 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:46:53.816879 systemd-logind[1483]: New session 17 of user core. Dec 12 17:46:53.831912 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 12 17:46:53.964732 sshd[5145]: Connection closed by 10.0.0.1 port 40832 Dec 12 17:46:53.965063 sshd-session[5142]: pam_unix(sshd:session): session closed for user core Dec 12 17:46:53.968858 systemd[1]: sshd@16-10.0.0.124:22-10.0.0.1:40832.service: Deactivated successfully. Dec 12 17:46:53.972225 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 17:46:53.973320 systemd-logind[1483]: Session 17 logged out. Waiting for processes to exit. Dec 12 17:46:53.974457 systemd-logind[1483]: Removed session 17. Dec 12 17:46:54.006384 containerd[1500]: time="2025-12-12T17:46:54.006333552Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:46:54.007353 containerd[1500]: time="2025-12-12T17:46:54.007302869Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:46:54.007410 containerd[1500]: time="2025-12-12T17:46:54.007388789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 17:46:54.007598 kubelet[2643]: E1212 17:46:54.007535 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:46:54.007598 kubelet[2643]: E1212 17:46:54.007593 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:46:54.007934 containerd[1500]: time="2025-12-12T17:46:54.007913947Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:46:54.007965 kubelet[2643]: E1212 17:46:54.007876 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47qrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6bc564446d-p8ff5_calico-system(62a80cbc-f140-42e5-895b-a84f9cbc2fab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:46:54.009435 kubelet[2643]: E1212 17:46:54.009401 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-6bc564446d-p8ff5" podUID="62a80cbc-f140-42e5-895b-a84f9cbc2fab" Dec 12 17:46:54.208147 containerd[1500]: time="2025-12-12T17:46:54.208076711Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:46:54.209033 containerd[1500]: time="2025-12-12T17:46:54.208981468Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:46:54.209111 containerd[1500]: time="2025-12-12T17:46:54.209036948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:46:54.209259 kubelet[2643]: E1212 17:46:54.209200 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:46:54.209259 kubelet[2643]: E1212 17:46:54.209248 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:46:54.209438 kubelet[2643]: E1212 17:46:54.209373 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vdf99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67b7547c5c-sb784_calico-apiserver(34789b1f-3cfd-4180-8a83-14eb6215f63c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:46:54.210543 kubelet[2643]: E1212 17:46:54.210491 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7547c5c-sb784" podUID="34789b1f-3cfd-4180-8a83-14eb6215f63c" Dec 12 17:46:54.711361 containerd[1500]: time="2025-12-12T17:46:54.711240851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:46:54.924901 containerd[1500]: time="2025-12-12T17:46:54.924834815Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:46:54.925738 containerd[1500]: time="2025-12-12T17:46:54.925703332Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:46:54.925806 containerd[1500]: time="2025-12-12T17:46:54.925768692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 17:46:54.925970 kubelet[2643]: E1212 17:46:54.925935 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:46:54.926122 kubelet[2643]: E1212 17:46:54.926061 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:46:54.926325 kubelet[2643]: E1212 17:46:54.926276 2643 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrqfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-g9s5b_calico-system(371f3370-3973-43fa-a344-e5a8edf40083): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:46:54.927630 kubelet[2643]: E1212 17:46:54.927601 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g9s5b" 
podUID="371f3370-3973-43fa-a344-e5a8edf40083" Dec 12 17:46:56.707190 containerd[1500]: time="2025-12-12T17:46:56.707152694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:46:56.933863 containerd[1500]: time="2025-12-12T17:46:56.933806992Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:46:56.934737 containerd[1500]: time="2025-12-12T17:46:56.934690749Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:46:56.934809 containerd[1500]: time="2025-12-12T17:46:56.934733349Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:46:56.934963 kubelet[2643]: E1212 17:46:56.934899 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:46:56.935313 kubelet[2643]: E1212 17:46:56.934962 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:46:56.935313 kubelet[2643]: E1212 17:46:56.935085 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ckqtn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-67b7547c5c-fcbnc_calico-apiserver(904021b1-af08-4d93-b11e-a0cb970db885): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:46:56.936300 kubelet[2643]: E1212 17:46:56.936269 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7547c5c-fcbnc" podUID="904021b1-af08-4d93-b11e-a0cb970db885" Dec 12 17:46:57.708387 kubelet[2643]: E1212 17:46:57.708323 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79876b7468-t2596" podUID="a0e276bf-1b01-48a0-beed-8d097ca9c5c7" Dec 12 17:46:58.979303 systemd[1]: Started sshd@17-10.0.0.124:22-10.0.0.1:40842.service - OpenSSH per-connection server daemon (10.0.0.1:40842). Dec 12 17:46:59.041907 sshd[5164]: Accepted publickey for core from 10.0.0.1 port 40842 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:46:59.043199 sshd-session[5164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:46:59.047366 systemd-logind[1483]: New session 18 of user core. Dec 12 17:46:59.059947 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 12 17:46:59.173848 sshd[5167]: Connection closed by 10.0.0.1 port 40842 Dec 12 17:46:59.174174 sshd-session[5164]: pam_unix(sshd:session): session closed for user core Dec 12 17:46:59.177739 systemd[1]: sshd@17-10.0.0.124:22-10.0.0.1:40842.service: Deactivated successfully. Dec 12 17:46:59.179450 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 17:46:59.181423 systemd-logind[1483]: Session 18 logged out. Waiting for processes to exit. Dec 12 17:46:59.182560 systemd-logind[1483]: Removed session 18. Dec 12 17:47:00.708925 containerd[1500]: time="2025-12-12T17:47:00.708887124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:47:00.982682 containerd[1500]: time="2025-12-12T17:47:00.982523814Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:47:00.983563 containerd[1500]: time="2025-12-12T17:47:00.983506654Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:47:00.983637 containerd[1500]: time="2025-12-12T17:47:00.983586694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 17:47:00.983840 kubelet[2643]: E1212 17:47:00.983728 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:47:00.983840 kubelet[2643]: E1212 17:47:00.983834 2643 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:47:00.984224 kubelet[2643]: E1212 17:47:00.983950 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qlrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bpcft_calico-system(d7faeb01-da6d-4b13-a976-32d40fd38bd7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:47:00.986015 containerd[1500]: time="2025-12-12T17:47:00.985990654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:47:01.200697 containerd[1500]: time="2025-12-12T17:47:01.200641697Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:47:01.201720 containerd[1500]: time="2025-12-12T17:47:01.201685919Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:47:01.201788 containerd[1500]: time="2025-12-12T17:47:01.201717559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 17:47:01.201925 kubelet[2643]: E1212 17:47:01.201890 2643 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:47:01.201978 kubelet[2643]: E1212 17:47:01.201937 2643 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:47:01.202100 kubelet[2643]: E1212 17:47:01.202058 2643 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6qlrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bpcft_calico-system(d7faeb01-da6d-4b13-a976-32d40fd38bd7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:47:01.203331 kubelet[2643]: E1212 17:47:01.203261 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-bpcft" podUID="d7faeb01-da6d-4b13-a976-32d40fd38bd7" Dec 12 17:47:04.187146 systemd[1]: Started sshd@18-10.0.0.124:22-10.0.0.1:34816.service - OpenSSH per-connection server daemon (10.0.0.1:34816). Dec 12 17:47:04.247848 sshd[5216]: Accepted publickey for core from 10.0.0.1 port 34816 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:47:04.249087 sshd-session[5216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:47:04.252615 systemd-logind[1483]: New session 19 of user core. Dec 12 17:47:04.267924 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 12 17:47:04.407058 sshd[5219]: Connection closed by 10.0.0.1 port 34816 Dec 12 17:47:04.407579 sshd-session[5216]: pam_unix(sshd:session): session closed for user core Dec 12 17:47:04.410849 systemd[1]: sshd@18-10.0.0.124:22-10.0.0.1:34816.service: Deactivated successfully. Dec 12 17:47:04.412491 systemd[1]: session-19.scope: Deactivated successfully. Dec 12 17:47:04.414258 systemd-logind[1483]: Session 19 logged out. Waiting for processes to exit. Dec 12 17:47:04.415364 systemd-logind[1483]: Removed session 19. Dec 12 17:47:04.708522 kubelet[2643]: E1212 17:47:04.708483 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6bc564446d-p8ff5" podUID="62a80cbc-f140-42e5-895b-a84f9cbc2fab" Dec 12 17:47:05.707677 kubelet[2643]: E1212 17:47:05.707263 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67b7547c5c-sb784" podUID="34789b1f-3cfd-4180-8a83-14eb6215f63c" Dec 12 17:47:07.706781 kubelet[2643]: E1212 17:47:07.706719 2643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-g9s5b" podUID="371f3370-3973-43fa-a344-e5a8edf40083" Dec 12 17:47:09.420139 systemd[1]: Started sshd@19-10.0.0.124:22-10.0.0.1:34830.service - OpenSSH per-connection server daemon (10.0.0.1:34830). 
Dec 12 17:47:09.486109 sshd[5235]: Accepted publickey for core from 10.0.0.1 port 34830 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:47:09.488089 sshd-session[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:47:09.493066 systemd-logind[1483]: New session 20 of user core. Dec 12 17:47:09.501915 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 17:47:09.661829 sshd[5238]: Connection closed by 10.0.0.1 port 34830 Dec 12 17:47:09.662370 sshd-session[5235]: pam_unix(sshd:session): session closed for user core Dec 12 17:47:09.667391 systemd[1]: sshd@19-10.0.0.124:22-10.0.0.1:34830.service: Deactivated successfully. Dec 12 17:47:09.670212 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 17:47:09.671800 systemd-logind[1483]: Session 20 logged out. Waiting for processes to exit. Dec 12 17:47:09.673456 systemd-logind[1483]: Removed session 20.