Dec 12 17:37:37.738600 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 12 17:37:37.738621 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Dec 12 15:20:48 -00 2025
Dec 12 17:37:37.738631 kernel: KASLR enabled
Dec 12 17:37:37.738637 kernel: efi: EFI v2.7 by EDK II
Dec 12 17:37:37.738642 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Dec 12 17:37:37.738648 kernel: random: crng init done
Dec 12 17:37:37.738654 kernel: secureboot: Secure boot disabled
Dec 12 17:37:37.738660 kernel: ACPI: Early table checksum verification disabled
Dec 12 17:37:37.738666 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Dec 12 17:37:37.738674 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 12 17:37:37.738680 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:37:37.738685 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:37:37.738691 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:37:37.738697 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:37:37.738704 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:37:37.738711 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:37:37.738718 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:37:37.738724 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:37:37.738730 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 12 17:37:37.738736 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 12 17:37:37.738742 kernel: ACPI: Use ACPI SPCR as default console: Yes
Dec 12 17:37:37.738749 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:37:37.738755 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Dec 12 17:37:37.738761 kernel: Zone ranges:
Dec 12 17:37:37.738767 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:37:37.738774 kernel: DMA32 empty
Dec 12 17:37:37.738780 kernel: Normal empty
Dec 12 17:37:37.738786 kernel: Device empty
Dec 12 17:37:37.738792 kernel: Movable zone start for each node
Dec 12 17:37:37.738798 kernel: Early memory node ranges
Dec 12 17:37:37.738805 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Dec 12 17:37:37.738811 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Dec 12 17:37:37.738817 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Dec 12 17:37:37.738823 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Dec 12 17:37:37.738829 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Dec 12 17:37:37.738835 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Dec 12 17:37:37.738841 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Dec 12 17:37:37.738848 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Dec 12 17:37:37.738854 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Dec 12 17:37:37.738875 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Dec 12 17:37:37.738883 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Dec 12 17:37:37.738890 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Dec 12 17:37:37.738897 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Dec 12 17:37:37.738905 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 12 17:37:37.738911 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 12 17:37:37.738918 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Dec 12 17:37:37.738924 kernel: psci: probing for conduit method from ACPI.
Dec 12 17:37:37.738930 kernel: psci: PSCIv1.1 detected in firmware.
Dec 12 17:37:37.738937 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 12 17:37:37.738943 kernel: psci: Trusted OS migration not required
Dec 12 17:37:37.738949 kernel: psci: SMC Calling Convention v1.1
Dec 12 17:37:37.738956 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 12 17:37:37.738962 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Dec 12 17:37:37.738970 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Dec 12 17:37:37.738977 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 12 17:37:37.738983 kernel: Detected PIPT I-cache on CPU0
Dec 12 17:37:37.738990 kernel: CPU features: detected: GIC system register CPU interface
Dec 12 17:37:37.738996 kernel: CPU features: detected: Spectre-v4
Dec 12 17:37:37.739003 kernel: CPU features: detected: Spectre-BHB
Dec 12 17:37:37.739009 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 12 17:37:37.739016 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 12 17:37:37.739022 kernel: CPU features: detected: ARM erratum 1418040
Dec 12 17:37:37.739029 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 12 17:37:37.739035 kernel: alternatives: applying boot alternatives
Dec 12 17:37:37.739043 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:37:37.739051 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 12 17:37:37.739057 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 12 17:37:37.739063 kernel: Fallback order for Node 0: 0
Dec 12 17:37:37.739070 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Dec 12 17:37:37.739076 kernel: Policy zone: DMA
Dec 12 17:37:37.739083 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 12 17:37:37.739089 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Dec 12 17:37:37.739095 kernel: software IO TLB: area num 4.
Dec 12 17:37:37.739102 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Dec 12 17:37:37.739108 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Dec 12 17:37:37.739114 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 12 17:37:37.739122 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 12 17:37:37.739129 kernel: rcu: RCU event tracing is enabled.
Dec 12 17:37:37.739136 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 12 17:37:37.739142 kernel: Trampoline variant of Tasks RCU enabled.
Dec 12 17:37:37.739149 kernel: Tracing variant of Tasks RCU enabled.
Dec 12 17:37:37.739155 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 12 17:37:37.739162 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 12 17:37:37.739168 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:37:37.739174 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 12 17:37:37.739181 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 12 17:37:37.739187 kernel: GICv3: 256 SPIs implemented
Dec 12 17:37:37.739195 kernel: GICv3: 0 Extended SPIs implemented
Dec 12 17:37:37.739201 kernel: Root IRQ handler: gic_handle_irq
Dec 12 17:37:37.739207 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 12 17:37:37.739214 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Dec 12 17:37:37.739220 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 12 17:37:37.739226 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 12 17:37:37.739233 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Dec 12 17:37:37.739239 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Dec 12 17:37:37.739246 kernel: GICv3: using LPI property table @0x0000000040130000
Dec 12 17:37:37.739252 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Dec 12 17:37:37.739283 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 12 17:37:37.739290 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:37:37.739299 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 12 17:37:37.739306 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 12 17:37:37.739313 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 12 17:37:37.739319 kernel: arm-pv: using stolen time PV
Dec 12 17:37:37.739326 kernel: Console: colour dummy device 80x25
Dec 12 17:37:37.739333 kernel: ACPI: Core revision 20240827
Dec 12 17:37:37.739339 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 12 17:37:37.739346 kernel: pid_max: default: 32768 minimum: 301
Dec 12 17:37:37.739352 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Dec 12 17:37:37.739359 kernel: landlock: Up and running.
Dec 12 17:37:37.739367 kernel: SELinux: Initializing.
Dec 12 17:37:37.739373 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:37:37.739380 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 12 17:37:37.739386 kernel: rcu: Hierarchical SRCU implementation.
Dec 12 17:37:37.739393 kernel: rcu: Max phase no-delay instances is 400.
Dec 12 17:37:37.739400 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Dec 12 17:37:37.739413 kernel: Remapping and enabling EFI services.
Dec 12 17:37:37.739420 kernel: smp: Bringing up secondary CPUs ...
Dec 12 17:37:37.739427 kernel: Detected PIPT I-cache on CPU1
Dec 12 17:37:37.739440 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 12 17:37:37.739447 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Dec 12 17:37:37.739454 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:37:37.739462 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 12 17:37:37.739469 kernel: Detected PIPT I-cache on CPU2
Dec 12 17:37:37.739476 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 12 17:37:37.739483 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Dec 12 17:37:37.739490 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:37:37.739498 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 12 17:37:37.739504 kernel: Detected PIPT I-cache on CPU3
Dec 12 17:37:37.739511 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 12 17:37:37.739518 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Dec 12 17:37:37.739525 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 12 17:37:37.739532 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 12 17:37:37.739539 kernel: smp: Brought up 1 node, 4 CPUs
Dec 12 17:37:37.739545 kernel: SMP: Total of 4 processors activated.
Dec 12 17:37:37.739552 kernel: CPU: All CPU(s) started at EL1
Dec 12 17:37:37.739560 kernel: CPU features: detected: 32-bit EL0 Support
Dec 12 17:37:37.739567 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 12 17:37:37.739574 kernel: CPU features: detected: Common not Private translations
Dec 12 17:37:37.739581 kernel: CPU features: detected: CRC32 instructions
Dec 12 17:37:37.739587 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 12 17:37:37.739594 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 12 17:37:37.739601 kernel: CPU features: detected: LSE atomic instructions
Dec 12 17:37:37.739608 kernel: CPU features: detected: Privileged Access Never
Dec 12 17:37:37.739615 kernel: CPU features: detected: RAS Extension Support
Dec 12 17:37:37.739621 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 12 17:37:37.739629 kernel: alternatives: applying system-wide alternatives
Dec 12 17:37:37.739636 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Dec 12 17:37:37.739644 kernel: Memory: 2423776K/2572288K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 126176K reserved, 16384K cma-reserved)
Dec 12 17:37:37.739650 kernel: devtmpfs: initialized
Dec 12 17:37:37.739657 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 12 17:37:37.739664 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 12 17:37:37.739671 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 12 17:37:37.739678 kernel: 0 pages in range for non-PLT usage
Dec 12 17:37:37.739686 kernel: 508400 pages in range for PLT usage
Dec 12 17:37:37.739693 kernel: pinctrl core: initialized pinctrl subsystem
Dec 12 17:37:37.739699 kernel: SMBIOS 3.0.0 present.
Dec 12 17:37:37.739706 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Dec 12 17:37:37.739713 kernel: DMI: Memory slots populated: 1/1
Dec 12 17:37:37.739720 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 12 17:37:37.739727 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 12 17:37:37.739734 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 12 17:37:37.739741 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 12 17:37:37.739749 kernel: audit: initializing netlink subsys (disabled)
Dec 12 17:37:37.739756 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Dec 12 17:37:37.739763 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 12 17:37:37.739769 kernel: cpuidle: using governor menu
Dec 12 17:37:37.739776 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 12 17:37:37.739783 kernel: ASID allocator initialised with 32768 entries
Dec 12 17:37:37.739790 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 12 17:37:37.739797 kernel: Serial: AMBA PL011 UART driver
Dec 12 17:37:37.739804 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 12 17:37:37.739812 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 12 17:37:37.739819 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 12 17:37:37.739826 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 12 17:37:37.739833 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 12 17:37:37.739839 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 12 17:37:37.739846 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 12 17:37:37.739853 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 12 17:37:37.739860 kernel: ACPI: Added _OSI(Module Device)
Dec 12 17:37:37.739866 kernel: ACPI: Added _OSI(Processor Device)
Dec 12 17:37:37.739873 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 12 17:37:37.739882 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 12 17:37:37.739888 kernel: ACPI: Interpreter enabled
Dec 12 17:37:37.739895 kernel: ACPI: Using GIC for interrupt routing
Dec 12 17:37:37.739902 kernel: ACPI: MCFG table detected, 1 entries
Dec 12 17:37:37.739909 kernel: ACPI: CPU0 has been hot-added
Dec 12 17:37:37.739915 kernel: ACPI: CPU1 has been hot-added
Dec 12 17:37:37.739922 kernel: ACPI: CPU2 has been hot-added
Dec 12 17:37:37.739929 kernel: ACPI: CPU3 has been hot-added
Dec 12 17:37:37.739935 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 12 17:37:37.739944 kernel: printk: legacy console [ttyAMA0] enabled
Dec 12 17:37:37.739950 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 12 17:37:37.740075 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 12 17:37:37.740137 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 12 17:37:37.740195 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 12 17:37:37.740250 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 12 17:37:37.740348 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 12 17:37:37.740362 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 12 17:37:37.740369 kernel: PCI host bridge to bus 0000:00
Dec 12 17:37:37.740444 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 12 17:37:37.740497 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 12 17:37:37.740548 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 12 17:37:37.740598 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 12 17:37:37.740686 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Dec 12 17:37:37.740757 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 12 17:37:37.740819 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Dec 12 17:37:37.740876 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Dec 12 17:37:37.740933 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 12 17:37:37.740990 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Dec 12 17:37:37.741047 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Dec 12 17:37:37.741107 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Dec 12 17:37:37.741159 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 12 17:37:37.741222 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 12 17:37:37.741293 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 12 17:37:37.741302 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 12 17:37:37.741309 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 12 17:37:37.741316 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 12 17:37:37.741323 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 12 17:37:37.741333 kernel: iommu: Default domain type: Translated
Dec 12 17:37:37.741340 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 12 17:37:37.741347 kernel: efivars: Registered efivars operations
Dec 12 17:37:37.741354 kernel: vgaarb: loaded
Dec 12 17:37:37.741361 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 12 17:37:37.741368 kernel: VFS: Disk quotas dquot_6.6.0
Dec 12 17:37:37.741375 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 12 17:37:37.741382 kernel: pnp: PnP ACPI init
Dec 12 17:37:37.741461 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 12 17:37:37.741474 kernel: pnp: PnP ACPI: found 1 devices
Dec 12 17:37:37.741481 kernel: NET: Registered PF_INET protocol family
Dec 12 17:37:37.741489 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 12 17:37:37.741496 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 12 17:37:37.741503 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 12 17:37:37.741510 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 12 17:37:37.741518 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 12 17:37:37.741525 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 12 17:37:37.741532 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:37:37.741541 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 12 17:37:37.741548 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 12 17:37:37.741556 kernel: PCI: CLS 0 bytes, default 64
Dec 12 17:37:37.741563 kernel: kvm [1]: HYP mode not available
Dec 12 17:37:37.741570 kernel: Initialise system trusted keyrings
Dec 12 17:37:37.741577 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 12 17:37:37.741584 kernel: Key type asymmetric registered
Dec 12 17:37:37.741591 kernel: Asymmetric key parser 'x509' registered
Dec 12 17:37:37.741598 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 12 17:37:37.741606 kernel: io scheduler mq-deadline registered
Dec 12 17:37:37.741613 kernel: io scheduler kyber registered
Dec 12 17:37:37.741620 kernel: io scheduler bfq registered
Dec 12 17:37:37.741627 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 12 17:37:37.741634 kernel: ACPI: button: Power Button [PWRB]
Dec 12 17:37:37.741641 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 12 17:37:37.741700 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 12 17:37:37.741710 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 12 17:37:37.741717 kernel: thunder_xcv, ver 1.0
Dec 12 17:37:37.741725 kernel: thunder_bgx, ver 1.0
Dec 12 17:37:37.741732 kernel: nicpf, ver 1.0
Dec 12 17:37:37.741739 kernel: nicvf, ver 1.0
Dec 12 17:37:37.741803 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 12 17:37:37.741858 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:37:37 UTC (1765561057)
Dec 12 17:37:37.741867 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 12 17:37:37.741874 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Dec 12 17:37:37.741881 kernel: watchdog: NMI not fully supported
Dec 12 17:37:37.741890 kernel: watchdog: Hard watchdog permanently disabled
Dec 12 17:37:37.741897 kernel: NET: Registered PF_INET6 protocol family
Dec 12 17:37:37.741904 kernel: Segment Routing with IPv6
Dec 12 17:37:37.741910 kernel: In-situ OAM (IOAM) with IPv6
Dec 12 17:37:37.741917 kernel: NET: Registered PF_PACKET protocol family
Dec 12 17:37:37.741924 kernel: Key type dns_resolver registered
Dec 12 17:37:37.741931 kernel: registered taskstats version 1
Dec 12 17:37:37.741937 kernel: Loading compiled-in X.509 certificates
Dec 12 17:37:37.741944 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 92f3a94fb747a7ba7cbcfde1535be91b86f9429a'
Dec 12 17:37:37.741953 kernel: Demotion targets for Node 0: null
Dec 12 17:37:37.741959 kernel: Key type .fscrypt registered
Dec 12 17:37:37.741966 kernel: Key type fscrypt-provisioning registered
Dec 12 17:37:37.741973 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 12 17:37:37.741980 kernel: ima: Allocated hash algorithm: sha1
Dec 12 17:37:37.741986 kernel: ima: No architecture policies found
Dec 12 17:37:37.741993 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 12 17:37:37.742000 kernel: clk: Disabling unused clocks
Dec 12 17:37:37.742006 kernel: PM: genpd: Disabling unused power domains
Dec 12 17:37:37.742014 kernel: Warning: unable to open an initial console.
Dec 12 17:37:37.742021 kernel: Freeing unused kernel memory: 39552K
Dec 12 17:37:37.742028 kernel: Run /init as init process
Dec 12 17:37:37.742035 kernel: with arguments:
Dec 12 17:37:37.742042 kernel: /init
Dec 12 17:37:37.742049 kernel: with environment:
Dec 12 17:37:37.742055 kernel: HOME=/
Dec 12 17:37:37.742062 kernel: TERM=linux
Dec 12 17:37:37.742069 systemd[1]: Successfully made /usr/ read-only.
Dec 12 17:37:37.742081 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Dec 12 17:37:37.742088 systemd[1]: Detected virtualization kvm.
Dec 12 17:37:37.742096 systemd[1]: Detected architecture arm64.
Dec 12 17:37:37.742103 systemd[1]: Running in initrd.
Dec 12 17:37:37.742110 systemd[1]: No hostname configured, using default hostname.
Dec 12 17:37:37.742117 systemd[1]: Hostname set to .
Dec 12 17:37:37.742125 systemd[1]: Initializing machine ID from VM UUID.
Dec 12 17:37:37.742133 systemd[1]: Queued start job for default target initrd.target.
Dec 12 17:37:37.742141 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 12 17:37:37.742148 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 12 17:37:37.742156 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 12 17:37:37.742164 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 12 17:37:37.742172 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 12 17:37:37.742180 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 12 17:37:37.742189 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 12 17:37:37.742197 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 12 17:37:37.742205 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 12 17:37:37.742212 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 12 17:37:37.742219 systemd[1]: Reached target paths.target - Path Units.
Dec 12 17:37:37.742227 systemd[1]: Reached target slices.target - Slice Units.
Dec 12 17:37:37.742234 systemd[1]: Reached target swap.target - Swaps.
Dec 12 17:37:37.742241 systemd[1]: Reached target timers.target - Timer Units.
Dec 12 17:37:37.742250 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 12 17:37:37.742266 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 12 17:37:37.742274 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 12 17:37:37.742292 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Dec 12 17:37:37.742300 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 12 17:37:37.742307 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 12 17:37:37.742315 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 12 17:37:37.742322 systemd[1]: Reached target sockets.target - Socket Units.
Dec 12 17:37:37.742329 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 12 17:37:37.742339 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 12 17:37:37.742347 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 12 17:37:37.742355 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Dec 12 17:37:37.742362 systemd[1]: Starting systemd-fsck-usr.service...
Dec 12 17:37:37.742369 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 12 17:37:37.742376 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 12 17:37:37.742384 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:37:37.742391 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 12 17:37:37.742400 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 12 17:37:37.742414 systemd[1]: Finished systemd-fsck-usr.service.
Dec 12 17:37:37.742438 systemd-journald[245]: Collecting audit messages is disabled.
Dec 12 17:37:37.742458 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 12 17:37:37.742466 systemd-journald[245]: Journal started
Dec 12 17:37:37.742483 systemd-journald[245]: Runtime Journal (/run/log/journal/fa6343ad9f914ff2ad4ada8ddf2d4f22) is 6M, max 48.5M, 42.4M free.
Dec 12 17:37:37.751414 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 12 17:37:37.751445 kernel: Bridge firewalling registered
Dec 12 17:37:37.734710 systemd-modules-load[246]: Inserted module 'overlay'
Dec 12 17:37:37.752959 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:37:37.748997 systemd-modules-load[246]: Inserted module 'br_netfilter'
Dec 12 17:37:37.756297 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 12 17:37:37.756715 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 12 17:37:37.759295 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 12 17:37:37.762788 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 12 17:37:37.764377 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 12 17:37:37.767373 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 12 17:37:37.779835 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 12 17:37:37.787453 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 12 17:37:37.788507 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 12 17:37:37.792488 systemd-tmpfiles[272]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Dec 12 17:37:37.794694 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 12 17:37:37.795914 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 12 17:37:37.798690 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 12 17:37:37.800778 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 12 17:37:37.823760 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=361f5baddf90aee3bc7ee7e9be879bc0cc94314f224faa1e2791d9b44cd3ec52
Dec 12 17:37:37.839851 systemd-resolved[290]: Positive Trust Anchors:
Dec 12 17:37:37.839868 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 12 17:37:37.839900 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 12 17:37:37.844741 systemd-resolved[290]: Defaulting to hostname 'linux'.
Dec 12 17:37:37.845793 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 12 17:37:37.848929 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 12 17:37:37.897293 kernel: SCSI subsystem initialized
Dec 12 17:37:37.902276 kernel: Loading iSCSI transport class v2.0-870.
Dec 12 17:37:37.909288 kernel: iscsi: registered transport (tcp)
Dec 12 17:37:37.922286 kernel: iscsi: registered transport (qla4xxx)
Dec 12 17:37:37.922307 kernel: QLogic iSCSI HBA Driver
Dec 12 17:37:37.938348 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 12 17:37:37.961740 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 12 17:37:37.963743 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 12 17:37:38.006689 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 12 17:37:38.008894 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 12 17:37:38.075317 kernel: raid6: neonx8 gen() 15663 MB/s
Dec 12 17:37:38.092279 kernel: raid6: neonx4 gen() 15594 MB/s
Dec 12 17:37:38.109283 kernel: raid6: neonx2 gen() 13132 MB/s
Dec 12 17:37:38.126301 kernel: raid6: neonx1 gen() 10372 MB/s
Dec 12 17:37:38.143300 kernel: raid6: int64x8 gen() 6852 MB/s
Dec 12 17:37:38.160288 kernel: raid6: int64x4 gen() 7313 MB/s
Dec 12 17:37:38.177291 kernel: raid6: int64x2 gen() 6052 MB/s
Dec 12 17:37:38.194289 kernel: raid6: int64x1 gen() 5014 MB/s
Dec 12 17:37:38.194312 kernel: raid6: using algorithm neonx8 gen() 15663 MB/s
Dec 12 17:37:38.211293 kernel: raid6: .... xor() 11912 MB/s, rmw enabled
Dec 12 17:37:38.211324 kernel: raid6: using neon recovery algorithm
Dec 12 17:37:38.216548 kernel: xor: measuring software checksum speed
Dec 12 17:37:38.216573 kernel: 8regs : 21254 MB/sec
Dec 12 17:37:38.217757 kernel: 32regs : 20950 MB/sec
Dec 12 17:37:38.217777 kernel: arm64_neon : 28089 MB/sec
Dec 12 17:37:38.217786 kernel: xor: using function: arm64_neon (28089 MB/sec)
Dec 12 17:37:38.272452 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 12 17:37:38.278602 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 12 17:37:38.283072 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 12 17:37:38.316763 systemd-udevd[498]: Using default interface naming scheme 'v255'.
Dec 12 17:37:38.320955 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 12 17:37:38.322909 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 12 17:37:38.345361 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
Dec 12 17:37:38.371877 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 12 17:37:38.374355 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 12 17:37:38.436501 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 12 17:37:38.438955 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 12 17:37:38.497284 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Dec 12 17:37:38.497983 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 12 17:37:38.507629 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 12 17:37:38.507678 kernel: GPT:9289727 != 19775487
Dec 12 17:37:38.507688 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 12 17:37:38.507705 kernel: GPT:9289727 != 19775487
Dec 12 17:37:38.508598 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 12 17:37:38.509562 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 17:37:38.517143 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 12 17:37:38.517294 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:37:38.520604 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:37:38.522716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 12 17:37:38.554330 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 12 17:37:38.555678 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 12 17:37:38.564753 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 12 17:37:38.570990 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 12 17:37:38.583887 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 12 17:37:38.590208 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 12 17:37:38.591742 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 12 17:37:38.595899 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 12 17:37:38.601505 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 12 17:37:38.603533 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 12 17:37:38.606095 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 12 17:37:38.607955 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 12 17:37:38.628304 disk-uuid[590]: Primary Header is updated.
Dec 12 17:37:38.628304 disk-uuid[590]: Secondary Entries is updated.
Dec 12 17:37:38.628304 disk-uuid[590]: Secondary Header is updated.
Dec 12 17:37:38.632708 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 12 17:37:38.634720 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 17:37:39.642275 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 12 17:37:39.642932 disk-uuid[596]: The operation has completed successfully.
Dec 12 17:37:39.677324 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 12 17:37:39.677431 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 12 17:37:39.697525 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 12 17:37:39.726849 sh[609]: Success
Dec 12 17:37:39.739361 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 12 17:37:39.739433 kernel: device-mapper: uevent: version 1.0.3
Dec 12 17:37:39.740499 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Dec 12 17:37:39.748279 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Dec 12 17:37:39.777051 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 12 17:37:39.779665 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 12 17:37:39.790288 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 12 17:37:39.798289 kernel: BTRFS: device fsid 6d6d314d-b8a1-4727-8a34-8525e276a248 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (621)
Dec 12 17:37:39.800425 kernel: BTRFS info (device dm-0): first mount of filesystem 6d6d314d-b8a1-4727-8a34-8525e276a248
Dec 12 17:37:39.800452 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:37:39.805279 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 12 17:37:39.805305 kernel: BTRFS info (device dm-0): enabling free space tree
Dec 12 17:37:39.806432 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 12 17:37:39.807605 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Dec 12 17:37:39.808876 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 12 17:37:39.809654 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 12 17:37:39.811102 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 12 17:37:39.835288 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (654)
Dec 12 17:37:39.837957 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:37:39.837992 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:37:39.841560 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 17:37:39.841592 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 17:37:39.846043 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 12 17:37:39.847794 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:37:39.848440 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 12 17:37:39.908845 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 12 17:37:39.911566 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 12 17:37:39.948460 systemd-networkd[794]: lo: Link UP
Dec 12 17:37:39.948476 systemd-networkd[794]: lo: Gained carrier
Dec 12 17:37:39.949168 systemd-networkd[794]: Enumeration completed
Dec 12 17:37:39.949476 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 12 17:37:39.949672 systemd-networkd[794]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:37:39.949676 systemd-networkd[794]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 12 17:37:39.951118 systemd-networkd[794]: eth0: Link UP
Dec 12 17:37:39.951209 systemd-networkd[794]: eth0: Gained carrier
Dec 12 17:37:39.951218 systemd-networkd[794]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 12 17:37:39.951716 systemd[1]: Reached target network.target - Network.
Dec 12 17:37:39.964193 ignition[697]: Ignition 2.22.0
Dec 12 17:37:39.964207 ignition[697]: Stage: fetch-offline
Dec 12 17:37:39.964239 ignition[697]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:37:39.964247 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:37:39.965335 ignition[697]: parsed url from cmdline: ""
Dec 12 17:37:39.965340 ignition[697]: no config URL provided
Dec 12 17:37:39.965347 ignition[697]: reading system config file "/usr/lib/ignition/user.ign"
Dec 12 17:37:39.965357 ignition[697]: no config at "/usr/lib/ignition/user.ign"
Dec 12 17:37:39.965380 ignition[697]: op(1): [started] loading QEMU firmware config module
Dec 12 17:37:39.965385 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 12 17:37:39.975486 ignition[697]: op(1): [finished] loading QEMU firmware config module
Dec 12 17:37:39.976309 systemd-networkd[794]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 12 17:37:40.019824 ignition[697]: parsing config with SHA512: e6239a5c0719511dca7a7061a0b9d52b609db7d69a8b3ebbf327671872f8892bad4e90084ca6a42b83d108d51eda813cd3f9d45367bc7845d7ee16a33334e32a
Dec 12 17:37:40.024842 unknown[697]: fetched base config from "system"
Dec 12 17:37:40.024856 unknown[697]: fetched user config from "qemu"
Dec 12 17:37:40.025297 ignition[697]: fetch-offline: fetch-offline passed
Dec 12 17:37:40.025353 ignition[697]: Ignition finished successfully
Dec 12 17:37:40.028614 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 12 17:37:40.029762 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 12 17:37:40.030460 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 12 17:37:40.067365 ignition[810]: Ignition 2.22.0
Dec 12 17:37:40.067376 ignition[810]: Stage: kargs
Dec 12 17:37:40.067534 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:37:40.067543 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:37:40.068370 ignition[810]: kargs: kargs passed
Dec 12 17:37:40.068424 ignition[810]: Ignition finished successfully
Dec 12 17:37:40.075326 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 12 17:37:40.078119 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 12 17:37:40.126137 ignition[818]: Ignition 2.22.0
Dec 12 17:37:40.126154 ignition[818]: Stage: disks
Dec 12 17:37:40.126330 ignition[818]: no configs at "/usr/lib/ignition/base.d"
Dec 12 17:37:40.126339 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:37:40.127364 ignition[818]: disks: disks passed
Dec 12 17:37:40.129126 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 12 17:37:40.127422 ignition[818]: Ignition finished successfully
Dec 12 17:37:40.130508 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 12 17:37:40.133372 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 12 17:37:40.134820 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 12 17:37:40.136386 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 12 17:37:40.137995 systemd[1]: Reached target basic.target - Basic System.
Dec 12 17:37:40.140331 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 12 17:37:40.169702 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Dec 12 17:37:40.174837 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 12 17:37:40.177366 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 12 17:37:40.255288 kernel: EXT4-fs (vda9): mounted filesystem 895d7845-d0e8-43ae-a778-7804b473b868 r/w with ordered data mode. Quota mode: none.
Dec 12 17:37:40.255703 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 12 17:37:40.256810 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 12 17:37:40.259871 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 17:37:40.263178 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 12 17:37:40.264092 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 12 17:37:40.264135 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 12 17:37:40.264157 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 12 17:37:40.276834 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 12 17:37:40.279512 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 12 17:37:40.284174 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (836)
Dec 12 17:37:40.284228 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:37:40.284254 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:37:40.287508 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 17:37:40.287550 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 17:37:40.291488 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 17:37:40.316468 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory
Dec 12 17:37:40.320314 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory
Dec 12 17:37:40.323205 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory
Dec 12 17:37:40.326952 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 12 17:37:40.400635 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 12 17:37:40.402583 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 12 17:37:40.405496 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 12 17:37:40.426321 kernel: BTRFS info (device vda6): last unmount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:37:40.442410 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 12 17:37:40.458782 ignition[950]: INFO : Ignition 2.22.0
Dec 12 17:37:40.458782 ignition[950]: INFO : Stage: mount
Dec 12 17:37:40.461388 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:37:40.461388 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:37:40.461388 ignition[950]: INFO : mount: mount passed
Dec 12 17:37:40.461388 ignition[950]: INFO : Ignition finished successfully
Dec 12 17:37:40.462182 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 12 17:37:40.464885 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 12 17:37:40.798132 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 12 17:37:40.799624 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 12 17:37:40.818302 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (962)
Dec 12 17:37:40.821465 kernel: BTRFS info (device vda6): first mount of filesystem 4b8ce5a5-a2aa-4c44-bc9b-80e30d06d25f
Dec 12 17:37:40.821547 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Dec 12 17:37:40.825467 kernel: BTRFS info (device vda6): turning on async discard
Dec 12 17:37:40.825511 kernel: BTRFS info (device vda6): enabling free space tree
Dec 12 17:37:40.827241 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 12 17:37:40.871532 ignition[979]: INFO : Ignition 2.22.0
Dec 12 17:37:40.871532 ignition[979]: INFO : Stage: files
Dec 12 17:37:40.873091 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 12 17:37:40.873091 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 12 17:37:40.875337 ignition[979]: DEBUG : files: compiled without relabeling support, skipping
Dec 12 17:37:40.876961 ignition[979]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 12 17:37:40.876961 ignition[979]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 12 17:37:40.880218 ignition[979]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 12 17:37:40.881578 ignition[979]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 12 17:37:40.881578 ignition[979]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 12 17:37:40.880809 unknown[979]: wrote ssh authorized keys file for user: core
Dec 12 17:37:40.885903 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 12 17:37:40.885903 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Dec 12 17:37:40.980712 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 12 17:37:41.113191 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Dec 12 17:37:41.113191 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 12 17:37:41.116918 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 12 17:37:41.116918 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 17:37:41.116918 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 12 17:37:41.116918 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 17:37:41.116918 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 12 17:37:41.116918 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 17:37:41.116918 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 12 17:37:41.127583 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 17:37:41.127583 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 12 17:37:41.127583 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 12 17:37:41.127583 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 12 17:37:41.127583 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 12 17:37:41.127583 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Dec 12 17:37:41.550529 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 12 17:37:41.662813 systemd-networkd[794]: eth0: Gained IPv6LL
Dec 12 17:37:41.779483 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Dec 12 17:37:41.779483 ignition[979]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 12 17:37:41.782941 ignition[979]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 17:37:41.782941 ignition[979]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 12 17:37:41.782941 ignition[979]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 12 17:37:41.782941 ignition[979]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 12 17:37:41.782941 ignition[979]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 12 17:37:41.782941 ignition[979]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 12 17:37:41.782941 ignition[979]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 12 17:37:41.782941 ignition[979]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 12 17:37:41.797755 ignition[979]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 12 17:37:41.801728 ignition[979]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 12 17:37:41.804483 ignition[979]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 12 17:37:41.804483 ignition[979]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 12 17:37:41.804483 ignition[979]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 12 17:37:41.804483 ignition[979]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 17:37:41.804483 ignition[979]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 12 17:37:41.804483 ignition[979]: INFO : files: files passed
Dec 12 17:37:41.804483 ignition[979]: INFO : Ignition finished successfully
Dec 12 17:37:41.806539 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 12 17:37:41.809080 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 12 17:37:41.821413 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 12 17:37:41.824901 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 12 17:37:41.825122 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 12 17:37:41.830573 initrd-setup-root-after-ignition[1008]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 12 17:37:41.834100 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:37:41.834100 initrd-setup-root-after-ignition[1010]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:37:41.838651 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 12 17:37:41.837529 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 12 17:37:41.839860 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 12 17:37:41.843427 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 12 17:37:41.884443 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 12 17:37:41.884547 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 12 17:37:41.886539 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 12 17:37:41.888110 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 12 17:37:41.889742 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 12 17:37:41.890563 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 12 17:37:41.904076 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 12 17:37:41.906405 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 17:37:41.926912 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:37:41.928094 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:37:41.929854 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 17:37:41.931340 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 17:37:41.931485 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:37:41.933566 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 17:37:41.935165 systemd[1]: Stopped target basic.target - Basic System. Dec 12 17:37:41.936546 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 17:37:41.937976 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:37:41.939572 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 17:37:41.941216 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:37:41.943183 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 17:37:41.944888 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:37:41.946687 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 17:37:41.948423 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 17:37:41.949974 systemd[1]: Stopped target swap.target - Swaps. Dec 12 17:37:41.951352 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 17:37:41.951496 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:37:41.953611 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:37:41.955273 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:37:41.957048 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 17:37:41.960342 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:37:41.961452 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 17:37:41.961573 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 12 17:37:41.963985 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 17:37:41.964102 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:37:41.965996 systemd[1]: Stopped target paths.target - Path Units. Dec 12 17:37:41.967375 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 17:37:41.968163 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:37:41.969422 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 17:37:41.970768 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 17:37:41.972568 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 17:37:41.972647 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:37:41.974354 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 17:37:41.974447 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:37:41.975909 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
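The cascade above is systemd tearing the initrd down in reverse dependency order: targets are detached, services deactivated, sockets closed. A small sketch that tallies those actions from captured journal text (the `journal_text` sample here is a three-line excerpt, not the full log):

    import re
    from collections import Counter

    # Count "Stopped target" / "Stopped" / "Closed" actions in a journal excerpt.
    journal_text = """
    systemd[1]: Stopped target timers.target - Timer Units.
    systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
    systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
    """
    actions = Counter(
        m.group(1)
        for m in re.finditer(r"systemd\[1\]: (Stopped target|Stopped|Closed) ", journal_text)
    )
    for action, count in actions.most_common():
        print(f"{action}: {count}")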
Dec 12 17:37:41.976029 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:37:41.977604 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 17:37:41.977715 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 17:37:41.979769 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 17:37:41.981084 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 17:37:41.981213 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:37:41.983711 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 17:37:41.984487 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 17:37:41.984619 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:37:41.986482 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 17:37:41.986584 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:37:41.991638 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 17:37:41.999439 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 17:37:42.008077 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 17:37:42.015822 ignition[1035]: INFO : Ignition 2.22.0 Dec 12 17:37:42.015822 ignition[1035]: INFO : Stage: umount Dec 12 17:37:42.018554 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:37:42.018554 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 12 17:37:42.018554 ignition[1035]: INFO : umount: umount passed Dec 12 17:37:42.018554 ignition[1035]: INFO : Ignition finished successfully Dec 12 17:37:42.019753 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 17:37:42.019857 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 17:37:42.025957 systemd[1]: Stopped target network.target - Network. Dec 12 17:37:42.027039 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 17:37:42.027114 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 17:37:42.028931 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 17:37:42.028976 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 17:37:42.030353 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 17:37:42.030410 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 17:37:42.031895 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 17:37:42.031932 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 17:37:42.033557 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 17:37:42.035331 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 17:37:42.041718 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 17:37:42.041820 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 17:37:42.046322 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 12 17:37:42.046537 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 17:37:42.046623 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 17:37:42.049711 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. 
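Ignition recorded its outcome in /etc/.ignition-result.json (op(12) earlier in this log). A hedged sketch that reads it back after boot; the file's schema is not shown in the log, so this just dumps whatever keys it contains:

    import json
    import pathlib

    # Read the result file Ignition wrote during provisioning, if present.
    result_path = pathlib.Path("/etc/.ignition-result.json")
    if result_path.exists():
        result = json.loads(result_path.read_text())
        for key, value in sorted(result.items()):
            print(f"{key}: {value}")
    else:
        print("no Ignition result file; this boot was not Ignition-provisioned")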
Dec 12 17:37:42.050239 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 17:37:42.052307 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 17:37:42.052345 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:37:42.055731 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 17:37:42.056754 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 17:37:42.056835 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:37:42.058692 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:37:42.058738 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:37:42.061385 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 17:37:42.061442 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 17:37:42.063114 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 17:37:42.063158 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:37:42.066933 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:37:42.071041 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 12 17:37:42.071097 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 12 17:37:42.071485 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 17:37:42.071561 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 17:37:42.075239 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 17:37:42.075584 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 17:37:42.080803 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 17:37:42.081386 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:37:42.083641 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 17:37:42.083720 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 17:37:42.085594 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 17:37:42.085680 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 17:37:42.086846 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 17:37:42.086879 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:37:42.088319 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 17:37:42.088368 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:37:42.090778 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 17:37:42.090827 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 17:37:42.093182 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 17:37:42.093231 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:37:42.096503 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 17:37:42.097449 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 17:37:42.097503 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Dec 12 17:37:42.100005 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 12 17:37:42.100042 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:37:42.103061 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:37:42.103104 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:37:42.107248 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 12 17:37:42.107316 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 12 17:37:42.107349 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 12 17:37:42.114404 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 17:37:42.116285 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 17:37:42.117447 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 17:37:42.119686 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 17:37:42.140249 systemd[1]: Switching root. Dec 12 17:37:42.174373 systemd-journald[245]: Journal stopped Dec 12 17:37:42.946768 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). Dec 12 17:37:42.946827 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 17:37:42.946839 kernel: SELinux: policy capability open_perms=1 Dec 12 17:37:42.946849 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 17:37:42.946861 kernel: SELinux: policy capability always_check_network=0 Dec 12 17:37:42.946872 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 17:37:42.946881 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 17:37:42.946892 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 17:37:42.946903 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 17:37:42.946912 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 17:37:42.946922 kernel: audit: type=1403 audit(1765561062.343:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 12 17:37:42.946937 systemd[1]: Successfully loaded SELinux policy in 59.219ms. Dec 12 17:37:42.946953 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.448ms. Dec 12 17:37:42.946965 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:37:42.946976 systemd[1]: Detected virtualization kvm. Dec 12 17:37:42.946986 systemd[1]: Detected architecture arm64. Dec 12 17:37:42.946996 systemd[1]: Detected first boot. Dec 12 17:37:42.947009 systemd[1]: Initializing machine ID from VM UUID. Dec 12 17:37:42.947020 zram_generator::config[1083]: No configuration found. Dec 12 17:37:42.947031 kernel: NET: Registered PF_VSOCK protocol family Dec 12 17:37:42.947041 systemd[1]: Populated /etc with preset unit settings. Dec 12 17:37:42.947052 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 12 17:37:42.947063 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 17:37:42.947073 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
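The systemd version line above advertises compile-time features as +FLAG (enabled) / -FLAG (disabled). A small parser for that exact string:

    import re

    # Split the feature list logged by systemd 256.8 into enabled/disabled sets.
    line = ("systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX "
            "-APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL "
            "+BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
            "+LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 "
            "-PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB "
            "+ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)")
    flags = re.findall(r"([+-])([A-Z0-9_]+)", line)
    enabled = sorted(name for sign, name in flags if sign == "+")
    disabled = sorted(name for sign, name in flags if sign == "-")
    print("enabled: ", " ".join(enabled))
    print("disabled:", " ".join(disabled))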
Dec 12 17:37:42.947084 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 17:37:42.947097 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 17:37:42.947108 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 17:37:42.947118 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 17:37:42.947129 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 17:37:42.947140 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 17:37:42.947150 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 17:37:42.947165 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 17:37:42.947178 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 17:37:42.947190 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:37:42.947200 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:37:42.947211 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 17:37:42.947221 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 17:37:42.947232 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 17:37:42.947243 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:37:42.947253 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 12 17:37:42.947502 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:37:42.947524 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:37:42.947535 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 17:37:42.947547 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 17:37:42.947557 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 17:37:42.947568 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 17:37:42.947578 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:37:42.947589 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:37:42.947601 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:37:42.947617 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:37:42.947628 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 17:37:42.947640 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 17:37:42.947652 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 17:37:42.947664 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:37:42.947674 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:37:42.947685 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:37:42.947696 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 17:37:42.947706 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Dec 12 17:37:42.947719 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 17:37:42.947731 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 17:37:42.947742 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 17:37:42.947753 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 17:37:42.947763 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 17:37:42.947774 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 17:37:42.947786 systemd[1]: Reached target machines.target - Containers. Dec 12 17:37:42.947796 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 17:37:42.947807 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:37:42.947817 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:37:42.947829 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 17:37:42.947839 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:37:42.947849 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:37:42.947860 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:37:42.947870 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:37:42.947880 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:37:42.947891 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 17:37:42.947902 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 17:37:42.947913 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 17:37:42.947924 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 17:37:42.947935 systemd[1]: Stopped systemd-fsck-usr.service. Dec 12 17:37:42.947953 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:37:42.947967 kernel: fuse: init (API version 7.41) Dec 12 17:37:42.947977 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:37:42.947988 kernel: loop: module loaded Dec 12 17:37:42.947998 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:37:42.948009 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:37:42.948020 kernel: ACPI: bus type drm_connector registered Dec 12 17:37:42.948030 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 17:37:42.948041 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 17:37:42.948051 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:37:42.948062 systemd[1]: verity-setup.service: Deactivated successfully. Dec 12 17:37:42.948073 systemd[1]: Stopped verity-setup.service. 
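Each modprobe@<name>.service above is an instance of the modprobe@.service template, loading one kernel module per instance. A quick check of which of those modules ended up registered in /proc/modules (a module that reports absent may simply be built into the kernel, since /proc/modules lists only loadable modules):

    import pathlib

    # Compare the modules named by the modprobe@ instances against /proc/modules.
    wanted = {"configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"}
    loaded = {
        entry.split()[0]
        for entry in pathlib.Path("/proc/modules").read_text().splitlines()
        if entry.strip()
    }
    for mod in sorted(wanted):
        print(mod, "loaded" if mod in loaded else "absent (may be built-in)")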
Dec 12 17:37:42.948115 systemd-journald[1151]: Collecting audit messages is disabled. Dec 12 17:37:42.948139 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 17:37:42.948151 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 17:37:42.948162 systemd-journald[1151]: Journal started Dec 12 17:37:42.948184 systemd-journald[1151]: Runtime Journal (/run/log/journal/fa6343ad9f914ff2ad4ada8ddf2d4f22) is 6M, max 48.5M, 42.4M free. Dec 12 17:37:42.731125 systemd[1]: Queued start job for default target multi-user.target. Dec 12 17:37:42.752302 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 17:37:42.752702 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 17:37:42.950829 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:37:42.951525 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 17:37:42.952539 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 17:37:42.953559 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 17:37:42.954861 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 17:37:42.957301 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 17:37:42.958563 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:37:42.959936 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:37:42.960123 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 17:37:42.961429 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:37:42.961592 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:37:42.962786 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:37:42.962957 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:37:42.964309 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:37:42.964476 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:37:42.965692 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:37:42.965870 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:37:42.967068 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:37:42.967222 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:37:42.968599 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:37:42.969840 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:37:42.971238 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 12 17:37:42.972836 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 17:37:42.984776 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:37:42.987006 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 17:37:42.988990 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 17:37:42.990019 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 17:37:42.990057 systemd[1]: Reached target local-fs.target - Local File Systems. 
Dec 12 17:37:42.991886 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 17:37:43.000059 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 17:37:43.001109 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:37:43.002183 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 17:37:43.004338 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 17:37:43.005506 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:37:43.006701 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 17:37:43.007702 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:37:43.015405 systemd-journald[1151]: Time spent on flushing to /var/log/journal/fa6343ad9f914ff2ad4ada8ddf2d4f22 is 25.564ms for 883 entries. Dec 12 17:37:43.015405 systemd-journald[1151]: System Journal (/var/log/journal/fa6343ad9f914ff2ad4ada8ddf2d4f22) is 8M, max 195.6M, 187.6M free. Dec 12 17:37:43.058443 systemd-journald[1151]: Received client request to flush runtime journal. Dec 12 17:37:43.058500 kernel: loop0: detected capacity change from 0 to 119840 Dec 12 17:37:43.058517 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 17:37:43.011406 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:37:43.013323 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 17:37:43.015559 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 17:37:43.019298 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:37:43.020896 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 17:37:43.022285 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 17:37:43.030860 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 17:37:43.032362 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 17:37:43.035459 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 17:37:43.050093 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:37:43.051872 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 17:37:43.054801 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:37:43.071296 kernel: loop1: detected capacity change from 0 to 100632 Dec 12 17:37:43.071240 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 17:37:43.075181 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 17:37:43.088580 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Dec 12 17:37:43.088598 systemd-tmpfiles[1212]: ACLs are not supported, ignoring. Dec 12 17:37:43.091828 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
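The journald flush report above gives 25.564 ms for 883 entries written to /var/log/journal, i.e. roughly 29 microseconds per entry:

    # Average persistent-journal flush cost per entry, from the figures logged above.
    ms_total, entries = 25.564, 883
    print(f"{ms_total / entries * 1000:.1f} microseconds per entry")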
Dec 12 17:37:43.104279 kernel: loop2: detected capacity change from 0 to 200800 Dec 12 17:37:43.145310 kernel: loop3: detected capacity change from 0 to 119840 Dec 12 17:37:43.152276 kernel: loop4: detected capacity change from 0 to 100632 Dec 12 17:37:43.156279 kernel: loop5: detected capacity change from 0 to 200800 Dec 12 17:37:43.160589 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 12 17:37:43.160965 (sd-merge)[1223]: Merged extensions into '/usr'. Dec 12 17:37:43.164698 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 17:37:43.164715 systemd[1]: Reloading... Dec 12 17:37:43.217289 zram_generator::config[1253]: No configuration found. Dec 12 17:37:43.281597 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 17:37:43.381151 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 17:37:43.381425 systemd[1]: Reloading finished in 216 ms. Dec 12 17:37:43.416307 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 17:37:43.417646 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 17:37:43.432616 systemd[1]: Starting ensure-sysext.service... Dec 12 17:37:43.434291 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:37:43.443238 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)... Dec 12 17:37:43.443254 systemd[1]: Reloading... Dec 12 17:37:43.447437 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 17:37:43.447467 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 12 17:37:43.447694 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 17:37:43.447869 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 12 17:37:43.448584 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 12 17:37:43.448778 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Dec 12 17:37:43.448829 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Dec 12 17:37:43.451648 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:37:43.451661 systemd-tmpfiles[1287]: Skipping /boot Dec 12 17:37:43.457565 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:37:43.457582 systemd-tmpfiles[1287]: Skipping /boot Dec 12 17:37:43.495372 zram_generator::config[1317]: No configuration found. Dec 12 17:37:43.637502 systemd[1]: Reloading finished in 193 ms. Dec 12 17:37:43.660466 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 17:37:43.676024 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:37:43.683680 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:37:43.686251 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 17:37:43.695727 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
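The sd-merge entries above show systemd-sysext overlaying three extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes') onto /usr. A sketch that lists the raw images such a merge would consider; /etc/extensions is where Ignition linked kubernetes.raw earlier in this log, and the other two directories are standard sysext search locations:

    import pathlib

    # Enumerate candidate sysext images and resolve symlinks to their targets.
    for directory in ("/etc/extensions", "/run/extensions", "/var/lib/extensions"):
        base = pathlib.Path(directory)
        if base.is_dir():
            for image in sorted(base.glob("*.raw")):
                print(image, "->", image.resolve())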
Dec 12 17:37:43.698586 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:37:43.703536 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:37:43.707047 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 17:37:43.712098 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:37:43.715528 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:37:43.717610 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:37:43.721044 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:37:43.722068 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:37:43.722189 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:37:43.723903 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 17:37:43.726041 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:37:43.727300 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:37:43.728659 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:37:43.728807 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:37:43.730538 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:37:43.730699 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:37:43.740122 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:37:43.744554 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:37:43.747062 systemd-udevd[1354]: Using default interface naming scheme 'v255'. Dec 12 17:37:43.748508 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:37:43.749283 augenrules[1385]: No rules Dec 12 17:37:43.750729 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:37:43.751776 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:37:43.751904 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:37:43.752877 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:37:43.753438 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:37:43.761617 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 17:37:43.763572 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 17:37:43.765208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:37:43.765418 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:37:43.766881 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Dec 12 17:37:43.767092 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:37:43.771503 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:37:43.773791 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 17:37:43.775921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:37:43.776314 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:37:43.784464 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 17:37:43.796745 systemd[1]: Finished ensure-sysext.service. Dec 12 17:37:43.800501 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:37:43.801676 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:37:43.803969 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:37:43.805897 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:37:43.807776 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:37:43.815007 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:37:43.816028 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:37:43.816082 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:37:43.818651 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:37:43.821349 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 12 17:37:43.826607 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 17:37:43.827563 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 17:37:43.829431 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:37:43.829606 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:37:43.830955 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:37:43.831108 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:37:43.832367 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:37:43.832519 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:37:43.833813 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:37:43.834008 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:37:43.843486 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:37:43.843551 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:37:43.845890 augenrules[1431]: /sbin/augenrules: No change Dec 12 17:37:43.850784 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Dec 12 17:37:43.858226 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 12 17:37:43.859461 augenrules[1462]: No rules Dec 12 17:37:43.862483 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:37:43.862769 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:37:43.906927 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:37:43.911456 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 17:37:43.946222 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 17:37:43.953726 systemd-networkd[1436]: lo: Link UP Dec 12 17:37:43.953737 systemd-networkd[1436]: lo: Gained carrier Dec 12 17:37:43.954729 systemd-networkd[1436]: Enumeration completed Dec 12 17:37:43.954849 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:37:43.957551 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 17:37:43.959003 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:37:43.959017 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:37:43.960581 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 17:37:43.962083 systemd-networkd[1436]: eth0: Link UP Dec 12 17:37:43.962214 systemd-networkd[1436]: eth0: Gained carrier Dec 12 17:37:43.962243 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 12 17:37:43.969562 systemd-resolved[1353]: Positive Trust Anchors: Dec 12 17:37:43.969585 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:37:43.969618 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:37:43.977366 systemd-networkd[1436]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 12 17:37:43.978764 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 12 17:37:43.979173 systemd-resolved[1353]: Defaulting to hostname 'linux'. Dec 12 17:37:43.980163 systemd-timesyncd[1438]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 12 17:37:43.980218 systemd-timesyncd[1438]: Initial clock synchronization to Fri 2025-12-12 17:37:43.737403 UTC. Dec 12 17:37:43.980524 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 17:37:43.982389 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:37:43.983755 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 17:37:43.985588 systemd[1]: Reached target network.target - Network. 
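The networkd entry above logs the DHCPv4 lease as "address/prefix, gateway ... acquired from ...". A parser for that exact shape, using the stdlib ipaddress module to sanity-check the lease:

    import ipaddress
    import re

    # Parse the DHCPv4 lease line logged by systemd-networkd for eth0.
    entry = ("eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 "
             "acquired from 10.0.0.1")
    m = re.search(r"DHCPv4 address (\S+), gateway (\S+) acquired from (\S+)", entry)
    iface_addr = ipaddress.ip_interface(m.group(1))
    gateway = ipaddress.ip_address(m.group(2))
    server = ipaddress.ip_address(m.group(3))
    print("address:", iface_addr.ip, "network:", iface_addr.network)
    print("gateway in same subnet:", gateway in iface_addr.network)
    print("DHCP server is the gateway:", server == gateway)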
Dec 12 17:37:43.986690 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:37:43.987708 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:37:43.988702 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 17:37:43.989720 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 12 17:37:43.990921 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 17:37:43.991911 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 17:37:43.992966 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 17:37:43.994088 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 17:37:43.994124 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:37:43.994932 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:37:43.996498 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 17:37:43.999278 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 17:37:44.003158 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 17:37:44.005563 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 17:37:44.006593 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 17:37:44.012174 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 17:37:44.013439 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 17:37:44.016060 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 17:37:44.017666 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:37:44.019416 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:37:44.021378 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:37:44.021411 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:37:44.023566 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 17:37:44.026617 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 17:37:44.028592 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 17:37:44.032506 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 17:37:44.045184 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 17:37:44.046423 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 17:37:44.048532 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 17:37:44.049629 jq[1499]: false Dec 12 17:37:44.052524 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 17:37:44.055952 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 17:37:44.060485 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Dec 12 17:37:44.064584 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 17:37:44.066348 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 17:37:44.068563 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 17:37:44.070124 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 17:37:44.074445 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 17:37:44.077237 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 17:37:44.081635 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 17:37:44.081848 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 17:37:44.082102 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 17:37:44.082323 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 17:37:44.082949 extend-filesystems[1502]: Found /dev/vda6 Dec 12 17:37:44.084143 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 17:37:44.084395 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 17:37:44.085288 extend-filesystems[1502]: Found /dev/vda9 Dec 12 17:37:44.087610 extend-filesystems[1502]: Checking size of /dev/vda9 Dec 12 17:37:44.097579 extend-filesystems[1502]: Resized partition /dev/vda9 Dec 12 17:37:44.099544 (ntainerd)[1525]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 12 17:37:44.105611 extend-filesystems[1538]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 17:37:44.108224 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:37:44.113013 tar[1523]: linux-arm64/LICENSE Dec 12 17:37:44.113013 tar[1523]: linux-arm64/helm Dec 12 17:37:44.117269 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 12 17:37:44.124956 jq[1518]: true Dec 12 17:37:44.131121 dbus-daemon[1497]: [system] SELinux support is enabled Dec 12 17:37:44.131423 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 17:37:44.139861 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 17:37:44.139890 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 17:37:44.142450 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 17:37:44.142480 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 12 17:37:44.154321 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 12 17:37:44.156897 update_engine[1513]: I20251212 17:37:44.156683 1513 main.cc:92] Flatcar Update Engine starting Dec 12 17:37:44.160801 systemd[1]: Started update-engine.service - Update Engine. 
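The EXT4 resize above grows /dev/vda9 from 553472 to 1864699 blocks of 4 KiB each, i.e. from about 2.1 GiB to about 7.1 GiB; in bytes:

    # Convert the resize2fs block counts logged above into byte and GiB sizes.
    BLOCK = 4096  # 4 KiB blocks, per the "(4k)" annotation in the log
    old_blocks, new_blocks = 553472, 1864699
    for label, blocks in (("before", old_blocks), ("after", new_blocks)):
        size = blocks * BLOCK
        print(f"{label}: {blocks} blocks = {size} bytes = {size / 2**30:.2f} GiB")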
Dec 12 17:37:44.167183 update_engine[1513]: I20251212 17:37:44.160748 1513 update_check_scheduler.cc:74] Next update check in 10m22s Dec 12 17:37:44.163956 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 17:37:44.167282 jq[1544]: true Dec 12 17:37:44.168178 extend-filesystems[1538]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 17:37:44.168178 extend-filesystems[1538]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 12 17:37:44.168178 extend-filesystems[1538]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 12 17:37:44.171720 extend-filesystems[1502]: Resized filesystem in /dev/vda9 Dec 12 17:37:44.172551 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 17:37:44.174443 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 17:37:44.209943 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:37:44.232992 bash[1576]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:37:44.234294 systemd-logind[1509]: Watching system buttons on /dev/input/event0 (Power Button) Dec 12 17:37:44.234812 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 17:37:44.236692 systemd-logind[1509]: New seat seat0. Dec 12 17:37:44.239432 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 12 17:37:44.239827 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 17:37:44.243742 locksmithd[1546]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 17:37:44.297316 containerd[1525]: time="2025-12-12T17:37:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 17:37:44.298509 containerd[1525]: time="2025-12-12T17:37:44.298455117Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 12 17:37:44.310278 containerd[1525]: time="2025-12-12T17:37:44.310094195Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.657µs" Dec 12 17:37:44.310278 containerd[1525]: time="2025-12-12T17:37:44.310136395Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 17:37:44.310278 containerd[1525]: time="2025-12-12T17:37:44.310157650Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 17:37:44.310522 containerd[1525]: time="2025-12-12T17:37:44.310496412Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 17:37:44.310582 containerd[1525]: time="2025-12-12T17:37:44.310569525Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 17:37:44.310646 containerd[1525]: time="2025-12-12T17:37:44.310632243Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:37:44.310775 containerd[1525]: time="2025-12-12T17:37:44.310755080Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:37:44.310839 containerd[1525]: time="2025-12-12T17:37:44.310825322Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:37:44.313385 containerd[1525]: time="2025-12-12T17:37:44.312631575Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:37:44.313716 containerd[1525]: time="2025-12-12T17:37:44.313544611Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:37:44.313788 containerd[1525]: time="2025-12-12T17:37:44.313772172Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:37:44.313835 containerd[1525]: time="2025-12-12T17:37:44.313823874Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 17:37:44.313990 containerd[1525]: time="2025-12-12T17:37:44.313971186Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 17:37:44.314284 containerd[1525]: time="2025-12-12T17:37:44.314228341Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:37:44.314342 containerd[1525]: time="2025-12-12T17:37:44.314297575Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:37:44.314342 containerd[1525]: time="2025-12-12T17:37:44.314309288Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 17:37:44.314377 containerd[1525]: time="2025-12-12T17:37:44.314340667Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 17:37:44.314578 containerd[1525]: time="2025-12-12T17:37:44.314541271Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 17:37:44.314644 containerd[1525]: time="2025-12-12T17:37:44.314622800Z" level=info msg="metadata content store policy set" policy=shared Dec 12 17:37:44.318088 containerd[1525]: time="2025-12-12T17:37:44.318052698Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 17:37:44.318151 containerd[1525]: time="2025-12-12T17:37:44.318109520Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 17:37:44.318151 containerd[1525]: time="2025-12-12T17:37:44.318126121Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 17:37:44.318151 containerd[1525]: time="2025-12-12T17:37:44.318138067Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 17:37:44.318217 containerd[1525]: time="2025-12-12T17:37:44.318160447Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 17:37:44.318217 containerd[1525]: time="2025-12-12T17:37:44.318173247Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 17:37:44.318217 containerd[1525]: time="2025-12-12T17:37:44.318187481Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 17:37:44.318217 containerd[1525]: time="2025-12-12T17:37:44.318198458Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 17:37:44.318217 containerd[1525]: time="2025-12-12T17:37:44.318208969Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 17:37:44.318318 containerd[1525]: time="2025-12-12T17:37:44.318218898Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 17:37:44.318318 containerd[1525]: time="2025-12-12T17:37:44.318236314Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 17:37:44.318318 containerd[1525]: time="2025-12-12T17:37:44.318270717Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 17:37:44.318491 containerd[1525]: time="2025-12-12T17:37:44.318383703Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 17:37:44.318491 containerd[1525]: time="2025-12-12T17:37:44.318415197Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 17:37:44.318491 containerd[1525]: time="2025-12-12T17:37:44.318437694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 17:37:44.318491 containerd[1525]: time="2025-12-12T17:37:44.318450765Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 17:37:44.318491 containerd[1525]: time="2025-12-12T17:37:44.318461315Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 12 17:37:44.318491 containerd[1525]: time="2025-12-12T17:37:44.318470972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 17:37:44.318491 containerd[1525]: time="2025-12-12T17:37:44.318481290Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 17:37:44.318491 containerd[1525]: time="2025-12-12T17:37:44.318491103Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 17:37:44.318736 containerd[1525]: time="2025-12-12T17:37:44.318503010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 17:37:44.318736 containerd[1525]: time="2025-12-12T17:37:44.318513405Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 17:37:44.318736 containerd[1525]: time="2025-12-12T17:37:44.318524731Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 17:37:44.318786 containerd[1525]: time="2025-12-12T17:37:44.318738134Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 17:37:44.318786 containerd[1525]: time="2025-12-12T17:37:44.318758885Z" level=info msg="Start snapshots syncer" Dec 12 17:37:44.318826 containerd[1525]: time="2025-12-12T17:37:44.318785919Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 17:37:44.319328 containerd[1525]: time="2025-12-12T17:37:44.319284909Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 17:37:44.319453 containerd[1525]: time="2025-12-12T17:37:44.319341188Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 17:37:44.319453 containerd[1525]: time="2025-12-12T17:37:44.319408289Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 17:37:44.319644 containerd[1525]: time="2025-12-12T17:37:44.319622818Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 17:37:44.319700 containerd[1525]: time="2025-12-12T17:37:44.319653692Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 17:37:44.319700 containerd[1525]: time="2025-12-12T17:37:44.319668120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 17:37:44.319700 containerd[1525]: time="2025-12-12T17:37:44.319678825Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 17:37:44.319700 containerd[1525]: time="2025-12-12T17:37:44.319690461Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 17:37:44.319700 containerd[1525]: time="2025-12-12T17:37:44.319700158Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 17:37:44.319778 containerd[1525]: time="2025-12-12T17:37:44.319711484Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 17:37:44.319778 containerd[1525]: time="2025-12-12T17:37:44.319737587Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 17:37:44.319778 containerd[1525]: 
time="2025-12-12T17:37:44.319747633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 17:37:44.319778 containerd[1525]: time="2025-12-12T17:37:44.319762449Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:37:44.319850 containerd[1525]: time="2025-12-12T17:37:44.319801003Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:37:44.319850 containerd[1525]: time="2025-12-12T17:37:44.319816673Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:37:44.319850 containerd[1525]: time="2025-12-12T17:37:44.319824741Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:37:44.319850 containerd[1525]: time="2025-12-12T17:37:44.319834398Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:37:44.319850 containerd[1525]: time="2025-12-12T17:37:44.319841496Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:37:44.319850 containerd[1525]: time="2025-12-12T17:37:44.319850185Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:37:44.319943 containerd[1525]: time="2025-12-12T17:37:44.319860618Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:37:44.319943 containerd[1525]: time="2025-12-12T17:37:44.319938308Z" level=info msg="runtime interface created" Dec 12 17:37:44.319975 containerd[1525]: time="2025-12-12T17:37:44.319943350Z" level=info msg="created NRI interface" Dec 12 17:37:44.319975 containerd[1525]: time="2025-12-12T17:37:44.319958942Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 17:37:44.319975 containerd[1525]: time="2025-12-12T17:37:44.319971237Z" level=info msg="Connect containerd service" Dec 12 17:37:44.320023 containerd[1525]: time="2025-12-12T17:37:44.319992919Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:37:44.320808 containerd[1525]: time="2025-12-12T17:37:44.320772180Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:37:44.393282 containerd[1525]: time="2025-12-12T17:37:44.391904875Z" level=info msg="Start subscribing containerd event" Dec 12 17:37:44.393282 containerd[1525]: time="2025-12-12T17:37:44.391995713Z" level=info msg="Start recovering state" Dec 12 17:37:44.393282 containerd[1525]: time="2025-12-12T17:37:44.392088452Z" level=info msg="Start event monitor" Dec 12 17:37:44.393282 containerd[1525]: time="2025-12-12T17:37:44.392108194Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:37:44.393282 containerd[1525]: time="2025-12-12T17:37:44.392116960Z" level=info msg="Start streaming server" Dec 12 17:37:44.393282 containerd[1525]: time="2025-12-12T17:37:44.392134220Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:37:44.393282 containerd[1525]: 
time="2025-12-12T17:37:44.392142404Z" level=info msg="runtime interface starting up..." Dec 12 17:37:44.393282 containerd[1525]: time="2025-12-12T17:37:44.392147602Z" level=info msg="starting plugins..." Dec 12 17:37:44.393282 containerd[1525]: time="2025-12-12T17:37:44.392161604Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:37:44.393282 containerd[1525]: time="2025-12-12T17:37:44.392372719Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 17:37:44.393282 containerd[1525]: time="2025-12-12T17:37:44.392420892Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 17:37:44.393282 containerd[1525]: time="2025-12-12T17:37:44.392508162Z" level=info msg="containerd successfully booted in 0.095598s" Dec 12 17:37:44.392613 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 17:37:44.445908 tar[1523]: linux-arm64/README.md Dec 12 17:37:44.464376 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 17:37:44.502927 sshd_keygen[1521]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:37:44.522186 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:37:44.526991 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:37:44.542716 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 17:37:44.542919 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:37:44.547433 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 17:37:44.568319 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:37:44.570776 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 17:37:44.572959 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 12 17:37:44.574173 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 17:37:45.693421 systemd-networkd[1436]: eth0: Gained IPv6LL Dec 12 17:37:45.696687 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 17:37:45.698238 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:37:45.702532 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 12 17:37:45.704958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:37:45.707125 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:37:45.733088 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 17:37:45.735029 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 12 17:37:45.735286 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 12 17:37:45.738245 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 17:37:46.239649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:37:46.241170 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 17:37:46.245804 (kubelet)[1638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:37:46.246362 systemd[1]: Startup finished in 2.066s (kernel) + 4.731s (initrd) + 3.962s (userspace) = 10.760s. 
Dec 12 17:37:46.574320 kubelet[1638]: E1212 17:37:46.574191 1638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:37:46.576418 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:37:46.576555 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:37:46.576894 systemd[1]: kubelet.service: Consumed 688ms CPU time, 248.7M memory peak. Dec 12 17:37:50.697800 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:37:50.699685 systemd[1]: Started sshd@0-10.0.0.93:22-10.0.0.1:34148.service - OpenSSH per-connection server daemon (10.0.0.1:34148). Dec 12 17:37:50.778320 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 34148 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:37:50.780416 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:50.787942 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 17:37:50.790752 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 17:37:50.802083 systemd-logind[1509]: New session 1 of user core. Dec 12 17:37:50.814300 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:37:50.816883 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 12 17:37:50.835544 (systemd)[1656]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:37:50.838905 systemd-logind[1509]: New session c1 of user core. Dec 12 17:37:50.956015 systemd[1656]: Queued start job for default target default.target. Dec 12 17:37:50.977202 systemd[1656]: Created slice app.slice - User Application Slice. Dec 12 17:37:50.977239 systemd[1656]: Reached target paths.target - Paths. Dec 12 17:37:50.977311 systemd[1656]: Reached target timers.target - Timers. Dec 12 17:37:50.978526 systemd[1656]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:37:50.988118 systemd[1656]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 17:37:50.988176 systemd[1656]: Reached target sockets.target - Sockets. Dec 12 17:37:50.988210 systemd[1656]: Reached target basic.target - Basic System. Dec 12 17:37:50.988236 systemd[1656]: Reached target default.target - Main User Target. Dec 12 17:37:50.988282 systemd[1656]: Startup finished in 141ms. Dec 12 17:37:50.988424 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 17:37:50.990504 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 17:37:51.050741 systemd[1]: Started sshd@1-10.0.0.93:22-10.0.0.1:45460.service - OpenSSH per-connection server daemon (10.0.0.1:45460). Dec 12 17:37:51.092610 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 45460 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:37:51.094024 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:51.098657 systemd-logind[1509]: New session 2 of user core. Dec 12 17:37:51.104443 systemd[1]: Started session-2.scope - Session 2 of User core. 
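[annotation] This kubelet exit is the normal pre-bootstrap state of a kubeadm-style node: /var/lib/kubelet/config.yaml is written by kubeadm init/join, so until that runs the unit fails and systemd keeps rescheduling it (the "Scheduled restart job" entries later in this log). For standalone kubelet experiments only, a minimal hand-written config would look like the sketch below; on a real kubeadm node this file is generated, not created by hand.

    # Sketch of a minimal /var/lib/kubelet/config.yaml -- standalone testing only;
    # kubeadm writes the real one. cgroupDriver matches the SystemdCgroup=true runc
    # option visible in the containerd CRI config dumped earlier in this log.
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF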
Dec 12 17:37:51.154068 sshd[1670]: Connection closed by 10.0.0.1 port 45460 Dec 12 17:37:51.154543 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:51.173103 systemd[1]: sshd@1-10.0.0.93:22-10.0.0.1:45460.service: Deactivated successfully. Dec 12 17:37:51.174738 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 17:37:51.175976 systemd-logind[1509]: Session 2 logged out. Waiting for processes to exit. Dec 12 17:37:51.179500 systemd[1]: Started sshd@2-10.0.0.93:22-10.0.0.1:45470.service - OpenSSH per-connection server daemon (10.0.0.1:45470). Dec 12 17:37:51.180345 systemd-logind[1509]: Removed session 2. Dec 12 17:37:51.238942 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 45470 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:37:51.240067 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:51.248214 systemd-logind[1509]: New session 3 of user core. Dec 12 17:37:51.269421 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 17:37:51.316432 sshd[1679]: Connection closed by 10.0.0.1 port 45470 Dec 12 17:37:51.316875 sshd-session[1676]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:51.332965 systemd[1]: sshd@2-10.0.0.93:22-10.0.0.1:45470.service: Deactivated successfully. Dec 12 17:37:51.339829 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 17:37:51.343835 systemd-logind[1509]: Session 3 logged out. Waiting for processes to exit. Dec 12 17:37:51.346087 systemd[1]: Started sshd@3-10.0.0.93:22-10.0.0.1:45482.service - OpenSSH per-connection server daemon (10.0.0.1:45482). Dec 12 17:37:51.348459 systemd-logind[1509]: Removed session 3. Dec 12 17:37:51.400675 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 45482 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:37:51.402329 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:51.407891 systemd-logind[1509]: New session 4 of user core. Dec 12 17:37:51.417423 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 17:37:51.469299 sshd[1688]: Connection closed by 10.0.0.1 port 45482 Dec 12 17:37:51.469747 sshd-session[1685]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:51.482323 systemd[1]: sshd@3-10.0.0.93:22-10.0.0.1:45482.service: Deactivated successfully. Dec 12 17:37:51.486780 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 17:37:51.493502 systemd-logind[1509]: Session 4 logged out. Waiting for processes to exit. Dec 12 17:37:51.495607 systemd[1]: Started sshd@4-10.0.0.93:22-10.0.0.1:45492.service - OpenSSH per-connection server daemon (10.0.0.1:45492). Dec 12 17:37:51.497052 systemd-logind[1509]: Removed session 4. Dec 12 17:37:51.569381 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 45492 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:37:51.573513 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:51.577408 systemd-logind[1509]: New session 5 of user core. Dec 12 17:37:51.592445 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 12 17:37:51.649425 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 17:37:51.649708 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:37:51.664245 sudo[1698]: pam_unix(sudo:session): session closed for user root Dec 12 17:37:51.666310 sshd[1697]: Connection closed by 10.0.0.1 port 45492 Dec 12 17:37:51.666881 sshd-session[1694]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:51.681467 systemd[1]: sshd@4-10.0.0.93:22-10.0.0.1:45492.service: Deactivated successfully. Dec 12 17:37:51.683477 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 17:37:51.684163 systemd-logind[1509]: Session 5 logged out. Waiting for processes to exit. Dec 12 17:37:51.686291 systemd[1]: Started sshd@5-10.0.0.93:22-10.0.0.1:45498.service - OpenSSH per-connection server daemon (10.0.0.1:45498). Dec 12 17:37:51.687164 systemd-logind[1509]: Removed session 5. Dec 12 17:37:51.759637 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 45498 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:37:51.761747 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:51.766311 systemd-logind[1509]: New session 6 of user core. Dec 12 17:37:51.780411 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 12 17:37:51.831631 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 17:37:51.831901 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:37:51.906472 sudo[1709]: pam_unix(sudo:session): session closed for user root Dec 12 17:37:51.911726 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 17:37:51.911988 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:37:51.920563 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:37:51.959960 augenrules[1731]: No rules Dec 12 17:37:51.961015 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:37:51.961210 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:37:51.962389 sudo[1708]: pam_unix(sudo:session): session closed for user root Dec 12 17:37:51.963605 sshd[1707]: Connection closed by 10.0.0.1 port 45498 Dec 12 17:37:51.963898 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Dec 12 17:37:51.984370 systemd[1]: sshd@5-10.0.0.93:22-10.0.0.1:45498.service: Deactivated successfully. Dec 12 17:37:51.985826 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 17:37:51.987915 systemd-logind[1509]: Session 6 logged out. Waiting for processes to exit. Dec 12 17:37:51.989757 systemd-logind[1509]: Removed session 6. Dec 12 17:37:51.991316 systemd[1]: Started sshd@6-10.0.0.93:22-10.0.0.1:45500.service - OpenSSH per-connection server daemon (10.0.0.1:45500). Dec 12 17:37:52.042854 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 45500 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:37:52.044494 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:37:52.049173 systemd-logind[1509]: New session 7 of user core. Dec 12 17:37:52.058422 systemd[1]: Started session-7.scope - Session 7 of User core. 
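[annotation] The sequence above is an audit-rule reset: sudo deletes the two shipped rule files, audit-rules is restarted, and augenrules — which concatenates every *.rules file under /etc/audit/rules.d/ into the live ruleset — correctly reports "No rules". If rules were wanted again, the pattern would be the sketch below (the watch path and key name are hypothetical, not taken from this host).

    # Hypothetical rule file; augenrules merges all *.rules under /etc/audit/rules.d/.
    cat <<'EOF' > /etc/audit/rules.d/10-example.rules
    -w /etc/kubernetes/ -p wa -k kube-config
    EOF
    augenrules --load    # rebuild and load the merged ruleset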
Dec 12 17:37:52.108984 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 17:37:52.109254 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:37:52.384121 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 17:37:52.402622 (dockerd)[1765]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 17:37:52.593378 dockerd[1765]: time="2025-12-12T17:37:52.593311572Z" level=info msg="Starting up" Dec 12 17:37:52.594273 dockerd[1765]: time="2025-12-12T17:37:52.594230533Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 17:37:52.604494 dockerd[1765]: time="2025-12-12T17:37:52.604435837Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 17:37:52.634069 dockerd[1765]: time="2025-12-12T17:37:52.634022155Z" level=info msg="Loading containers: start." Dec 12 17:37:52.642288 kernel: Initializing XFRM netlink socket Dec 12 17:37:52.824915 systemd-networkd[1436]: docker0: Link UP Dec 12 17:37:52.828689 dockerd[1765]: time="2025-12-12T17:37:52.828576218Z" level=info msg="Loading containers: done." Dec 12 17:37:52.842521 dockerd[1765]: time="2025-12-12T17:37:52.842468565Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 17:37:52.842647 dockerd[1765]: time="2025-12-12T17:37:52.842553550Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 17:37:52.842647 dockerd[1765]: time="2025-12-12T17:37:52.842637189Z" level=info msg="Initializing buildkit" Dec 12 17:37:52.868482 dockerd[1765]: time="2025-12-12T17:37:52.868446967Z" level=info msg="Completed buildkit initialization" Dec 12 17:37:52.876225 dockerd[1765]: time="2025-12-12T17:37:52.876171363Z" level=info msg="Daemon has completed initialization" Dec 12 17:37:52.876396 dockerd[1765]: time="2025-12-12T17:37:52.876274517Z" level=info msg="API listen on /run/docker.sock" Dec 12 17:37:52.876393 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 12 17:37:53.287117 containerd[1525]: time="2025-12-12T17:37:53.286801934Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 12 17:37:53.859796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1537178623.mount: Deactivated successfully. 
Dec 12 17:37:54.616617 containerd[1525]: time="2025-12-12T17:37:54.615665699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:37:54.617075 containerd[1525]: time="2025-12-12T17:37:54.617047543Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571042" Dec 12 17:37:54.618000 containerd[1525]: time="2025-12-12T17:37:54.617967899Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:37:54.620541 containerd[1525]: time="2025-12-12T17:37:54.620505762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:37:54.622185 containerd[1525]: time="2025-12-12T17:37:54.622146840Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 1.335304905s" Dec 12 17:37:54.622236 containerd[1525]: time="2025-12-12T17:37:54.622184339Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Dec 12 17:37:54.622686 containerd[1525]: time="2025-12-12T17:37:54.622666857Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 12 17:37:55.607371 containerd[1525]: time="2025-12-12T17:37:55.607301140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:37:55.608679 containerd[1525]: time="2025-12-12T17:37:55.608632895Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135479" Dec 12 17:37:55.610218 containerd[1525]: time="2025-12-12T17:37:55.610175090Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:37:55.612588 containerd[1525]: time="2025-12-12T17:37:55.612545342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:37:55.614026 containerd[1525]: time="2025-12-12T17:37:55.613647710Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 990.947426ms" Dec 12 17:37:55.614026 containerd[1525]: time="2025-12-12T17:37:55.613736526Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Dec 12 17:37:55.614432 
containerd[1525]: time="2025-12-12T17:37:55.614406216Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 12 17:37:56.456215 containerd[1525]: time="2025-12-12T17:37:56.456155724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:37:56.456924 containerd[1525]: time="2025-12-12T17:37:56.456880508Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191718" Dec 12 17:37:56.458142 containerd[1525]: time="2025-12-12T17:37:56.458083075Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:37:56.461078 containerd[1525]: time="2025-12-12T17:37:56.461022962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:37:56.462525 containerd[1525]: time="2025-12-12T17:37:56.462345193Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 847.902743ms" Dec 12 17:37:56.462525 containerd[1525]: time="2025-12-12T17:37:56.462382643Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Dec 12 17:37:56.462803 containerd[1525]: time="2025-12-12T17:37:56.462754835Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 12 17:37:56.826853 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 17:37:56.828381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:37:56.969024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:37:56.974374 (kubelet)[2053]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:37:57.036276 kubelet[2053]: E1212 17:37:57.036190 2053 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:37:57.039404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:37:57.039530 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:37:57.039867 systemd[1]: kubelet.service: Consumed 148ms CPU time, 106.9M memory peak. Dec 12 17:37:57.539957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702322173.mount: Deactivated successfully. 
Dec 12 17:37:57.713648 containerd[1525]: time="2025-12-12T17:37:57.713593994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:37:57.714404 containerd[1525]: time="2025-12-12T17:37:57.714375076Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805255" Dec 12 17:37:57.716154 containerd[1525]: time="2025-12-12T17:37:57.715870876Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:37:57.718205 containerd[1525]: time="2025-12-12T17:37:57.718165025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:37:57.718737 containerd[1525]: time="2025-12-12T17:37:57.718696607Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.25590956s" Dec 12 17:37:57.718737 containerd[1525]: time="2025-12-12T17:37:57.718734603Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Dec 12 17:37:57.719207 containerd[1525]: time="2025-12-12T17:37:57.719180725Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 12 17:37:58.195449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3287159652.mount: Deactivated successfully. 
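[annotation] Each pull above lands in containerd's k8s.io namespace (registered with NRI earlier in this log) through the overlayfs snapshotter selected at startup. Two standard ways to inspect the result — crictl takes its endpoint from /etc/crictl.yaml or a --runtime-endpoint flag:

    ctr -n k8s.io images ls -q    # image references in containerd's k8s.io namespace
    crictl images                 # the same view through the CRI API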
Dec 12 17:37:59.133562 containerd[1525]: time="2025-12-12T17:37:59.133488208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:37:59.134172 containerd[1525]: time="2025-12-12T17:37:59.134124832Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408" Dec 12 17:37:59.135349 containerd[1525]: time="2025-12-12T17:37:59.135318486Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:37:59.138148 containerd[1525]: time="2025-12-12T17:37:59.138098579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:37:59.140272 containerd[1525]: time="2025-12-12T17:37:59.140145606Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.420932046s" Dec 12 17:37:59.140272 containerd[1525]: time="2025-12-12T17:37:59.140184167Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Dec 12 17:37:59.140790 containerd[1525]: time="2025-12-12T17:37:59.140756774Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 12 17:37:59.641491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount627841755.mount: Deactivated successfully. 
Dec 12 17:37:59.647634 containerd[1525]: time="2025-12-12T17:37:59.647116189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:37:59.649950 containerd[1525]: time="2025-12-12T17:37:59.649903931Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711" Dec 12 17:37:59.650688 containerd[1525]: time="2025-12-12T17:37:59.650649188Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:37:59.652872 containerd[1525]: time="2025-12-12T17:37:59.652826997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:37:59.653417 containerd[1525]: time="2025-12-12T17:37:59.653379924Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 512.588811ms" Dec 12 17:37:59.653467 containerd[1525]: time="2025-12-12T17:37:59.653415060Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Dec 12 17:37:59.654017 containerd[1525]: time="2025-12-12T17:37:59.653981811Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 12 17:38:00.115483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1374073243.mount: Deactivated successfully. Dec 12 17:38:02.274819 containerd[1525]: time="2025-12-12T17:38:02.274774757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:02.277514 containerd[1525]: time="2025-12-12T17:38:02.276163099Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062989" Dec 12 17:38:02.277604 containerd[1525]: time="2025-12-12T17:38:02.277419604Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:02.281070 containerd[1525]: time="2025-12-12T17:38:02.281014519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:02.282767 containerd[1525]: time="2025-12-12T17:38:02.282727529Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 2.628712764s" Dec 12 17:38:02.282767 containerd[1525]: time="2025-12-12T17:38:02.282763470Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Dec 12 17:38:07.112907 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
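[annotation] With etcd done, the seven images fetched since 17:37:53 — kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd — are exactly the control-plane set kubeadm pre-pulls, suggesting install.sh wraps something like the commands below (the version flag is an inference from the tags above, not read from the script):

    kubeadm config images list --kubernetes-version v1.34.3    # print the set without pulling
    kubeadm config images pull --kubernetes-version v1.34.3    # pre-pull via the CRI socket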
Dec 12 17:38:07.114443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:38:07.279514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:38:07.293570 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:38:07.405147 kubelet[2211]: E1212 17:38:07.405019 2211 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:38:07.407989 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:38:07.408121 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:38:07.408444 systemd[1]: kubelet.service: Consumed 141ms CPU time, 107.4M memory peak. Dec 12 17:38:08.777024 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:38:08.777176 systemd[1]: kubelet.service: Consumed 141ms CPU time, 107.4M memory peak. Dec 12 17:38:08.779084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:38:08.803875 systemd[1]: Reload requested from client PID 2228 ('systemctl') (unit session-7.scope)... Dec 12 17:38:08.803894 systemd[1]: Reloading... Dec 12 17:38:08.883300 zram_generator::config[2273]: No configuration found. Dec 12 17:38:09.078895 systemd[1]: Reloading finished in 274 ms. Dec 12 17:38:09.143950 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 17:38:09.144039 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 17:38:09.144310 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:38:09.144372 systemd[1]: kubelet.service: Consumed 93ms CPU time, 95.2M memory peak. Dec 12 17:38:09.145906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:38:09.293935 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:38:09.307601 (kubelet)[2315]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:38:09.345318 kubelet[2315]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:38:09.345318 kubelet[2315]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 12 17:38:09.345318 kubelet[2315]: I1212 17:38:09.345070 2315 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:38:10.747708 kubelet[2315]: I1212 17:38:10.747651 2315 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 12 17:38:10.747708 kubelet[2315]: I1212 17:38:10.747686 2315 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:38:10.748851 kubelet[2315]: I1212 17:38:10.748819 2315 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 12 17:38:10.748851 kubelet[2315]: I1212 17:38:10.748839 2315 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 17:38:10.749107 kubelet[2315]: I1212 17:38:10.749077 2315 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:38:10.844489 kubelet[2315]: E1212 17:38:10.844454 2315 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.93:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 17:38:10.845377 kubelet[2315]: I1212 17:38:10.845355 2315 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:38:10.850002 kubelet[2315]: I1212 17:38:10.849096 2315 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:38:10.854721 kubelet[2315]: I1212 17:38:10.854692 2315 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 12 17:38:10.854935 kubelet[2315]: I1212 17:38:10.854909 2315 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:38:10.855099 kubelet[2315]: I1212 17:38:10.854935 2315 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:38:10.855192 kubelet[2315]: I1212 17:38:10.855103 2315 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:38:10.855192 kubelet[2315]: I1212 17:38:10.855111 2315 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 17:38:10.855240 kubelet[2315]: I1212 17:38:10.855228 2315 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 12 17:38:10.857876 kubelet[2315]: I1212 17:38:10.857858 2315 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:38:10.859131 kubelet[2315]: I1212 17:38:10.859112 2315 kubelet.go:475] "Attempting to sync node with API server" Dec 12 17:38:10.859181 kubelet[2315]: I1212 17:38:10.859139 2315 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:38:10.859249 kubelet[2315]: I1212 17:38:10.859166 2315 kubelet.go:387] "Adding apiserver pod source" Dec 12 17:38:10.860451 kubelet[2315]: I1212 17:38:10.860345 2315 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:38:10.860540 kubelet[2315]: E1212 17:38:10.860508 2315 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 17:38:10.860943 kubelet[2315]: E1212 17:38:10.860912 2315 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: 
connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 17:38:10.861829 kubelet[2315]: I1212 17:38:10.861629 2315 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 12 17:38:10.862268 kubelet[2315]: I1212 17:38:10.862218 2315 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:38:10.862268 kubelet[2315]: I1212 17:38:10.862249 2315 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 12 17:38:10.862359 kubelet[2315]: W1212 17:38:10.862299 2315 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 17:38:10.864681 kubelet[2315]: I1212 17:38:10.864650 2315 server.go:1262] "Started kubelet" Dec 12 17:38:10.864895 kubelet[2315]: I1212 17:38:10.864866 2315 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:38:10.865058 kubelet[2315]: I1212 17:38:10.865012 2315 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:38:10.865105 kubelet[2315]: I1212 17:38:10.865068 2315 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 12 17:38:10.865368 kubelet[2315]: I1212 17:38:10.865350 2315 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:38:10.866492 kubelet[2315]: I1212 17:38:10.866063 2315 server.go:310] "Adding debug handlers to kubelet server" Dec 12 17:38:10.867500 kubelet[2315]: I1212 17:38:10.866665 2315 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:38:10.868182 kubelet[2315]: I1212 17:38:10.866798 2315 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:38:10.868270 kubelet[2315]: I1212 17:38:10.868243 2315 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 12 17:38:10.868767 kubelet[2315]: I1212 17:38:10.868726 2315 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 12 17:38:10.868819 kubelet[2315]: I1212 17:38:10.868782 2315 reconciler.go:29] "Reconciler: start to sync state" Dec 12 17:38:10.869186 kubelet[2315]: E1212 17:38:10.869154 2315 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:38:10.869514 kubelet[2315]: I1212 17:38:10.869490 2315 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:38:10.869780 kubelet[2315]: I1212 17:38:10.869587 2315 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:38:10.869780 kubelet[2315]: E1212 17:38:10.869776 2315 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 12 17:38:10.869976 kubelet[2315]: E1212 17:38:10.869869 2315 controller.go:145] "Failed to ensure 
lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="200ms" Dec 12 17:38:10.871315 kubelet[2315]: E1212 17:38:10.870417 2315 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:38:10.871315 kubelet[2315]: I1212 17:38:10.870530 2315 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:38:10.871315 kubelet[2315]: E1212 17:38:10.869200 2315 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.93:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.93:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18808875323c5f30 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 17:38:10.864602928 +0000 UTC m=+1.553945109,LastTimestamp:2025-12-12 17:38:10.864602928 +0000 UTC m=+1.553945109,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 12 17:38:10.883229 kubelet[2315]: I1212 17:38:10.883203 2315 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:38:10.883229 kubelet[2315]: I1212 17:38:10.883222 2315 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:38:10.883229 kubelet[2315]: I1212 17:38:10.883240 2315 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:38:10.885007 kubelet[2315]: I1212 17:38:10.884923 2315 policy_none.go:49] "None policy: Start" Dec 12 17:38:10.885007 kubelet[2315]: I1212 17:38:10.884943 2315 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 12 17:38:10.885007 kubelet[2315]: I1212 17:38:10.884954 2315 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 12 17:38:10.886208 kubelet[2315]: I1212 17:38:10.886176 2315 policy_none.go:47] "Start" Dec 12 17:38:10.890740 kubelet[2315]: I1212 17:38:10.890681 2315 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 12 17:38:10.891478 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 17:38:10.891895 kubelet[2315]: I1212 17:38:10.891829 2315 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 12 17:38:10.891895 kubelet[2315]: I1212 17:38:10.891846 2315 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 12 17:38:10.891895 kubelet[2315]: I1212 17:38:10.891880 2315 kubelet.go:2427] "Starting kubelet main sync loop" Dec 12 17:38:10.891977 kubelet[2315]: E1212 17:38:10.891926 2315 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:38:10.894283 kubelet[2315]: E1212 17:38:10.894239 2315 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 17:38:10.912321 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 17:38:10.916004 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 17:38:10.930245 kubelet[2315]: E1212 17:38:10.930213 2315 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:38:10.930476 kubelet[2315]: I1212 17:38:10.930457 2315 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:38:10.930535 kubelet[2315]: I1212 17:38:10.930477 2315 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:38:10.931117 kubelet[2315]: I1212 17:38:10.931088 2315 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:38:10.931746 kubelet[2315]: E1212 17:38:10.931666 2315 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 12 17:38:10.931746 kubelet[2315]: E1212 17:38:10.931718 2315 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 12 17:38:11.003377 systemd[1]: Created slice kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice - libcontainer container kubepods-burstable-pod07ca0cbf79ad6ba9473d8e9f7715e571.slice. Dec 12 17:38:11.032403 kubelet[2315]: I1212 17:38:11.032375 2315 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Dec 12 17:38:11.032859 kubelet[2315]: E1212 17:38:11.032816 2315 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Dec 12 17:38:11.034836 kubelet[2315]: E1212 17:38:11.034640 2315 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Dec 12 17:38:11.037284 systemd[1]: Created slice kubepods-burstable-pode3f59101d0f1b1ded10f071ce92a7825.slice - libcontainer container kubepods-burstable-pode3f59101d0f1b1ded10f071ce92a7825.slice. 
Dec 12 17:38:11.046488 kubelet[2315]: E1212 17:38:11.046303 2315 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 12 17:38:11.048844 systemd[1]: Created slice kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice - libcontainer container kubepods-burstable-pod5bbfee13ce9e07281eca876a0b8067f2.slice.
Dec 12 17:38:11.050398 kubelet[2315]: E1212 17:38:11.050371 2315 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 12 17:38:11.070989 kubelet[2315]: E1212 17:38:11.070934 2315 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="400ms"
Dec 12 17:38:11.170411 kubelet[2315]: I1212 17:38:11.170365 2315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost"
Dec 12 17:38:11.170465 kubelet[2315]: I1212 17:38:11.170408 2315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3f59101d0f1b1ded10f071ce92a7825-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3f59101d0f1b1ded10f071ce92a7825\") " pod="kube-system/kube-apiserver-localhost"
Dec 12 17:38:11.170465 kubelet[2315]: I1212 17:38:11.170433 2315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:38:11.170465 kubelet[2315]: I1212 17:38:11.170448 2315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:38:11.170552 kubelet[2315]: I1212 17:38:11.170490 2315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:38:11.170552 kubelet[2315]: I1212 17:38:11.170506 2315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:38:11.170552 kubelet[2315]: I1212 17:38:11.170521 2315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:38:11.170552 kubelet[2315]: I1212 17:38:11.170536 2315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3f59101d0f1b1ded10f071ce92a7825-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3f59101d0f1b1ded10f071ce92a7825\") " pod="kube-system/kube-apiserver-localhost"
Dec 12 17:38:11.170625 kubelet[2315]: I1212 17:38:11.170556 2315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3f59101d0f1b1ded10f071ce92a7825-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e3f59101d0f1b1ded10f071ce92a7825\") " pod="kube-system/kube-apiserver-localhost"
Dec 12 17:38:11.234134 kubelet[2315]: I1212 17:38:11.234065 2315 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 12 17:38:11.234481 kubelet[2315]: E1212 17:38:11.234455 2315 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost"
Dec 12 17:38:11.338630 containerd[1525]: time="2025-12-12T17:38:11.338422555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,}"
Dec 12 17:38:11.350847 containerd[1525]: time="2025-12-12T17:38:11.350808375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e3f59101d0f1b1ded10f071ce92a7825,Namespace:kube-system,Attempt:0,}"
Dec 12 17:38:11.352765 containerd[1525]: time="2025-12-12T17:38:11.352739901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,}"
Dec 12 17:38:11.471664 kubelet[2315]: E1212 17:38:11.471620 2315 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="800ms"
Dec 12 17:38:11.636637 kubelet[2315]: I1212 17:38:11.636376 2315 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 12 17:38:11.636739 kubelet[2315]: E1212 17:38:11.636709 2315 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost"
Dec 12 17:38:11.769303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3059793983.mount: Deactivated successfully.
Dec 12 17:38:11.777474 containerd[1525]: time="2025-12-12T17:38:11.777421201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 17:38:11.782067 containerd[1525]: time="2025-12-12T17:38:11.782020926Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Dec 12 17:38:11.784285 containerd[1525]: time="2025-12-12T17:38:11.783816524Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 17:38:11.785538 containerd[1525]: time="2025-12-12T17:38:11.785501134Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 17:38:11.786383 containerd[1525]: time="2025-12-12T17:38:11.786358187Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Dec 12 17:38:11.787794 containerd[1525]: time="2025-12-12T17:38:11.787759511Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 17:38:11.788716 containerd[1525]: time="2025-12-12T17:38:11.788688464Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Dec 12 17:38:11.789577 containerd[1525]: time="2025-12-12T17:38:11.789532887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 12 17:38:11.791674 containerd[1525]: time="2025-12-12T17:38:11.791629837Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 436.1219ms"
Dec 12 17:38:11.792278 containerd[1525]: time="2025-12-12T17:38:11.792214715Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 451.833456ms"
Dec 12 17:38:11.799987 containerd[1525]: time="2025-12-12T17:38:11.799922235Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 447.450393ms"
Dec 12 17:38:11.818205 containerd[1525]: time="2025-12-12T17:38:11.818013267Z" level=info msg="connecting to shim e7e34ab48cd0d351978e18842ead1c97e6df2f958128c798ca7e5a63b24b4191" address="unix:///run/containerd/s/bd909a959dbfeec9cce1d798ed470a8f312ebb60612911ca2d1c48318425b2d0" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:38:11.828598 containerd[1525]: time="2025-12-12T17:38:11.828553210Z" level=info msg="connecting to shim df726c8329545cc4c4ee35b65cf3faafafcaf221ce581e6cddba21adecd1aaa8" address="unix:///run/containerd/s/0beee45b8a1aa61a07183dff1779728cd50bdfe8c2498cb2e092b8b04c9efc84" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:38:11.835024 containerd[1525]: time="2025-12-12T17:38:11.834911404Z" level=info msg="connecting to shim 01dfbb7332a2e718b489294cee2bf61477f7c57b8cbe96f3101842e6eb436e1c" address="unix:///run/containerd/s/74de6a4a528b5bfc9e2e9c993fa0cd0f66575dcee0bee0d6835328f1ac20dd9c" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:38:11.855475 systemd[1]: Started cri-containerd-e7e34ab48cd0d351978e18842ead1c97e6df2f958128c798ca7e5a63b24b4191.scope - libcontainer container e7e34ab48cd0d351978e18842ead1c97e6df2f958128c798ca7e5a63b24b4191.
Dec 12 17:38:11.859999 systemd[1]: Started cri-containerd-01dfbb7332a2e718b489294cee2bf61477f7c57b8cbe96f3101842e6eb436e1c.scope - libcontainer container 01dfbb7332a2e718b489294cee2bf61477f7c57b8cbe96f3101842e6eb436e1c.
Dec 12 17:38:11.861226 systemd[1]: Started cri-containerd-df726c8329545cc4c4ee35b65cf3faafafcaf221ce581e6cddba21adecd1aaa8.scope - libcontainer container df726c8329545cc4c4ee35b65cf3faafafcaf221ce581e6cddba21adecd1aaa8.
Dec 12 17:38:11.916609 kubelet[2315]: E1212 17:38:11.916456 2315 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 12 17:38:11.922223 containerd[1525]: time="2025-12-12T17:38:11.921918571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e3f59101d0f1b1ded10f071ce92a7825,Namespace:kube-system,Attempt:0,} returns sandbox id \"01dfbb7332a2e718b489294cee2bf61477f7c57b8cbe96f3101842e6eb436e1c\""
Dec 12 17:38:11.923032 containerd[1525]: time="2025-12-12T17:38:11.922932135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:07ca0cbf79ad6ba9473d8e9f7715e571,Namespace:kube-system,Attempt:0,} returns sandbox id \"df726c8329545cc4c4ee35b65cf3faafafcaf221ce581e6cddba21adecd1aaa8\""
Dec 12 17:38:11.925830 containerd[1525]: time="2025-12-12T17:38:11.925698013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5bbfee13ce9e07281eca876a0b8067f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7e34ab48cd0d351978e18842ead1c97e6df2f958128c798ca7e5a63b24b4191\""
Dec 12 17:38:11.928070 containerd[1525]: time="2025-12-12T17:38:11.928030648Z" level=info msg="CreateContainer within sandbox \"01dfbb7332a2e718b489294cee2bf61477f7c57b8cbe96f3101842e6eb436e1c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 12 17:38:11.929907 containerd[1525]: time="2025-12-12T17:38:11.929872528Z" level=info msg="CreateContainer within sandbox \"df726c8329545cc4c4ee35b65cf3faafafcaf221ce581e6cddba21adecd1aaa8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 12 17:38:11.931515 containerd[1525]: time="2025-12-12T17:38:11.931484598Z" level=info msg="CreateContainer within sandbox \"e7e34ab48cd0d351978e18842ead1c97e6df2f958128c798ca7e5a63b24b4191\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 12 17:38:11.940313 containerd[1525]: time="2025-12-12T17:38:11.940235178Z" level=info msg="Container 9bb799bce95ee54108ca1e0c494f5e0f3fcaae6e34f10ac1c6d0a8e45eec8d8e: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:38:11.943822 containerd[1525]: time="2025-12-12T17:38:11.943732132Z" level=info msg="Container a4a447ed1cf7bf9216cb9102fe9ad2976dcac300d8386d2f3fa6ec65e4b3155c: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:38:11.950068 containerd[1525]: time="2025-12-12T17:38:11.949998601Z" level=info msg="CreateContainer within sandbox \"01dfbb7332a2e718b489294cee2bf61477f7c57b8cbe96f3101842e6eb436e1c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9bb799bce95ee54108ca1e0c494f5e0f3fcaae6e34f10ac1c6d0a8e45eec8d8e\""
Dec 12 17:38:11.951020 containerd[1525]: time="2025-12-12T17:38:11.950991622Z" level=info msg="StartContainer for \"9bb799bce95ee54108ca1e0c494f5e0f3fcaae6e34f10ac1c6d0a8e45eec8d8e\""
Dec 12 17:38:11.951194 containerd[1525]: time="2025-12-12T17:38:11.951167157Z" level=info msg="Container f16164c56d0be2ecf8944fb7cd8503a8fc0ed03b4f7463a772fd2c22a82d2e17: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:38:11.952224 containerd[1525]: time="2025-12-12T17:38:11.952173847Z" level=info msg="connecting to shim 9bb799bce95ee54108ca1e0c494f5e0f3fcaae6e34f10ac1c6d0a8e45eec8d8e" address="unix:///run/containerd/s/74de6a4a528b5bfc9e2e9c993fa0cd0f66575dcee0bee0d6835328f1ac20dd9c" protocol=ttrpc version=3
Dec 12 17:38:11.956240 containerd[1525]: time="2025-12-12T17:38:11.956185696Z" level=info msg="CreateContainer within sandbox \"df726c8329545cc4c4ee35b65cf3faafafcaf221ce581e6cddba21adecd1aaa8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a4a447ed1cf7bf9216cb9102fe9ad2976dcac300d8386d2f3fa6ec65e4b3155c\""
Dec 12 17:38:11.956788 containerd[1525]: time="2025-12-12T17:38:11.956746713Z" level=info msg="StartContainer for \"a4a447ed1cf7bf9216cb9102fe9ad2976dcac300d8386d2f3fa6ec65e4b3155c\""
Dec 12 17:38:11.958086 containerd[1525]: time="2025-12-12T17:38:11.958053555Z" level=info msg="connecting to shim a4a447ed1cf7bf9216cb9102fe9ad2976dcac300d8386d2f3fa6ec65e4b3155c" address="unix:///run/containerd/s/0beee45b8a1aa61a07183dff1779728cd50bdfe8c2498cb2e092b8b04c9efc84" protocol=ttrpc version=3
Dec 12 17:38:11.960180 containerd[1525]: time="2025-12-12T17:38:11.960144150Z" level=info msg="CreateContainer within sandbox \"e7e34ab48cd0d351978e18842ead1c97e6df2f958128c798ca7e5a63b24b4191\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f16164c56d0be2ecf8944fb7cd8503a8fc0ed03b4f7463a772fd2c22a82d2e17\""
Dec 12 17:38:11.960720 containerd[1525]: time="2025-12-12T17:38:11.960677150Z" level=info msg="StartContainer for \"f16164c56d0be2ecf8944fb7cd8503a8fc0ed03b4f7463a772fd2c22a82d2e17\""
Dec 12 17:38:11.962102 containerd[1525]: time="2025-12-12T17:38:11.962072119Z" level=info msg="connecting to shim f16164c56d0be2ecf8944fb7cd8503a8fc0ed03b4f7463a772fd2c22a82d2e17" address="unix:///run/containerd/s/bd909a959dbfeec9cce1d798ed470a8f312ebb60612911ca2d1c48318425b2d0" protocol=ttrpc version=3
Dec 12 17:38:11.974517 systemd[1]: Started cri-containerd-9bb799bce95ee54108ca1e0c494f5e0f3fcaae6e34f10ac1c6d0a8e45eec8d8e.scope - libcontainer container 9bb799bce95ee54108ca1e0c494f5e0f3fcaae6e34f10ac1c6d0a8e45eec8d8e.
Dec 12 17:38:11.981943 kubelet[2315]: E1212 17:38:11.981903 2315 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 12 17:38:11.995505 systemd[1]: Started cri-containerd-f16164c56d0be2ecf8944fb7cd8503a8fc0ed03b4f7463a772fd2c22a82d2e17.scope - libcontainer container f16164c56d0be2ecf8944fb7cd8503a8fc0ed03b4f7463a772fd2c22a82d2e17.
Dec 12 17:38:11.998793 systemd[1]: Started cri-containerd-a4a447ed1cf7bf9216cb9102fe9ad2976dcac300d8386d2f3fa6ec65e4b3155c.scope - libcontainer container a4a447ed1cf7bf9216cb9102fe9ad2976dcac300d8386d2f3fa6ec65e4b3155c.
Dec 12 17:38:12.036247 containerd[1525]: time="2025-12-12T17:38:12.036126933Z" level=info msg="StartContainer for \"9bb799bce95ee54108ca1e0c494f5e0f3fcaae6e34f10ac1c6d0a8e45eec8d8e\" returns successfully"
Dec 12 17:38:12.063556 containerd[1525]: time="2025-12-12T17:38:12.063512483Z" level=info msg="StartContainer for \"f16164c56d0be2ecf8944fb7cd8503a8fc0ed03b4f7463a772fd2c22a82d2e17\" returns successfully"
Dec 12 17:38:12.071049 kubelet[2315]: E1212 17:38:12.070991 2315 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 12 17:38:12.079540 containerd[1525]: time="2025-12-12T17:38:12.079422637Z" level=info msg="StartContainer for \"a4a447ed1cf7bf9216cb9102fe9ad2976dcac300d8386d2f3fa6ec65e4b3155c\" returns successfully"
Dec 12 17:38:12.438562 kubelet[2315]: I1212 17:38:12.438531 2315 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 12 17:38:12.905682 kubelet[2315]: E1212 17:38:12.905126 2315 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 12 17:38:12.906089 kubelet[2315]: E1212 17:38:12.905591 2315 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 12 17:38:12.909837 kubelet[2315]: E1212 17:38:12.909810 2315 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Dec 12 17:38:13.426969 kubelet[2315]: E1212 17:38:13.426825 2315 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Dec 12 17:38:13.465577 kubelet[2315]: E1212 17:38:13.465411 2315 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18808875323c5f30 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 17:38:10.864602928 +0000 UTC m=+1.553945109,LastTimestamp:2025-12-12 17:38:10.864602928 +0000 UTC m=+1.553945109,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 12 17:38:13.501636 kubelet[2315]: I1212 17:38:13.501592 2315 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Dec 12 17:38:13.501636 kubelet[2315]: E1212 17:38:13.501632 2315 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Dec 12 17:38:13.518989 kubelet[2315]: E1212 17:38:13.518878 2315 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.188088753294e1d1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-12-12 17:38:10.870403537 +0000 UTC m=+1.559745718,LastTimestamp:2025-12-12 17:38:10.870403537 +0000 UTC m=+1.559745718,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Dec 12 17:38:13.576835 kubelet[2315]: I1212 17:38:13.576801 2315 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Dec 12 17:38:13.581350 kubelet[2315]: E1212 17:38:13.581314 2315 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Dec 12 17:38:13.581350 kubelet[2315]: I1212 17:38:13.581340 2315 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:38:13.583904 kubelet[2315]: E1212 17:38:13.583727 2315 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:38:13.583904 kubelet[2315]: I1212 17:38:13.583753 2315 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Dec 12 17:38:13.585216 kubelet[2315]: E1212 17:38:13.585187 2315 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Dec 12 17:38:13.861939 kubelet[2315]: I1212 17:38:13.861818 2315 apiserver.go:52] "Watching apiserver"
Dec 12 17:38:13.869086 kubelet[2315]: I1212 17:38:13.869050 2315 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 12 17:38:13.910995 kubelet[2315]: I1212 17:38:13.910774 2315 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Dec 12 17:38:13.910995 kubelet[2315]: I1212 17:38:13.910889 2315 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Dec 12 17:38:13.912887 kubelet[2315]: E1212 17:38:13.912854 2315 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Dec 12 17:38:13.913245 kubelet[2315]: E1212 17:38:13.913016 2315 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Dec 12 17:38:14.912564 kubelet[2315]: I1212 17:38:14.912538 2315 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Dec 12 17:38:15.747184 systemd[1]: Reload requested from client PID 2605 ('systemctl') (unit session-7.scope)...
Dec 12 17:38:15.747201 systemd[1]: Reloading...
Dec 12 17:38:15.830371 zram_generator::config[2648]: No configuration found.
Dec 12 17:38:16.033172 systemd[1]: Reloading finished in 285 ms.
Dec 12 17:38:16.056037 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:38:16.067398 systemd[1]: kubelet.service: Deactivated successfully.
Dec 12 17:38:16.067641 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:38:16.067701 systemd[1]: kubelet.service: Consumed 1.846s CPU time, 123.4M memory peak.
Dec 12 17:38:16.070410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 12 17:38:16.231760 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 12 17:38:16.235864 (kubelet)[2690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 12 17:38:16.288869 kubelet[2690]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 12 17:38:16.288869 kubelet[2690]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 12 17:38:16.288869 kubelet[2690]: I1212 17:38:16.288636 2690 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 12 17:38:16.299296 kubelet[2690]: I1212 17:38:16.297523 2690 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Dec 12 17:38:16.299296 kubelet[2690]: I1212 17:38:16.297569 2690 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 12 17:38:16.299296 kubelet[2690]: I1212 17:38:16.297608 2690 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Dec 12 17:38:16.299296 kubelet[2690]: I1212 17:38:16.297614 2690 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Dec 12 17:38:16.299296 kubelet[2690]: I1212 17:38:16.297827 2690 server.go:956] "Client rotation is on, will bootstrap in background"
Dec 12 17:38:16.299842 kubelet[2690]: I1212 17:38:16.299818 2690 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Dec 12 17:38:16.302924 kubelet[2690]: I1212 17:38:16.302895 2690 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 12 17:38:16.307233 kubelet[2690]: I1212 17:38:16.307212 2690 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 12 17:38:16.310267 kubelet[2690]: I1212 17:38:16.310223 2690 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Dec 12 17:38:16.310479 kubelet[2690]: I1212 17:38:16.310452 2690 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 12 17:38:16.310632 kubelet[2690]: I1212 17:38:16.310481 2690 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 12 17:38:16.310710 kubelet[2690]: I1212 17:38:16.310635 2690 topology_manager.go:138] "Creating topology manager with none policy"
Dec 12 17:38:16.310710 kubelet[2690]: I1212 17:38:16.310644 2690 container_manager_linux.go:306] "Creating device plugin manager"
Dec 12 17:38:16.310710 kubelet[2690]: I1212 17:38:16.310686 2690 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Dec 12 17:38:16.311542 kubelet[2690]: I1212 17:38:16.311525 2690 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 17:38:16.311678 kubelet[2690]: I1212 17:38:16.311669 2690 kubelet.go:475] "Attempting to sync node with API server"
Dec 12 17:38:16.311713 kubelet[2690]: I1212 17:38:16.311684 2690 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 12 17:38:16.311713 kubelet[2690]: I1212 17:38:16.311708 2690 kubelet.go:387] "Adding apiserver pod source"
Dec 12 17:38:16.311764 kubelet[2690]: I1212 17:38:16.311723 2690 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 12 17:38:16.312835 kubelet[2690]: I1212 17:38:16.312699 2690 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Dec 12 17:38:16.317457 kubelet[2690]: I1212 17:38:16.317419 2690 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Dec 12 17:38:16.317524 kubelet[2690]: I1212 17:38:16.317465 2690 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Dec 12 17:38:16.320272 kubelet[2690]: I1212 17:38:16.320208 2690 server.go:1262] "Started kubelet"
Dec 12 17:38:16.324276 kubelet[2690]: I1212 17:38:16.324203 2690 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 12 17:38:16.330720 kubelet[2690]: I1212 17:38:16.330459 2690 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Dec 12 17:38:16.330720 kubelet[2690]: I1212 17:38:16.330607 2690 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 12 17:38:16.330720 kubelet[2690]: I1212 17:38:16.330677 2690 server_v1.go:49] "podresources" method="list" useActivePods=true
Dec 12 17:38:16.331251 kubelet[2690]: I1212 17:38:16.330731 2690 volume_manager.go:313] "Starting Kubelet Volume Manager"
Dec 12 17:38:16.331251 kubelet[2690]: I1212 17:38:16.330874 2690 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 12 17:38:16.331251 kubelet[2690]: I1212 17:38:16.330911 2690 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 12 17:38:16.331251 kubelet[2690]: E1212 17:38:16.331236 2690 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 12 17:38:16.332399 kubelet[2690]: I1212 17:38:16.332333 2690 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 12 17:38:16.332575 kubelet[2690]: I1212 17:38:16.332558 2690 reconciler.go:29] "Reconciler: start to sync state"
Dec 12 17:38:16.335274 kubelet[2690]: I1212 17:38:16.335232 2690 factory.go:223] Registration of the systemd container factory successfully
Dec 12 17:38:16.335375 kubelet[2690]: I1212 17:38:16.335355 2690 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 12 17:38:16.336007 kubelet[2690]: I1212 17:38:16.335983 2690 server.go:310] "Adding debug handlers to kubelet server"
Dec 12 17:38:16.339249 kubelet[2690]: I1212 17:38:16.338429 2690 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Dec 12 17:38:16.341821 kubelet[2690]: I1212 17:38:16.341628 2690 factory.go:223] Registration of the containerd container factory successfully
Dec 12 17:38:16.351206 kubelet[2690]: E1212 17:38:16.350218 2690 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 12 17:38:16.351682 kubelet[2690]: I1212 17:38:16.351639 2690 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Dec 12 17:38:16.351682 kubelet[2690]: I1212 17:38:16.351667 2690 status_manager.go:244] "Starting to sync pod status with apiserver"
Dec 12 17:38:16.351778 kubelet[2690]: I1212 17:38:16.351691 2690 kubelet.go:2427] "Starting kubelet main sync loop"
Dec 12 17:38:16.351778 kubelet[2690]: E1212 17:38:16.351738 2690 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 12 17:38:16.384975 kubelet[2690]: I1212 17:38:16.384946 2690 cpu_manager.go:221] "Starting CPU manager" policy="none"
Dec 12 17:38:16.384975 kubelet[2690]: I1212 17:38:16.384966 2690 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Dec 12 17:38:16.384975 kubelet[2690]: I1212 17:38:16.384989 2690 state_mem.go:36] "Initialized new in-memory state store"
Dec 12 17:38:16.385190 kubelet[2690]: I1212 17:38:16.385142 2690 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 12 17:38:16.385190 kubelet[2690]: I1212 17:38:16.385153 2690 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 12 17:38:16.385190 kubelet[2690]: I1212 17:38:16.385170 2690 policy_none.go:49] "None policy: Start"
Dec 12 17:38:16.385190 kubelet[2690]: I1212 17:38:16.385190 2690 memory_manager.go:187] "Starting memorymanager" policy="None"
Dec 12 17:38:16.385297 kubelet[2690]: I1212 17:38:16.385201 2690 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Dec 12 17:38:16.386331 kubelet[2690]: I1212 17:38:16.386306 2690 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Dec 12 17:38:16.386331 kubelet[2690]: I1212 17:38:16.386329 2690 policy_none.go:47] "Start"
Dec 12 17:38:16.391038 kubelet[2690]: E1212 17:38:16.390851 2690 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 12 17:38:16.391038 kubelet[2690]: I1212 17:38:16.391045 2690 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 12 17:38:16.391207 kubelet[2690]: I1212 17:38:16.391057 2690 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 12 17:38:16.391444 kubelet[2690]: I1212 17:38:16.391351 2690 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 12 17:38:16.392396 kubelet[2690]: E1212 17:38:16.392369 2690 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Dec 12 17:38:16.452847 kubelet[2690]: I1212 17:38:16.452795 2690 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Dec 12 17:38:16.452847 kubelet[2690]: I1212 17:38:16.452817 2690 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Dec 12 17:38:16.453078 kubelet[2690]: I1212 17:38:16.453047 2690 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:38:16.461247 kubelet[2690]: E1212 17:38:16.461192 2690 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Dec 12 17:38:16.492598 kubelet[2690]: I1212 17:38:16.492567 2690 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Dec 12 17:38:16.500341 kubelet[2690]: I1212 17:38:16.499982 2690 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Dec 12 17:38:16.500341 kubelet[2690]: I1212 17:38:16.500092 2690 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Dec 12 17:38:16.533181 kubelet[2690]: I1212 17:38:16.533142 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3f59101d0f1b1ded10f071ce92a7825-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3f59101d0f1b1ded10f071ce92a7825\") " pod="kube-system/kube-apiserver-localhost"
Dec 12 17:38:16.533462 kubelet[2690]: I1212 17:38:16.533441 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3f59101d0f1b1ded10f071ce92a7825-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e3f59101d0f1b1ded10f071ce92a7825\") " pod="kube-system/kube-apiserver-localhost"
Dec 12 17:38:16.533545 kubelet[2690]: I1212 17:38:16.533528 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:38:16.533615 kubelet[2690]: I1212 17:38:16.533603 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:38:16.533732 kubelet[2690]: I1212 17:38:16.533695 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:38:16.533769 kubelet[2690]: I1212 17:38:16.533733 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07ca0cbf79ad6ba9473d8e9f7715e571-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"07ca0cbf79ad6ba9473d8e9f7715e571\") " pod="kube-system/kube-scheduler-localhost"
Dec 12 17:38:16.533769 kubelet[2690]: I1212 17:38:16.533756 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3f59101d0f1b1ded10f071ce92a7825-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e3f59101d0f1b1ded10f071ce92a7825\") " pod="kube-system/kube-apiserver-localhost"
Dec 12 17:38:16.533816 kubelet[2690]: I1212 17:38:16.533769 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:38:16.533816 kubelet[2690]: I1212 17:38:16.533787 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbfee13ce9e07281eca876a0b8067f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5bbfee13ce9e07281eca876a0b8067f2\") " pod="kube-system/kube-controller-manager-localhost"
Dec 12 17:38:17.312406 kubelet[2690]: I1212 17:38:17.312361 2690 apiserver.go:52] "Watching apiserver"
Dec 12 17:38:17.331085 kubelet[2690]: I1212 17:38:17.331041 2690 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 12 17:38:17.372714 kubelet[2690]: I1212 17:38:17.372654 2690 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Dec 12 17:38:17.373157 kubelet[2690]: I1212 17:38:17.373136 2690 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Dec 12 17:38:17.380366 kubelet[2690]: E1212 17:38:17.380302 2690 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Dec 12 17:38:17.380366 kubelet[2690]: E1212 17:38:17.380326 2690 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Dec 12 17:38:17.398483 kubelet[2690]: I1212 17:38:17.398423 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.398404372 podStartE2EDuration="3.398404372s" podCreationTimestamp="2025-12-12 17:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:38:17.390830324 +0000 UTC m=+1.151348925" watchObservedRunningTime="2025-12-12 17:38:17.398404372 +0000 UTC m=+1.158923013"
Dec 12 17:38:17.399052 kubelet[2690]: I1212 17:38:17.398547 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.398542703 podStartE2EDuration="1.398542703s" podCreationTimestamp="2025-12-12 17:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:38:17.39850974 +0000 UTC m=+1.159028381" watchObservedRunningTime="2025-12-12 17:38:17.398542703 +0000 UTC m=+1.159061304"
Dec 12 17:38:17.406916 kubelet[2690]: I1212 17:38:17.406857 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.406841485 podStartE2EDuration="1.406841485s" podCreationTimestamp="2025-12-12 17:38:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:38:17.406611948 +0000 UTC m=+1.167130549" watchObservedRunningTime="2025-12-12 17:38:17.406841485 +0000 UTC m=+1.167360127"
Dec 12 17:38:21.035852 kubelet[2690]: I1212 17:38:21.035796 2690 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 12 17:38:21.036633 containerd[1525]: time="2025-12-12T17:38:21.036586799Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 12 17:38:21.037798 kubelet[2690]: I1212 17:38:21.036790 2690 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 12 17:38:22.217519 systemd[1]: Created slice kubepods-besteffort-pod7431985c_16c0_4544_81d9_ad6d7564e6af.slice - libcontainer container kubepods-besteffort-pod7431985c_16c0_4544_81d9_ad6d7564e6af.slice.
Dec 12 17:38:22.265725 kubelet[2690]: I1212 17:38:22.265548 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7431985c-16c0-4544-81d9-ad6d7564e6af-kube-proxy\") pod \"kube-proxy-h6mph\" (UID: \"7431985c-16c0-4544-81d9-ad6d7564e6af\") " pod="kube-system/kube-proxy-h6mph"
Dec 12 17:38:22.265725 kubelet[2690]: I1212 17:38:22.265641 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7431985c-16c0-4544-81d9-ad6d7564e6af-xtables-lock\") pod \"kube-proxy-h6mph\" (UID: \"7431985c-16c0-4544-81d9-ad6d7564e6af\") " pod="kube-system/kube-proxy-h6mph"
Dec 12 17:38:22.265725 kubelet[2690]: I1212 17:38:22.265658 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7431985c-16c0-4544-81d9-ad6d7564e6af-lib-modules\") pod \"kube-proxy-h6mph\" (UID: \"7431985c-16c0-4544-81d9-ad6d7564e6af\") " pod="kube-system/kube-proxy-h6mph"
Dec 12 17:38:22.265725 kubelet[2690]: I1212 17:38:22.265673 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbhqh\" (UniqueName: \"kubernetes.io/projected/7431985c-16c0-4544-81d9-ad6d7564e6af-kube-api-access-qbhqh\") pod \"kube-proxy-h6mph\" (UID: \"7431985c-16c0-4544-81d9-ad6d7564e6af\") " pod="kube-system/kube-proxy-h6mph"
Dec 12 17:38:22.291939 systemd[1]: Created slice kubepods-besteffort-pod97ba0c6c_a7cf_4263_98de_a316932e0b8e.slice - libcontainer container kubepods-besteffort-pod97ba0c6c_a7cf_4263_98de_a316932e0b8e.slice.
Dec 12 17:38:22.366766 kubelet[2690]: I1212 17:38:22.366699 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvd8r\" (UniqueName: \"kubernetes.io/projected/97ba0c6c-a7cf-4263-98de-a316932e0b8e-kube-api-access-lvd8r\") pod \"tigera-operator-65cdcdfd6d-mr8jn\" (UID: \"97ba0c6c-a7cf-4263-98de-a316932e0b8e\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-mr8jn"
Dec 12 17:38:22.366902 kubelet[2690]: I1212 17:38:22.366793 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/97ba0c6c-a7cf-4263-98de-a316932e0b8e-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-mr8jn\" (UID: \"97ba0c6c-a7cf-4263-98de-a316932e0b8e\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-mr8jn"
Dec 12 17:38:22.530317 containerd[1525]: time="2025-12-12T17:38:22.529874419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h6mph,Uid:7431985c-16c0-4544-81d9-ad6d7564e6af,Namespace:kube-system,Attempt:0,}"
Dec 12 17:38:22.547402 containerd[1525]: time="2025-12-12T17:38:22.547356570Z" level=info msg="connecting to shim 02b50288641df1ddc353a1f1a99459918270c822f7317826db948d2a2262e519" address="unix:///run/containerd/s/f811c009ede37cc88ae5e56ad7334c53cd560e7edcd7cd5776f92c4daef7299b" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:38:22.571531 systemd[1]: Started cri-containerd-02b50288641df1ddc353a1f1a99459918270c822f7317826db948d2a2262e519.scope - libcontainer container 02b50288641df1ddc353a1f1a99459918270c822f7317826db948d2a2262e519.
Dec 12 17:38:22.600434 containerd[1525]: time="2025-12-12T17:38:22.600376733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-mr8jn,Uid:97ba0c6c-a7cf-4263-98de-a316932e0b8e,Namespace:tigera-operator,Attempt:0,}"
Dec 12 17:38:22.600926 containerd[1525]: time="2025-12-12T17:38:22.600882482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h6mph,Uid:7431985c-16c0-4544-81d9-ad6d7564e6af,Namespace:kube-system,Attempt:0,} returns sandbox id \"02b50288641df1ddc353a1f1a99459918270c822f7317826db948d2a2262e519\""
Dec 12 17:38:22.608521 containerd[1525]: time="2025-12-12T17:38:22.608482192Z" level=info msg="CreateContainer within sandbox \"02b50288641df1ddc353a1f1a99459918270c822f7317826db948d2a2262e519\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 12 17:38:22.622453 containerd[1525]: time="2025-12-12T17:38:22.622413421Z" level=info msg="connecting to shim 9dfe0d3171497b4ab96681b37acb5d5f62b32bb1931f1402753066de4199223c" address="unix:///run/containerd/s/9d70dcc928d3571518d880b476498407b1259851410f40cbb8e52ffefb67f0d1" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:38:22.622992 containerd[1525]: time="2025-12-12T17:38:22.622956172Z" level=info msg="Container d05af7c1f835ee12cd5f1749d312648ed687bad4a8b9d6e75f8c42fb130b42ba: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:38:22.633365 containerd[1525]: time="2025-12-12T17:38:22.633320799Z" level=info msg="CreateContainer within sandbox \"02b50288641df1ddc353a1f1a99459918270c822f7317826db948d2a2262e519\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d05af7c1f835ee12cd5f1749d312648ed687bad4a8b9d6e75f8c42fb130b42ba\""
Dec 12 17:38:22.634050 containerd[1525]: time="2025-12-12T17:38:22.633979117Z" level=info msg="StartContainer for \"d05af7c1f835ee12cd5f1749d312648ed687bad4a8b9d6e75f8c42fb130b42ba\""
Dec 12 17:38:22.635723 containerd[1525]: time="2025-12-12T17:38:22.635642811Z" level=info msg="connecting to shim d05af7c1f835ee12cd5f1749d312648ed687bad4a8b9d6e75f8c42fb130b42ba" address="unix:///run/containerd/s/f811c009ede37cc88ae5e56ad7334c53cd560e7edcd7cd5776f92c4daef7299b" protocol=ttrpc version=3
Dec 12 17:38:22.648426 systemd[1]: Started cri-containerd-9dfe0d3171497b4ab96681b37acb5d5f62b32bb1931f1402753066de4199223c.scope - libcontainer container 9dfe0d3171497b4ab96681b37acb5d5f62b32bb1931f1402753066de4199223c.
Dec 12 17:38:22.651437 systemd[1]: Started cri-containerd-d05af7c1f835ee12cd5f1749d312648ed687bad4a8b9d6e75f8c42fb130b42ba.scope - libcontainer container d05af7c1f835ee12cd5f1749d312648ed687bad4a8b9d6e75f8c42fb130b42ba.
Dec 12 17:38:22.683780 containerd[1525]: time="2025-12-12T17:38:22.683729615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-mr8jn,Uid:97ba0c6c-a7cf-4263-98de-a316932e0b8e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9dfe0d3171497b4ab96681b37acb5d5f62b32bb1931f1402753066de4199223c\""
Dec 12 17:38:22.685609 containerd[1525]: time="2025-12-12T17:38:22.685576760Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Dec 12 17:38:22.709082 containerd[1525]: time="2025-12-12T17:38:22.709035288Z" level=info msg="StartContainer for \"d05af7c1f835ee12cd5f1749d312648ed687bad4a8b9d6e75f8c42fb130b42ba\" returns successfully"
Dec 12 17:38:24.025283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1126038038.mount: Deactivated successfully.
Dec 12 17:38:24.338627 containerd[1525]: time="2025-12-12T17:38:24.338508123Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:38:24.339119 containerd[1525]: time="2025-12-12T17:38:24.339070831Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004"
Dec 12 17:38:24.340390 containerd[1525]: time="2025-12-12T17:38:24.340361417Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:38:24.342755 containerd[1525]: time="2025-12-12T17:38:24.342722577Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 12 17:38:24.348317 containerd[1525]: time="2025-12-12T17:38:24.348250098Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.662641296s"
Dec 12 17:38:24.348317 containerd[1525]: time="2025-12-12T17:38:24.348316861Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\""
Dec 12 17:38:24.352298 containerd[1525]: time="2025-12-12T17:38:24.352233700Z" level=info msg="CreateContainer within sandbox \"9dfe0d3171497b4ab96681b37acb5d5f62b32bb1931f1402753066de4199223c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 12 17:38:24.364402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4060972735.mount: Deactivated successfully.
Dec 12 17:38:24.376951 containerd[1525]: time="2025-12-12T17:38:24.376897473Z" level=info msg="Container 062e62971bb76dcbeebaa328b9cbbc62b923f21b4a5e454e0562fce511ec31a1: CDI devices from CRI Config.CDIDevices: []"
Dec 12 17:38:24.383723 containerd[1525]: time="2025-12-12T17:38:24.383677217Z" level=info msg="CreateContainer within sandbox \"9dfe0d3171497b4ab96681b37acb5d5f62b32bb1931f1402753066de4199223c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"062e62971bb76dcbeebaa328b9cbbc62b923f21b4a5e454e0562fce511ec31a1\""
Dec 12 17:38:24.384430 containerd[1525]: time="2025-12-12T17:38:24.384401534Z" level=info msg="StartContainer for \"062e62971bb76dcbeebaa328b9cbbc62b923f21b4a5e454e0562fce511ec31a1\""
Dec 12 17:38:24.385407 containerd[1525]: time="2025-12-12T17:38:24.385242817Z" level=info msg="connecting to shim 062e62971bb76dcbeebaa328b9cbbc62b923f21b4a5e454e0562fce511ec31a1" address="unix:///run/containerd/s/9d70dcc928d3571518d880b476498407b1259851410f40cbb8e52ffefb67f0d1" protocol=ttrpc version=3
Dec 12 17:38:24.421605 systemd[1]: Started cri-containerd-062e62971bb76dcbeebaa328b9cbbc62b923f21b4a5e454e0562fce511ec31a1.scope - libcontainer container 062e62971bb76dcbeebaa328b9cbbc62b923f21b4a5e454e0562fce511ec31a1.
Dec 12 17:38:24.470516 containerd[1525]: time="2025-12-12T17:38:24.470468746Z" level=info msg="StartContainer for \"062e62971bb76dcbeebaa328b9cbbc62b923f21b4a5e454e0562fce511ec31a1\" returns successfully"
Dec 12 17:38:25.412339 kubelet[2690]: I1212 17:38:25.412107 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h6mph" podStartSLOduration=3.412084447 podStartE2EDuration="3.412084447s" podCreationTimestamp="2025-12-12 17:38:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:38:23.40472728 +0000 UTC m=+7.165245921" watchObservedRunningTime="2025-12-12 17:38:25.412084447 +0000 UTC m=+9.172603088"
Dec 12 17:38:29.055460 kubelet[2690]: I1212 17:38:29.055381 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-mr8jn" podStartSLOduration=5.391261989 podStartE2EDuration="7.055366242s" podCreationTimestamp="2025-12-12 17:38:22 +0000 UTC" firstStartedPulling="2025-12-12 17:38:22.685140655 +0000 UTC m=+6.445659296" lastFinishedPulling="2025-12-12 17:38:24.349244948 +0000 UTC m=+8.109763549" observedRunningTime="2025-12-12 17:38:25.414331515 +0000 UTC m=+9.174850156" watchObservedRunningTime="2025-12-12 17:38:29.055366242 +0000 UTC m=+12.815884883"
Dec 12 17:38:29.491439 update_engine[1513]: I20251212 17:38:29.491349 1513 update_attempter.cc:509] Updating boot flags...
Dec 12 17:38:29.783597 sudo[1744]: pam_unix(sudo:session): session closed for user root
Dec 12 17:38:29.789385 sshd[1743]: Connection closed by 10.0.0.1 port 45500
Dec 12 17:38:29.788456 sshd-session[1740]: pam_unix(sshd:session): session closed for user core
Dec 12 17:38:29.791502 systemd[1]: session-7.scope: Deactivated successfully.
Dec 12 17:38:29.791732 systemd[1]: session-7.scope: Consumed 8.445s CPU time, 220M memory peak.
Dec 12 17:38:29.792751 systemd[1]: sshd@6-10.0.0.93:22-10.0.0.1:45500.service: Deactivated successfully.
Dec 12 17:38:29.795243 systemd-logind[1509]: Session 7 logged out. Waiting for processes to exit.
Dec 12 17:38:29.797308 systemd-logind[1509]: Removed session 7.
Dec 12 17:38:38.190987 systemd[1]: Created slice kubepods-besteffort-pod6b1742e1_c3c5_4912_9796_685b2d0ea31f.slice - libcontainer container kubepods-besteffort-pod6b1742e1_c3c5_4912_9796_685b2d0ea31f.slice.
Dec 12 17:38:38.266483 kubelet[2690]: I1212 17:38:38.266384 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9jkz\" (UniqueName: \"kubernetes.io/projected/6b1742e1-c3c5-4912-9796-685b2d0ea31f-kube-api-access-z9jkz\") pod \"calico-typha-5fdb9df5d6-mfbp5\" (UID: \"6b1742e1-c3c5-4912-9796-685b2d0ea31f\") " pod="calico-system/calico-typha-5fdb9df5d6-mfbp5"
Dec 12 17:38:38.266483 kubelet[2690]: I1212 17:38:38.266432 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b1742e1-c3c5-4912-9796-685b2d0ea31f-tigera-ca-bundle\") pod \"calico-typha-5fdb9df5d6-mfbp5\" (UID: \"6b1742e1-c3c5-4912-9796-685b2d0ea31f\") " pod="calico-system/calico-typha-5fdb9df5d6-mfbp5"
Dec 12 17:38:38.266483 kubelet[2690]: I1212 17:38:38.266457 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6b1742e1-c3c5-4912-9796-685b2d0ea31f-typha-certs\") pod \"calico-typha-5fdb9df5d6-mfbp5\" (UID: \"6b1742e1-c3c5-4912-9796-685b2d0ea31f\") " pod="calico-system/calico-typha-5fdb9df5d6-mfbp5"
Dec 12 17:38:38.396314 systemd[1]: Created slice kubepods-besteffort-pod6fe09166_8841_41e3_8d99_37143f29c1dd.slice - libcontainer container kubepods-besteffort-pod6fe09166_8841_41e3_8d99_37143f29c1dd.slice.
Dec 12 17:38:38.469434 kubelet[2690]: I1212 17:38:38.469304 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6fe09166-8841-41e3-8d99-37143f29c1dd-node-certs\") pod \"calico-node-ph66z\" (UID: \"6fe09166-8841-41e3-8d99-37143f29c1dd\") " pod="calico-system/calico-node-ph66z"
Dec 12 17:38:38.469434 kubelet[2690]: I1212 17:38:38.469347 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fe09166-8841-41e3-8d99-37143f29c1dd-tigera-ca-bundle\") pod \"calico-node-ph66z\" (UID: \"6fe09166-8841-41e3-8d99-37143f29c1dd\") " pod="calico-system/calico-node-ph66z"
Dec 12 17:38:38.469434 kubelet[2690]: I1212 17:38:38.469365 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h4mk\" (UniqueName: \"kubernetes.io/projected/6fe09166-8841-41e3-8d99-37143f29c1dd-kube-api-access-5h4mk\") pod \"calico-node-ph66z\" (UID: \"6fe09166-8841-41e3-8d99-37143f29c1dd\") " pod="calico-system/calico-node-ph66z"
Dec 12 17:38:38.469434 kubelet[2690]: I1212 17:38:38.469384 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6fe09166-8841-41e3-8d99-37143f29c1dd-flexvol-driver-host\") pod \"calico-node-ph66z\" (UID: \"6fe09166-8841-41e3-8d99-37143f29c1dd\") " pod="calico-system/calico-node-ph66z"
Dec 12 17:38:38.469434 kubelet[2690]: I1212 17:38:38.469401 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fe09166-8841-41e3-8d99-37143f29c1dd-xtables-lock\") pod \"calico-node-ph66z\" (UID: \"6fe09166-8841-41e3-8d99-37143f29c1dd\") " pod="calico-system/calico-node-ph66z"
Dec 12 17:38:38.469639 kubelet[2690]: I1212 17:38:38.469414 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6fe09166-8841-41e3-8d99-37143f29c1dd-cni-net-dir\") pod \"calico-node-ph66z\" (UID: \"6fe09166-8841-41e3-8d99-37143f29c1dd\") " pod="calico-system/calico-node-ph66z"
Dec 12 17:38:38.469639 kubelet[2690]: I1212 17:38:38.469428 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fe09166-8841-41e3-8d99-37143f29c1dd-lib-modules\") pod \"calico-node-ph66z\" (UID: \"6fe09166-8841-41e3-8d99-37143f29c1dd\") " pod="calico-system/calico-node-ph66z"
Dec 12 17:38:38.469639 kubelet[2690]: I1212 17:38:38.469442 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6fe09166-8841-41e3-8d99-37143f29c1dd-policysync\") pod \"calico-node-ph66z\" (UID: \"6fe09166-8841-41e3-8d99-37143f29c1dd\") " pod="calico-system/calico-node-ph66z"
Dec 12 17:38:38.469639 kubelet[2690]: I1212 17:38:38.469459 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6fe09166-8841-41e3-8d99-37143f29c1dd-cni-bin-dir\") pod \"calico-node-ph66z\" (UID: \"6fe09166-8841-41e3-8d99-37143f29c1dd\") " pod="calico-system/calico-node-ph66z"
Dec 12 17:38:38.469639 kubelet[2690]: I1212 17:38:38.469472 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6fe09166-8841-41e3-8d99-37143f29c1dd-cni-log-dir\") pod \"calico-node-ph66z\" (UID: \"6fe09166-8841-41e3-8d99-37143f29c1dd\") " pod="calico-system/calico-node-ph66z"
Dec 12 17:38:38.469753 kubelet[2690]: I1212 17:38:38.469485 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6fe09166-8841-41e3-8d99-37143f29c1dd-var-run-calico\") pod \"calico-node-ph66z\" (UID: \"6fe09166-8841-41e3-8d99-37143f29c1dd\") " pod="calico-system/calico-node-ph66z"
Dec 12 17:38:38.469753 kubelet[2690]: I1212 17:38:38.469500 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6fe09166-8841-41e3-8d99-37143f29c1dd-var-lib-calico\") pod \"calico-node-ph66z\" (UID: \"6fe09166-8841-41e3-8d99-37143f29c1dd\") " pod="calico-system/calico-node-ph66z"
Dec 12 17:38:38.498918 containerd[1525]: time="2025-12-12T17:38:38.498849050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fdb9df5d6-mfbp5,Uid:6b1742e1-c3c5-4912-9796-685b2d0ea31f,Namespace:calico-system,Attempt:0,}"
Dec 12 17:38:38.541072 containerd[1525]: time="2025-12-12T17:38:38.541020847Z" level=info msg="connecting to shim fbbdcc8e25a9e2303ef5bfb64e574416dbef0de91ae5a5e408ff9be74e17aa86" address="unix:///run/containerd/s/6b67b49195ba4ee6944b408079b8ea72e92590d9bf2e63405b2c7462250a4af3" namespace=k8s.io protocol=ttrpc version=3
Dec 12 17:38:38.581231 kubelet[2690]: E1212 17:38:38.577885 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clxcs" podUID="41d383b7-79ce-4986-93ed-9df24d00cb6a"
Dec 12 17:38:38.586285 kubelet[2690]: E1212 17:38:38.585228 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 17:38:38.586880 kubelet[2690]: W1212 17:38:38.586414 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 17:38:38.586880 kubelet[2690]: E1212 17:38:38.586455 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 17:38:38.588530 kubelet[2690]: E1212 17:38:38.587333 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 17:38:38.588752 kubelet[2690]: W1212 17:38:38.588678 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 17:38:38.588752 kubelet[2690]: E1212 17:38:38.588708 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 17:38:38.596469 systemd[1]: Started cri-containerd-fbbdcc8e25a9e2303ef5bfb64e574416dbef0de91ae5a5e408ff9be74e17aa86.scope - libcontainer container fbbdcc8e25a9e2303ef5bfb64e574416dbef0de91ae5a5e408ff9be74e17aa86.
Dec 12 17:38:38.600311 kubelet[2690]: E1212 17:38:38.600285 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 17:38:38.600311 kubelet[2690]: W1212 17:38:38.600304 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 17:38:38.600415 kubelet[2690]: E1212 17:38:38.600322 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 17:38:38.670913 kubelet[2690]: E1212 17:38:38.670767 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 17:38:38.670913 kubelet[2690]: W1212 17:38:38.670791 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 17:38:38.670913 kubelet[2690]: E1212 17:38:38.670813 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 17:38:38.671329 kubelet[2690]: E1212 17:38:38.671313 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 17:38:38.671418 kubelet[2690]: W1212 17:38:38.671374 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 17:38:38.671491 kubelet[2690]: E1212 17:38:38.671479 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 17:38:38.671912 kubelet[2690]: E1212 17:38:38.671895 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 17:38:38.672133 kubelet[2690]: W1212 17:38:38.672017 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 17:38:38.672133 kubelet[2690]: E1212 17:38:38.672037 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 17:38:38.672383 kubelet[2690]: E1212 17:38:38.672370 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 17:38:38.672490 kubelet[2690]: W1212 17:38:38.672478 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 17:38:38.672647 kubelet[2690]: E1212 17:38:38.672537 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 17:38:38.672947 kubelet[2690]: E1212 17:38:38.672934 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 17:38:38.673038 kubelet[2690]: W1212 17:38:38.673026 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 17:38:38.673166 kubelet[2690]: E1212 17:38:38.673091 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 12 17:38:38.673445 kubelet[2690]: E1212 17:38:38.673433 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 12 17:38:38.673604 kubelet[2690]: W1212 17:38:38.673513 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 12 17:38:38.673604 kubelet[2690]: E1212 17:38:38.673529 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Dec 12 17:38:38.673802 kubelet[2690]: E1212 17:38:38.673790 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.673946 kubelet[2690]: W1212 17:38:38.673852 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.673946 kubelet[2690]: E1212 17:38:38.673866 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.674161 kubelet[2690]: E1212 17:38:38.674150 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.674337 kubelet[2690]: W1212 17:38:38.674205 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.674337 kubelet[2690]: E1212 17:38:38.674220 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.674647 kubelet[2690]: E1212 17:38:38.674536 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.674647 kubelet[2690]: W1212 17:38:38.674550 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.674647 kubelet[2690]: E1212 17:38:38.674561 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.674915 kubelet[2690]: E1212 17:38:38.674900 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.674974 kubelet[2690]: W1212 17:38:38.674963 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.675023 kubelet[2690]: E1212 17:38:38.675013 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.675444 kubelet[2690]: E1212 17:38:38.675327 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.675444 kubelet[2690]: W1212 17:38:38.675343 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.675444 kubelet[2690]: E1212 17:38:38.675354 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:38:38.675700 kubelet[2690]: E1212 17:38:38.675674 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.675780 kubelet[2690]: W1212 17:38:38.675766 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.675836 kubelet[2690]: E1212 17:38:38.675824 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.676083 kubelet[2690]: E1212 17:38:38.676068 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.676285 kubelet[2690]: W1212 17:38:38.676149 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.676285 kubelet[2690]: E1212 17:38:38.676166 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.676431 kubelet[2690]: E1212 17:38:38.676419 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.676490 kubelet[2690]: W1212 17:38:38.676479 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.676541 kubelet[2690]: E1212 17:38:38.676529 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.677479 kubelet[2690]: E1212 17:38:38.677444 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.677957 kubelet[2690]: W1212 17:38:38.677671 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.677957 kubelet[2690]: E1212 17:38:38.677690 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.678109 kubelet[2690]: E1212 17:38:38.678094 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.678274 kubelet[2690]: W1212 17:38:38.678172 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.678274 kubelet[2690]: E1212 17:38:38.678190 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:38:38.680809 kubelet[2690]: E1212 17:38:38.680679 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.680809 kubelet[2690]: W1212 17:38:38.680697 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.680809 kubelet[2690]: E1212 17:38:38.680710 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.681138 kubelet[2690]: E1212 17:38:38.681121 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.681479 kubelet[2690]: W1212 17:38:38.681363 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.681479 kubelet[2690]: E1212 17:38:38.681386 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.682566 kubelet[2690]: E1212 17:38:38.682196 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.682566 kubelet[2690]: W1212 17:38:38.682213 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.682566 kubelet[2690]: E1212 17:38:38.682226 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.683176 kubelet[2690]: E1212 17:38:38.683053 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.683176 kubelet[2690]: W1212 17:38:38.683068 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.683176 kubelet[2690]: E1212 17:38:38.683080 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.684214 kubelet[2690]: E1212 17:38:38.684082 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.684214 kubelet[2690]: W1212 17:38:38.684096 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.684214 kubelet[2690]: E1212 17:38:38.684107 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:38:38.684214 kubelet[2690]: I1212 17:38:38.684129 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/41d383b7-79ce-4986-93ed-9df24d00cb6a-socket-dir\") pod \"csi-node-driver-clxcs\" (UID: \"41d383b7-79ce-4986-93ed-9df24d00cb6a\") " pod="calico-system/csi-node-driver-clxcs" Dec 12 17:38:38.684906 kubelet[2690]: E1212 17:38:38.684786 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.685041 kubelet[2690]: W1212 17:38:38.685017 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.685080 kubelet[2690]: E1212 17:38:38.685047 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.685110 kubelet[2690]: I1212 17:38:38.685077 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41d383b7-79ce-4986-93ed-9df24d00cb6a-kubelet-dir\") pod \"csi-node-driver-clxcs\" (UID: \"41d383b7-79ce-4986-93ed-9df24d00cb6a\") " pod="calico-system/csi-node-driver-clxcs" Dec 12 17:38:38.685301 kubelet[2690]: E1212 17:38:38.685282 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.685301 kubelet[2690]: W1212 17:38:38.685300 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.685367 kubelet[2690]: E1212 17:38:38.685312 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.685367 kubelet[2690]: I1212 17:38:38.685332 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/41d383b7-79ce-4986-93ed-9df24d00cb6a-registration-dir\") pod \"csi-node-driver-clxcs\" (UID: \"41d383b7-79ce-4986-93ed-9df24d00cb6a\") " pod="calico-system/csi-node-driver-clxcs" Dec 12 17:38:38.685542 kubelet[2690]: E1212 17:38:38.685524 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.685580 kubelet[2690]: W1212 17:38:38.685542 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.685580 kubelet[2690]: E1212 17:38:38.685554 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:38:38.685722 kubelet[2690]: E1212 17:38:38.685712 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.685765 kubelet[2690]: W1212 17:38:38.685722 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.685765 kubelet[2690]: E1212 17:38:38.685731 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.685959 kubelet[2690]: E1212 17:38:38.685946 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.685997 kubelet[2690]: W1212 17:38:38.685959 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.685997 kubelet[2690]: E1212 17:38:38.685969 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.686163 kubelet[2690]: E1212 17:38:38.686151 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.686195 kubelet[2690]: W1212 17:38:38.686164 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.686195 kubelet[2690]: E1212 17:38:38.686175 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.686414 kubelet[2690]: E1212 17:38:38.686400 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.686414 kubelet[2690]: W1212 17:38:38.686410 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.686624 kubelet[2690]: E1212 17:38:38.686420 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:38:38.686624 kubelet[2690]: I1212 17:38:38.686442 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/41d383b7-79ce-4986-93ed-9df24d00cb6a-varrun\") pod \"csi-node-driver-clxcs\" (UID: \"41d383b7-79ce-4986-93ed-9df24d00cb6a\") " pod="calico-system/csi-node-driver-clxcs" Dec 12 17:38:38.686762 kubelet[2690]: E1212 17:38:38.686732 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.686828 kubelet[2690]: W1212 17:38:38.686814 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.686896 kubelet[2690]: E1212 17:38:38.686885 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.687247 kubelet[2690]: E1212 17:38:38.687121 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.687247 kubelet[2690]: W1212 17:38:38.687134 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.687247 kubelet[2690]: E1212 17:38:38.687145 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.687447 kubelet[2690]: E1212 17:38:38.687432 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.687503 kubelet[2690]: W1212 17:38:38.687492 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.687570 kubelet[2690]: E1212 17:38:38.687558 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.687915 kubelet[2690]: E1212 17:38:38.687808 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.687915 kubelet[2690]: W1212 17:38:38.687821 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.687915 kubelet[2690]: E1212 17:38:38.687831 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:38:38.688071 kubelet[2690]: E1212 17:38:38.688057 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.688124 kubelet[2690]: W1212 17:38:38.688113 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.688184 kubelet[2690]: E1212 17:38:38.688172 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.688254 kubelet[2690]: I1212 17:38:38.688240 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssr6g\" (UniqueName: \"kubernetes.io/projected/41d383b7-79ce-4986-93ed-9df24d00cb6a-kube-api-access-ssr6g\") pod \"csi-node-driver-clxcs\" (UID: \"41d383b7-79ce-4986-93ed-9df24d00cb6a\") " pod="calico-system/csi-node-driver-clxcs" Dec 12 17:38:38.688698 kubelet[2690]: E1212 17:38:38.688674 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.688698 kubelet[2690]: W1212 17:38:38.688696 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.688801 kubelet[2690]: E1212 17:38:38.688782 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.688973 kubelet[2690]: E1212 17:38:38.688960 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.688973 kubelet[2690]: W1212 17:38:38.688972 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.689025 kubelet[2690]: E1212 17:38:38.688982 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:38:38.705203 containerd[1525]: time="2025-12-12T17:38:38.705146119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fdb9df5d6-mfbp5,Uid:6b1742e1-c3c5-4912-9796-685b2d0ea31f,Namespace:calico-system,Attempt:0,} returns sandbox id \"fbbdcc8e25a9e2303ef5bfb64e574416dbef0de91ae5a5e408ff9be74e17aa86\"" Dec 12 17:38:38.706532 containerd[1525]: time="2025-12-12T17:38:38.706489113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 17:38:38.723883 containerd[1525]: time="2025-12-12T17:38:38.723791715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ph66z,Uid:6fe09166-8841-41e3-8d99-37143f29c1dd,Namespace:calico-system,Attempt:0,}" Dec 12 17:38:38.766946 containerd[1525]: time="2025-12-12T17:38:38.766900896Z" level=info msg="connecting to shim effcd48876b6ecc406fe959d645e82744b733341843f78287ad3360caf4ca120" address="unix:///run/containerd/s/282cd40d7905d89fffdca861f0d8c5f2fe530b73de300de9d909146ca4683191" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:38:38.789053 kubelet[2690]: E1212 17:38:38.789016 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.789159 kubelet[2690]: W1212 17:38:38.789069 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.789159 kubelet[2690]: E1212 17:38:38.789124 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.789490 kubelet[2690]: E1212 17:38:38.789331 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.789490 kubelet[2690]: W1212 17:38:38.789345 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.789490 kubelet[2690]: E1212 17:38:38.789355 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.789585 kubelet[2690]: E1212 17:38:38.789527 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.789585 kubelet[2690]: W1212 17:38:38.789535 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.789585 kubelet[2690]: E1212 17:38:38.789544 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:38:38.789764 kubelet[2690]: E1212 17:38:38.789717 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.789764 kubelet[2690]: W1212 17:38:38.789729 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.789764 kubelet[2690]: E1212 17:38:38.789746 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.789915 kubelet[2690]: E1212 17:38:38.789903 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.789915 kubelet[2690]: W1212 17:38:38.789914 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.789982 kubelet[2690]: E1212 17:38:38.789924 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.790135 kubelet[2690]: E1212 17:38:38.790123 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.790161 kubelet[2690]: W1212 17:38:38.790134 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.790161 kubelet[2690]: E1212 17:38:38.790144 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.790290 kubelet[2690]: E1212 17:38:38.790277 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.790290 kubelet[2690]: W1212 17:38:38.790288 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.790371 kubelet[2690]: E1212 17:38:38.790297 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.790436 kubelet[2690]: E1212 17:38:38.790422 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.790436 kubelet[2690]: W1212 17:38:38.790433 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.790483 kubelet[2690]: E1212 17:38:38.790441 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:38:38.790585 kubelet[2690]: E1212 17:38:38.790573 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.790585 kubelet[2690]: W1212 17:38:38.790583 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.790649 kubelet[2690]: E1212 17:38:38.790591 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.790762 kubelet[2690]: E1212 17:38:38.790746 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.790762 kubelet[2690]: W1212 17:38:38.790758 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.790810 kubelet[2690]: E1212 17:38:38.790769 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.790976 kubelet[2690]: E1212 17:38:38.790900 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.790976 kubelet[2690]: W1212 17:38:38.790910 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.790976 kubelet[2690]: E1212 17:38:38.790917 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.791085 kubelet[2690]: E1212 17:38:38.791072 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.791085 kubelet[2690]: W1212 17:38:38.791083 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.791195 kubelet[2690]: E1212 17:38:38.791090 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.791237 kubelet[2690]: E1212 17:38:38.791224 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.791237 kubelet[2690]: W1212 17:38:38.791234 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.791300 kubelet[2690]: E1212 17:38:38.791242 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:38:38.791398 kubelet[2690]: E1212 17:38:38.791384 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.791398 kubelet[2690]: W1212 17:38:38.791397 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.791613 kubelet[2690]: E1212 17:38:38.791406 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.791715 kubelet[2690]: E1212 17:38:38.791696 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.791787 kubelet[2690]: W1212 17:38:38.791773 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.791847 kubelet[2690]: E1212 17:38:38.791835 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.792203 kubelet[2690]: E1212 17:38:38.792040 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.792203 kubelet[2690]: W1212 17:38:38.792053 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.792203 kubelet[2690]: E1212 17:38:38.792064 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.792430 kubelet[2690]: E1212 17:38:38.792415 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.792493 kubelet[2690]: W1212 17:38:38.792480 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.792543 kubelet[2690]: E1212 17:38:38.792532 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.792880 kubelet[2690]: E1212 17:38:38.792770 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.792880 kubelet[2690]: W1212 17:38:38.792785 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.792880 kubelet[2690]: E1212 17:38:38.792796 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:38:38.793170 kubelet[2690]: E1212 17:38:38.793045 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.793170 kubelet[2690]: W1212 17:38:38.793061 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.793170 kubelet[2690]: E1212 17:38:38.793071 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.793450 systemd[1]: Started cri-containerd-effcd48876b6ecc406fe959d645e82744b733341843f78287ad3360caf4ca120.scope - libcontainer container effcd48876b6ecc406fe959d645e82744b733341843f78287ad3360caf4ca120. Dec 12 17:38:38.793686 kubelet[2690]: E1212 17:38:38.793668 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.793718 kubelet[2690]: W1212 17:38:38.793685 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.793718 kubelet[2690]: E1212 17:38:38.793699 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.794150 kubelet[2690]: E1212 17:38:38.793952 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.794150 kubelet[2690]: W1212 17:38:38.793966 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.794150 kubelet[2690]: E1212 17:38:38.793977 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.794248 kubelet[2690]: E1212 17:38:38.794197 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.794248 kubelet[2690]: W1212 17:38:38.794207 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.794248 kubelet[2690]: E1212 17:38:38.794216 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:38:38.794432 kubelet[2690]: E1212 17:38:38.794417 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.794432 kubelet[2690]: W1212 17:38:38.794430 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.794483 kubelet[2690]: E1212 17:38:38.794439 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.795302 kubelet[2690]: E1212 17:38:38.794772 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.795302 kubelet[2690]: W1212 17:38:38.794789 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.795302 kubelet[2690]: E1212 17:38:38.794821 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.795302 kubelet[2690]: E1212 17:38:38.795290 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.795302 kubelet[2690]: W1212 17:38:38.795302 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.795448 kubelet[2690]: E1212 17:38:38.795313 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.813650 kubelet[2690]: E1212 17:38:38.813619 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:38.813650 kubelet[2690]: W1212 17:38:38.813639 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:38.813650 kubelet[2690]: E1212 17:38:38.813658 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:38.916028 containerd[1525]: time="2025-12-12T17:38:38.915983384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ph66z,Uid:6fe09166-8841-41e3-8d99-37143f29c1dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"effcd48876b6ecc406fe959d645e82744b733341843f78287ad3360caf4ca120\"" Dec 12 17:38:39.681929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3184090668.mount: Deactivated successfully. 
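The kubelet triple that repeats throughout this window (driver-call.go:262, driver-call.go:149, plugins.go:697) is a single failed FlexVolume probe: kubelet execs the driver binary /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and parses its stdout as JSON; the binary is not installed yet, so the exec fails, stdout stays empty, and the unmarshal fails with "unexpected end of JSON input". The pod2daemon-flexvol image pulled for calico-node just below is, per its name, the component that normally ships this uds driver, so the storm subsides once that init container has run. As an illustration only (a hypothetical stand-in, not Calico's actual driver), a minimal Go stub honoring the documented FlexVolume init handshake would look like:

    // flexvolstub prints the JSON reply kubelet expects from a FlexVolume
    // driver's init call. Hypothetical stand-in for the missing
    // nodeagent~uds/uds binary seen in the log; not Calico's real driver.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // driverStatus mirrors the documented FlexVolume reply object.
    type driverStatus struct {
    	Status       string          `json:"status"`
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
    	cmd := ""
    	if len(os.Args) > 1 {
    		cmd = os.Args[1]
    	}
    	var reply driverStatus
    	switch cmd {
    	case "init":
    		// An empty stdout here is exactly what produces the logged
    		// "unexpected end of JSON input"; kubelet wants this object.
    		reply = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
    	default:
    		reply = driverStatus{Status: "Not supported", Message: "stub handles init only"}
    	}
    	out, _ := json.Marshal(reply)
    	fmt.Println(string(out))
    	if reply.Status != "Success" {
    		os.Exit(1)
    	}
    }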
Dec 12 17:38:40.046574 containerd[1525]: time="2025-12-12T17:38:40.046328245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:40.047633 containerd[1525]: time="2025-12-12T17:38:40.047603315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Dec 12 17:38:40.048911 containerd[1525]: time="2025-12-12T17:38:40.048886985Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:40.051021 containerd[1525]: time="2025-12-12T17:38:40.050963234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:40.051478 containerd[1525]: time="2025-12-12T17:38:40.051451085Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.344931771s" Dec 12 17:38:40.051538 containerd[1525]: time="2025-12-12T17:38:40.051483846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 12 17:38:40.052774 containerd[1525]: time="2025-12-12T17:38:40.052548991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 17:38:40.066922 containerd[1525]: time="2025-12-12T17:38:40.066863447Z" level=info msg="CreateContainer within sandbox \"fbbdcc8e25a9e2303ef5bfb64e574416dbef0de91ae5a5e408ff9be74e17aa86\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 17:38:40.073270 containerd[1525]: time="2025-12-12T17:38:40.073223796Z" level=info msg="Container 1e6d4d8f596656f3cd42c740b01ac9b4bd3fcd25054ba13f450891a960f54501: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:38:40.081010 containerd[1525]: time="2025-12-12T17:38:40.080865535Z" level=info msg="CreateContainer within sandbox \"fbbdcc8e25a9e2303ef5bfb64e574416dbef0de91ae5a5e408ff9be74e17aa86\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1e6d4d8f596656f3cd42c740b01ac9b4bd3fcd25054ba13f450891a960f54501\"" Dec 12 17:38:40.081445 containerd[1525]: time="2025-12-12T17:38:40.081418188Z" level=info msg="StartContainer for \"1e6d4d8f596656f3cd42c740b01ac9b4bd3fcd25054ba13f450891a960f54501\"" Dec 12 17:38:40.083238 containerd[1525]: time="2025-12-12T17:38:40.083202310Z" level=info msg="connecting to shim 1e6d4d8f596656f3cd42c740b01ac9b4bd3fcd25054ba13f450891a960f54501" address="unix:///run/containerd/s/6b67b49195ba4ee6944b408079b8ea72e92590d9bf2e63405b2c7462250a4af3" protocol=ttrpc version=3 Dec 12 17:38:40.106484 systemd[1]: Started cri-containerd-1e6d4d8f596656f3cd42c740b01ac9b4bd3fcd25054ba13f450891a960f54501.scope - libcontainer container 1e6d4d8f596656f3cd42c740b01ac9b4bd3fcd25054ba13f450891a960f54501. 
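For the pod_startup_latency_tracker entry just below, the logged podStartSLOduration works out to the end-to-end startup time minus the image-pull window, with the pull window taken from the monotonic (m=+...) readings rather than the wall-clock timestamps; plugging in the logged values reproduces 1.12073783 to the printed precision. A quick check (constants copied from the log line; the subtract-the-pull-time formula is an inference asserted here only because the numbers agree):

    // Recomputes podStartSLOduration from the values in the tracker log line.
    package main

    import "fmt"

    func main() {
    	const (
    		e2e              = 2.46685303   // podStartE2EDuration, seconds
    		firstStartedPull = 22.466772628 // firstStartedPulling m=+ reading, seconds
    		lastFinishedPull = 23.812887828 // lastFinishedPulling m=+ reading, seconds
    	)
    	slo := e2e - (lastFinishedPull - firstStartedPull)
    	fmt.Printf("podStartSLOduration = %.8f s\n", slo) // logged: 1.12073783
    }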
Dec 12 17:38:40.152078 containerd[1525]: time="2025-12-12T17:38:40.152039205Z" level=info msg="StartContainer for \"1e6d4d8f596656f3cd42c740b01ac9b4bd3fcd25054ba13f450891a960f54501\" returns successfully" Dec 12 17:38:40.352779 kubelet[2690]: E1212 17:38:40.352633 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clxcs" podUID="41d383b7-79ce-4986-93ed-9df24d00cb6a" Dec 12 17:38:40.467171 kubelet[2690]: I1212 17:38:40.466894 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5fdb9df5d6-mfbp5" podStartSLOduration=1.12073783 podStartE2EDuration="2.46685303s" podCreationTimestamp="2025-12-12 17:38:38 +0000 UTC" firstStartedPulling="2025-12-12 17:38:38.706253987 +0000 UTC m=+22.466772628" lastFinishedPulling="2025-12-12 17:38:40.052369227 +0000 UTC m=+23.812887828" observedRunningTime="2025-12-12 17:38:40.466711547 +0000 UTC m=+24.227230228" watchObservedRunningTime="2025-12-12 17:38:40.46685303 +0000 UTC m=+24.227371751" Dec 12 17:38:40.495550 kubelet[2690]: E1212 17:38:40.495512 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.495550 kubelet[2690]: W1212 17:38:40.495538 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.495550 kubelet[2690]: E1212 17:38:40.495559 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.495739 kubelet[2690]: E1212 17:38:40.495730 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.495739 kubelet[2690]: W1212 17:38:40.495738 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.495790 kubelet[2690]: E1212 17:38:40.495746 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.495891 kubelet[2690]: E1212 17:38:40.495881 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.495933 kubelet[2690]: W1212 17:38:40.495891 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.495933 kubelet[2690]: E1212 17:38:40.495899 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:38:40.496049 kubelet[2690]: E1212 17:38:40.496040 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.496049 kubelet[2690]: W1212 17:38:40.496049 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.496104 kubelet[2690]: E1212 17:38:40.496057 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.496199 kubelet[2690]: E1212 17:38:40.496189 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.496228 kubelet[2690]: W1212 17:38:40.496199 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.496228 kubelet[2690]: E1212 17:38:40.496206 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.496359 kubelet[2690]: E1212 17:38:40.496349 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.496390 kubelet[2690]: W1212 17:38:40.496361 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.496390 kubelet[2690]: E1212 17:38:40.496369 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.496492 kubelet[2690]: E1212 17:38:40.496483 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.496492 kubelet[2690]: W1212 17:38:40.496492 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.496553 kubelet[2690]: E1212 17:38:40.496501 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.496632 kubelet[2690]: E1212 17:38:40.496622 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.496632 kubelet[2690]: W1212 17:38:40.496631 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.496686 kubelet[2690]: E1212 17:38:40.496639 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:38:40.496770 kubelet[2690]: E1212 17:38:40.496760 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.496805 kubelet[2690]: W1212 17:38:40.496770 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.496805 kubelet[2690]: E1212 17:38:40.496788 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.496921 kubelet[2690]: E1212 17:38:40.496910 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.496921 kubelet[2690]: W1212 17:38:40.496920 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.496977 kubelet[2690]: E1212 17:38:40.496928 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.497066 kubelet[2690]: E1212 17:38:40.497055 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.497066 kubelet[2690]: W1212 17:38:40.497065 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.497121 kubelet[2690]: E1212 17:38:40.497074 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.497224 kubelet[2690]: E1212 17:38:40.497214 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.497264 kubelet[2690]: W1212 17:38:40.497224 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.497264 kubelet[2690]: E1212 17:38:40.497232 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.497390 kubelet[2690]: E1212 17:38:40.497379 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.497390 kubelet[2690]: W1212 17:38:40.497390 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.497463 kubelet[2690]: E1212 17:38:40.497398 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:38:40.497538 kubelet[2690]: E1212 17:38:40.497529 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.497538 kubelet[2690]: W1212 17:38:40.497538 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.497598 kubelet[2690]: E1212 17:38:40.497546 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.497700 kubelet[2690]: E1212 17:38:40.497682 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.497700 kubelet[2690]: W1212 17:38:40.497699 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.497748 kubelet[2690]: E1212 17:38:40.497707 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.503195 kubelet[2690]: E1212 17:38:40.503106 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.503195 kubelet[2690]: W1212 17:38:40.503125 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.503195 kubelet[2690]: E1212 17:38:40.503141 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.503399 kubelet[2690]: E1212 17:38:40.503304 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.503399 kubelet[2690]: W1212 17:38:40.503314 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.503399 kubelet[2690]: E1212 17:38:40.503323 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.503493 kubelet[2690]: E1212 17:38:40.503480 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.503493 kubelet[2690]: W1212 17:38:40.503488 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.503564 kubelet[2690]: E1212 17:38:40.503496 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:38:40.503764 kubelet[2690]: E1212 17:38:40.503726 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.503764 kubelet[2690]: W1212 17:38:40.503744 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.503764 kubelet[2690]: E1212 17:38:40.503755 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.503941 kubelet[2690]: E1212 17:38:40.503903 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.503941 kubelet[2690]: W1212 17:38:40.503914 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.503941 kubelet[2690]: E1212 17:38:40.503921 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.504064 kubelet[2690]: E1212 17:38:40.504053 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.504064 kubelet[2690]: W1212 17:38:40.504061 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.504112 kubelet[2690]: E1212 17:38:40.504069 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.504231 kubelet[2690]: E1212 17:38:40.504218 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.504231 kubelet[2690]: W1212 17:38:40.504226 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.504360 kubelet[2690]: E1212 17:38:40.504233 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.504449 kubelet[2690]: E1212 17:38:40.504434 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.504474 kubelet[2690]: W1212 17:38:40.504449 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.504474 kubelet[2690]: E1212 17:38:40.504460 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:38:40.504629 kubelet[2690]: E1212 17:38:40.504601 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.504629 kubelet[2690]: W1212 17:38:40.504611 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.504629 kubelet[2690]: E1212 17:38:40.504619 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.504795 kubelet[2690]: E1212 17:38:40.504780 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.504795 kubelet[2690]: W1212 17:38:40.504792 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.504888 kubelet[2690]: E1212 17:38:40.504803 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.504942 kubelet[2690]: E1212 17:38:40.504929 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.504942 kubelet[2690]: W1212 17:38:40.504939 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.505001 kubelet[2690]: E1212 17:38:40.504947 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.505102 kubelet[2690]: E1212 17:38:40.505090 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.505102 kubelet[2690]: W1212 17:38:40.505100 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.505169 kubelet[2690]: E1212 17:38:40.505108 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.505280 kubelet[2690]: E1212 17:38:40.505268 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.505280 kubelet[2690]: W1212 17:38:40.505278 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.505337 kubelet[2690]: E1212 17:38:40.505286 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:38:40.505560 kubelet[2690]: E1212 17:38:40.505543 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.505595 kubelet[2690]: W1212 17:38:40.505560 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.505595 kubelet[2690]: E1212 17:38:40.505574 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.505736 kubelet[2690]: E1212 17:38:40.505726 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.505736 kubelet[2690]: W1212 17:38:40.505735 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.505807 kubelet[2690]: E1212 17:38:40.505744 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.505931 kubelet[2690]: E1212 17:38:40.505919 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.505931 kubelet[2690]: W1212 17:38:40.505930 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.506117 kubelet[2690]: E1212 17:38:40.505939 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.506205 kubelet[2690]: E1212 17:38:40.506189 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.506268 kubelet[2690]: W1212 17:38:40.506244 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.506342 kubelet[2690]: E1212 17:38:40.506329 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:38:40.506580 kubelet[2690]: E1212 17:38:40.506534 2690 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:38:40.506580 kubelet[2690]: W1212 17:38:40.506547 2690 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:38:40.506580 kubelet[2690]: E1212 17:38:40.506558 2690 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:38:40.946701 containerd[1525]: time="2025-12-12T17:38:40.945815105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:40.946701 containerd[1525]: time="2025-12-12T17:38:40.946549923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Dec 12 17:38:40.947346 containerd[1525]: time="2025-12-12T17:38:40.947317821Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:40.949568 containerd[1525]: time="2025-12-12T17:38:40.949517672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:40.950285 containerd[1525]: time="2025-12-12T17:38:40.950082886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 897.499333ms" Dec 12 17:38:40.950285 containerd[1525]: time="2025-12-12T17:38:40.950117286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 12 17:38:40.954124 containerd[1525]: time="2025-12-12T17:38:40.954087260Z" level=info msg="CreateContainer within sandbox \"effcd48876b6ecc406fe959d645e82744b733341843f78287ad3360caf4ca120\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 17:38:40.967287 containerd[1525]: time="2025-12-12T17:38:40.966441989Z" level=info msg="Container 644048f419a21171faf50d8647f5145096572e1f1642b9eb53c4610832649148: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:38:40.981012 containerd[1525]: time="2025-12-12T17:38:40.980966410Z" level=info msg="CreateContainer within sandbox \"effcd48876b6ecc406fe959d645e82744b733341843f78287ad3360caf4ca120\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"644048f419a21171faf50d8647f5145096572e1f1642b9eb53c4610832649148\"" Dec 12 17:38:40.981899 containerd[1525]: time="2025-12-12T17:38:40.981869751Z" level=info msg="StartContainer for \"644048f419a21171faf50d8647f5145096572e1f1642b9eb53c4610832649148\"" Dec 12 17:38:40.984199 containerd[1525]: time="2025-12-12T17:38:40.984169165Z" level=info msg="connecting to shim 644048f419a21171faf50d8647f5145096572e1f1642b9eb53c4610832649148" address="unix:///run/containerd/s/282cd40d7905d89fffdca861f0d8c5f2fe530b73de300de9d909146ca4683191" protocol=ttrpc version=3 Dec 12 17:38:41.009458 systemd[1]: Started cri-containerd-644048f419a21171faf50d8647f5145096572e1f1642b9eb53c4610832649148.scope - libcontainer container 644048f419a21171faf50d8647f5145096572e1f1642b9eb53c4610832649148. 
Dec 12 17:38:41.066723 containerd[1525]: time="2025-12-12T17:38:41.066685679Z" level=info msg="StartContainer for \"644048f419a21171faf50d8647f5145096572e1f1642b9eb53c4610832649148\" returns successfully" Dec 12 17:38:41.077233 systemd[1]: cri-containerd-644048f419a21171faf50d8647f5145096572e1f1642b9eb53c4610832649148.scope: Deactivated successfully. Dec 12 17:38:41.116532 containerd[1525]: time="2025-12-12T17:38:41.116462280Z" level=info msg="received container exit event container_id:\"644048f419a21171faf50d8647f5145096572e1f1642b9eb53c4610832649148\" id:\"644048f419a21171faf50d8647f5145096572e1f1642b9eb53c4610832649148\" pid:3401 exited_at:{seconds:1765561121 nanos:109845971}" Dec 12 17:38:41.163764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-644048f419a21171faf50d8647f5145096572e1f1642b9eb53c4610832649148-rootfs.mount: Deactivated successfully. Dec 12 17:38:41.460484 kubelet[2690]: I1212 17:38:41.460442 2690 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:38:41.464289 containerd[1525]: time="2025-12-12T17:38:41.463894382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 17:38:42.354843 kubelet[2690]: E1212 17:38:42.353556 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clxcs" podUID="41d383b7-79ce-4986-93ed-9df24d00cb6a" Dec 12 17:38:43.634374 containerd[1525]: time="2025-12-12T17:38:43.634329593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:43.635100 containerd[1525]: time="2025-12-12T17:38:43.635077129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Dec 12 17:38:43.636050 containerd[1525]: time="2025-12-12T17:38:43.636021468Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:43.640212 containerd[1525]: time="2025-12-12T17:38:43.640029312Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.176086449s" Dec 12 17:38:43.640212 containerd[1525]: time="2025-12-12T17:38:43.640085113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 12 17:38:43.641280 containerd[1525]: time="2025-12-12T17:38:43.641054773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:43.646695 containerd[1525]: time="2025-12-12T17:38:43.646649689Z" level=info msg="CreateContainer within sandbox \"effcd48876b6ecc406fe959d645e82744b733341843f78287ad3360caf4ca120\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 17:38:43.656902 containerd[1525]: time="2025-12-12T17:38:43.656843821Z" level=info msg="Container 
4885c185b8cc99f20eae02d95b3f60c7c39c0f6bf4f0b61f23ee4c8dd5a7ca11: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:38:43.664890 containerd[1525]: time="2025-12-12T17:38:43.664748786Z" level=info msg="CreateContainer within sandbox \"effcd48876b6ecc406fe959d645e82744b733341843f78287ad3360caf4ca120\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4885c185b8cc99f20eae02d95b3f60c7c39c0f6bf4f0b61f23ee4c8dd5a7ca11\"" Dec 12 17:38:43.665385 containerd[1525]: time="2025-12-12T17:38:43.665344478Z" level=info msg="StartContainer for \"4885c185b8cc99f20eae02d95b3f60c7c39c0f6bf4f0b61f23ee4c8dd5a7ca11\"" Dec 12 17:38:43.667992 containerd[1525]: time="2025-12-12T17:38:43.667712647Z" level=info msg="connecting to shim 4885c185b8cc99f20eae02d95b3f60c7c39c0f6bf4f0b61f23ee4c8dd5a7ca11" address="unix:///run/containerd/s/282cd40d7905d89fffdca861f0d8c5f2fe530b73de300de9d909146ca4683191" protocol=ttrpc version=3 Dec 12 17:38:43.692447 systemd[1]: Started cri-containerd-4885c185b8cc99f20eae02d95b3f60c7c39c0f6bf4f0b61f23ee4c8dd5a7ca11.scope - libcontainer container 4885c185b8cc99f20eae02d95b3f60c7c39c0f6bf4f0b61f23ee4c8dd5a7ca11. Dec 12 17:38:43.785209 containerd[1525]: time="2025-12-12T17:38:43.785117609Z" level=info msg="StartContainer for \"4885c185b8cc99f20eae02d95b3f60c7c39c0f6bf4f0b61f23ee4c8dd5a7ca11\" returns successfully" Dec 12 17:38:44.352946 kubelet[2690]: E1212 17:38:44.352885 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clxcs" podUID="41d383b7-79ce-4986-93ed-9df24d00cb6a" Dec 12 17:38:44.463123 systemd[1]: cri-containerd-4885c185b8cc99f20eae02d95b3f60c7c39c0f6bf4f0b61f23ee4c8dd5a7ca11.scope: Deactivated successfully. Dec 12 17:38:44.463682 systemd[1]: cri-containerd-4885c185b8cc99f20eae02d95b3f60c7c39c0f6bf4f0b61f23ee4c8dd5a7ca11.scope: Consumed 495ms CPU time, 177.3M memory peak, 2.6M read from disk, 165.9M written to disk. Dec 12 17:38:44.464601 containerd[1525]: time="2025-12-12T17:38:44.464561138Z" level=info msg="received container exit event container_id:\"4885c185b8cc99f20eae02d95b3f60c7c39c0f6bf4f0b61f23ee4c8dd5a7ca11\" id:\"4885c185b8cc99f20eae02d95b3f60c7c39c0f6bf4f0b61f23ee4c8dd5a7ca11\" pid:3462 exited_at:{seconds:1765561124 nanos:464353014}" Dec 12 17:38:44.485351 kubelet[2690]: I1212 17:38:44.485317 2690 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 12 17:38:44.496548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4885c185b8cc99f20eae02d95b3f60c7c39c0f6bf4f0b61f23ee4c8dd5a7ca11-rootfs.mount: Deactivated successfully. Dec 12 17:38:44.551422 systemd[1]: Created slice kubepods-burstable-pod865fadd1_ae3c_4183_8b3d_bbdec49a000f.slice - libcontainer container kubepods-burstable-pod865fadd1_ae3c_4183_8b3d_bbdec49a000f.slice. Dec 12 17:38:44.560961 systemd[1]: Created slice kubepods-besteffort-pod3b93b28f_eb51_435f_8d8f_9893e27d0902.slice - libcontainer container kubepods-besteffort-pod3b93b28f_eb51_435f_8d8f_9893e27d0902.slice. Dec 12 17:38:44.571372 systemd[1]: Created slice kubepods-burstable-podf5115889_44d5_4314_a9ff_4cb71017c01f.slice - libcontainer container kubepods-burstable-podf5115889_44d5_4314_a9ff_4cb71017c01f.slice. 
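The kubepods slice names in the entries above follow the kubelet's systemd cgroup-driver convention: a QoS-class segment (burstable or besteffort), then "pod" plus the pod UID with dashes escaped to underscores, since systemd reserves "-" as a hierarchy separator in unit names. A hedged sketch of that mapping, checked against the slices created here (illustrative helper, not kubelet source):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName reproduces the naming visible in the log; qos is
    // "burstable" or "besteffort" (guaranteed pods sit directly under
    // kubepods.slice, a case not exercised in this excerpt).
    func podSliceName(qos, podUID string) string {
        return "kubepods-" + qos + "-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    }

    func main() {
        fmt.Println(podSliceName("burstable", "865fadd1-ae3c-4183-8b3d-bbdec49a000f"))
        // kubepods-burstable-pod865fadd1_ae3c_4183_8b3d_bbdec49a000f.slice
    }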
Dec 12 17:38:44.578582 systemd[1]: Created slice kubepods-besteffort-pod93c57314_3f67_4215_bf04_8f4e3f7c0b74.slice - libcontainer container kubepods-besteffort-pod93c57314_3f67_4215_bf04_8f4e3f7c0b74.slice. Dec 12 17:38:44.582927 systemd[1]: Created slice kubepods-besteffort-pod3830e22e_ffb4_4a0f_a01d_f92ac3a7b160.slice - libcontainer container kubepods-besteffort-pod3830e22e_ffb4_4a0f_a01d_f92ac3a7b160.slice. Dec 12 17:38:44.591461 systemd[1]: Created slice kubepods-besteffort-pod228dc218_a1ca_4ef4_8bc7_2ef72ef3f04f.slice - libcontainer container kubepods-besteffort-pod228dc218_a1ca_4ef4_8bc7_2ef72ef3f04f.slice. Dec 12 17:38:44.597183 systemd[1]: Created slice kubepods-besteffort-pod36c49735_9da0_46fd_8634_a3bd00152ea8.slice - libcontainer container kubepods-besteffort-pod36c49735_9da0_46fd_8634_a3bd00152ea8.slice. Dec 12 17:38:44.632691 kubelet[2690]: I1212 17:38:44.632600 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-798cw\" (UniqueName: \"kubernetes.io/projected/f5115889-44d5-4314-a9ff-4cb71017c01f-kube-api-access-798cw\") pod \"coredns-66bc5c9577-nj2j6\" (UID: \"f5115889-44d5-4314-a9ff-4cb71017c01f\") " pod="kube-system/coredns-66bc5c9577-nj2j6" Dec 12 17:38:44.632691 kubelet[2690]: I1212 17:38:44.632644 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsbj5\" (UniqueName: \"kubernetes.io/projected/3b93b28f-eb51-435f-8d8f-9893e27d0902-kube-api-access-wsbj5\") pod \"calico-apiserver-c89f84888-gmbxz\" (UID: \"3b93b28f-eb51-435f-8d8f-9893e27d0902\") " pod="calico-apiserver/calico-apiserver-c89f84888-gmbxz" Dec 12 17:38:44.632691 kubelet[2690]: I1212 17:38:44.632664 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3830e22e-ffb4-4a0f-a01d-f92ac3a7b160-whisker-ca-bundle\") pod \"whisker-7f566497d6-ld628\" (UID: \"3830e22e-ffb4-4a0f-a01d-f92ac3a7b160\") " pod="calico-system/whisker-7f566497d6-ld628" Dec 12 17:38:44.632951 kubelet[2690]: I1212 17:38:44.632775 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/228dc218-a1ca-4ef4-8bc7-2ef72ef3f04f-calico-apiserver-certs\") pod \"calico-apiserver-c89f84888-48bx8\" (UID: \"228dc218-a1ca-4ef4-8bc7-2ef72ef3f04f\") " pod="calico-apiserver/calico-apiserver-c89f84888-48bx8" Dec 12 17:38:44.632951 kubelet[2690]: I1212 17:38:44.632812 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5115889-44d5-4314-a9ff-4cb71017c01f-config-volume\") pod \"coredns-66bc5c9577-nj2j6\" (UID: \"f5115889-44d5-4314-a9ff-4cb71017c01f\") " pod="kube-system/coredns-66bc5c9577-nj2j6" Dec 12 17:38:44.632951 kubelet[2690]: I1212 17:38:44.632893 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/36c49735-9da0-46fd-8634-a3bd00152ea8-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-f6g67\" (UID: \"36c49735-9da0-46fd-8634-a3bd00152ea8\") " pod="calico-system/goldmane-7c778bb748-f6g67" Dec 12 17:38:44.633035 kubelet[2690]: I1212 17:38:44.632957 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/3b93b28f-eb51-435f-8d8f-9893e27d0902-calico-apiserver-certs\") pod \"calico-apiserver-c89f84888-gmbxz\" (UID: \"3b93b28f-eb51-435f-8d8f-9893e27d0902\") " pod="calico-apiserver/calico-apiserver-c89f84888-gmbxz" Dec 12 17:38:44.633035 kubelet[2690]: I1212 17:38:44.633013 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36c49735-9da0-46fd-8634-a3bd00152ea8-config\") pod \"goldmane-7c778bb748-f6g67\" (UID: \"36c49735-9da0-46fd-8634-a3bd00152ea8\") " pod="calico-system/goldmane-7c778bb748-f6g67" Dec 12 17:38:44.633035 kubelet[2690]: I1212 17:38:44.633031 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/865fadd1-ae3c-4183-8b3d-bbdec49a000f-config-volume\") pod \"coredns-66bc5c9577-zq448\" (UID: \"865fadd1-ae3c-4183-8b3d-bbdec49a000f\") " pod="kube-system/coredns-66bc5c9577-zq448" Dec 12 17:38:44.633310 kubelet[2690]: I1212 17:38:44.633049 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48hx9\" (UniqueName: \"kubernetes.io/projected/36c49735-9da0-46fd-8634-a3bd00152ea8-kube-api-access-48hx9\") pod \"goldmane-7c778bb748-f6g67\" (UID: \"36c49735-9da0-46fd-8634-a3bd00152ea8\") " pod="calico-system/goldmane-7c778bb748-f6g67" Dec 12 17:38:44.633310 kubelet[2690]: I1212 17:38:44.633079 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68ksp\" (UniqueName: \"kubernetes.io/projected/865fadd1-ae3c-4183-8b3d-bbdec49a000f-kube-api-access-68ksp\") pod \"coredns-66bc5c9577-zq448\" (UID: \"865fadd1-ae3c-4183-8b3d-bbdec49a000f\") " pod="kube-system/coredns-66bc5c9577-zq448" Dec 12 17:38:44.633310 kubelet[2690]: I1212 17:38:44.633109 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3830e22e-ffb4-4a0f-a01d-f92ac3a7b160-whisker-backend-key-pair\") pod \"whisker-7f566497d6-ld628\" (UID: \"3830e22e-ffb4-4a0f-a01d-f92ac3a7b160\") " pod="calico-system/whisker-7f566497d6-ld628" Dec 12 17:38:44.633310 kubelet[2690]: I1212 17:38:44.633158 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9ghq\" (UniqueName: \"kubernetes.io/projected/228dc218-a1ca-4ef4-8bc7-2ef72ef3f04f-kube-api-access-q9ghq\") pod \"calico-apiserver-c89f84888-48bx8\" (UID: \"228dc218-a1ca-4ef4-8bc7-2ef72ef3f04f\") " pod="calico-apiserver/calico-apiserver-c89f84888-48bx8" Dec 12 17:38:44.633310 kubelet[2690]: I1212 17:38:44.633185 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/36c49735-9da0-46fd-8634-a3bd00152ea8-goldmane-key-pair\") pod \"goldmane-7c778bb748-f6g67\" (UID: \"36c49735-9da0-46fd-8634-a3bd00152ea8\") " pod="calico-system/goldmane-7c778bb748-f6g67" Dec 12 17:38:44.633460 kubelet[2690]: I1212 17:38:44.633234 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2l84\" (UniqueName: \"kubernetes.io/projected/93c57314-3f67-4215-bf04-8f4e3f7c0b74-kube-api-access-k2l84\") pod \"calico-kube-controllers-7f84df79cc-drm9h\" (UID: \"93c57314-3f67-4215-bf04-8f4e3f7c0b74\") " pod="calico-system/calico-kube-controllers-7f84df79cc-drm9h" 
Dec 12 17:38:44.633460 kubelet[2690]: I1212 17:38:44.633285 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6hp9\" (UniqueName: \"kubernetes.io/projected/3830e22e-ffb4-4a0f-a01d-f92ac3a7b160-kube-api-access-j6hp9\") pod \"whisker-7f566497d6-ld628\" (UID: \"3830e22e-ffb4-4a0f-a01d-f92ac3a7b160\") " pod="calico-system/whisker-7f566497d6-ld628" Dec 12 17:38:44.633460 kubelet[2690]: I1212 17:38:44.633386 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93c57314-3f67-4215-bf04-8f4e3f7c0b74-tigera-ca-bundle\") pod \"calico-kube-controllers-7f84df79cc-drm9h\" (UID: \"93c57314-3f67-4215-bf04-8f4e3f7c0b74\") " pod="calico-system/calico-kube-controllers-7f84df79cc-drm9h" Dec 12 17:38:44.860122 containerd[1525]: time="2025-12-12T17:38:44.860067376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zq448,Uid:865fadd1-ae3c-4183-8b3d-bbdec49a000f,Namespace:kube-system,Attempt:0,}" Dec 12 17:38:44.870035 containerd[1525]: time="2025-12-12T17:38:44.869980094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c89f84888-gmbxz,Uid:3b93b28f-eb51-435f-8d8f-9893e27d0902,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:38:44.888684 containerd[1525]: time="2025-12-12T17:38:44.888584627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nj2j6,Uid:f5115889-44d5-4314-a9ff-4cb71017c01f,Namespace:kube-system,Attempt:0,}" Dec 12 17:38:44.892248 containerd[1525]: time="2025-12-12T17:38:44.892212459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f84df79cc-drm9h,Uid:93c57314-3f67-4215-bf04-8f4e3f7c0b74,Namespace:calico-system,Attempt:0,}" Dec 12 17:38:44.893838 containerd[1525]: time="2025-12-12T17:38:44.893792971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f566497d6-ld628,Uid:3830e22e-ffb4-4a0f-a01d-f92ac3a7b160,Namespace:calico-system,Attempt:0,}" Dec 12 17:38:44.897018 containerd[1525]: time="2025-12-12T17:38:44.896962234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c89f84888-48bx8,Uid:228dc218-a1ca-4ef4-8bc7-2ef72ef3f04f,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:38:44.903390 containerd[1525]: time="2025-12-12T17:38:44.903239800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-f6g67,Uid:36c49735-9da0-46fd-8634-a3bd00152ea8,Namespace:calico-system,Attempt:0,}" Dec 12 17:38:44.994434 containerd[1525]: time="2025-12-12T17:38:44.994379824Z" level=error msg="Failed to destroy network for sandbox \"8ab0e05a7f8a66ce57b78d314e3fc89e04b65c31162be2711c428d67bea84fa3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:44.994833 containerd[1525]: time="2025-12-12T17:38:44.994774192Z" level=error msg="Failed to destroy network for sandbox \"75e076927c0471f8c9829c00df65d6b0ab73c476e1bfb25c6e88a257368d376a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:44.995027 containerd[1525]: time="2025-12-12T17:38:44.995004117Z" level=error msg="Failed to destroy network for sandbox 
\"73bfa9a1363c875cae40877dc370ad564ae36c25decedb1a3180a1077011deda\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:44.996010 containerd[1525]: time="2025-12-12T17:38:44.995970456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f566497d6-ld628,Uid:3830e22e-ffb4-4a0f-a01d-f92ac3a7b160,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab0e05a7f8a66ce57b78d314e3fc89e04b65c31162be2711c428d67bea84fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:44.996965 containerd[1525]: time="2025-12-12T17:38:44.996883115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nj2j6,Uid:f5115889-44d5-4314-a9ff-4cb71017c01f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"75e076927c0471f8c9829c00df65d6b0ab73c476e1bfb25c6e88a257368d376a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:44.997669 containerd[1525]: time="2025-12-12T17:38:44.997622129Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c89f84888-gmbxz,Uid:3b93b28f-eb51-435f-8d8f-9893e27d0902,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73bfa9a1363c875cae40877dc370ad564ae36c25decedb1a3180a1077011deda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:45.000878 kubelet[2690]: E1212 17:38:45.000603 2690 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73bfa9a1363c875cae40877dc370ad564ae36c25decedb1a3180a1077011deda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:45.000878 kubelet[2690]: E1212 17:38:45.000690 2690 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73bfa9a1363c875cae40877dc370ad564ae36c25decedb1a3180a1077011deda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c89f84888-gmbxz" Dec 12 17:38:45.000878 kubelet[2690]: E1212 17:38:45.000710 2690 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73bfa9a1363c875cae40877dc370ad564ae36c25decedb1a3180a1077011deda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c89f84888-gmbxz" Dec 12 17:38:45.001011 kubelet[2690]: E1212 17:38:45.000765 2690 pod_workers.go:1324] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c89f84888-gmbxz_calico-apiserver(3b93b28f-eb51-435f-8d8f-9893e27d0902)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c89f84888-gmbxz_calico-apiserver(3b93b28f-eb51-435f-8d8f-9893e27d0902)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73bfa9a1363c875cae40877dc370ad564ae36c25decedb1a3180a1077011deda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c89f84888-gmbxz" podUID="3b93b28f-eb51-435f-8d8f-9893e27d0902" Dec 12 17:38:45.001178 kubelet[2690]: E1212 17:38:45.001135 2690 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab0e05a7f8a66ce57b78d314e3fc89e04b65c31162be2711c428d67bea84fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:45.001224 kubelet[2690]: E1212 17:38:45.001195 2690 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab0e05a7f8a66ce57b78d314e3fc89e04b65c31162be2711c428d67bea84fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f566497d6-ld628" Dec 12 17:38:45.001224 kubelet[2690]: E1212 17:38:45.001212 2690 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab0e05a7f8a66ce57b78d314e3fc89e04b65c31162be2711c428d67bea84fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f566497d6-ld628" Dec 12 17:38:45.001593 kubelet[2690]: E1212 17:38:45.001300 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f566497d6-ld628_calico-system(3830e22e-ffb4-4a0f-a01d-f92ac3a7b160)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f566497d6-ld628_calico-system(3830e22e-ffb4-4a0f-a01d-f92ac3a7b160)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ab0e05a7f8a66ce57b78d314e3fc89e04b65c31162be2711c428d67bea84fa3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f566497d6-ld628" podUID="3830e22e-ffb4-4a0f-a01d-f92ac3a7b160" Dec 12 17:38:45.001657 containerd[1525]: time="2025-12-12T17:38:45.001448886Z" level=error msg="Failed to destroy network for sandbox \"81387c3c05ae804c434452a5857715f08df372269738b1dd968bce6365a626c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:45.002099 kubelet[2690]: E1212 17:38:45.001926 2690 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"75e076927c0471f8c9829c00df65d6b0ab73c476e1bfb25c6e88a257368d376a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:45.002099 kubelet[2690]: E1212 17:38:45.001981 2690 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75e076927c0471f8c9829c00df65d6b0ab73c476e1bfb25c6e88a257368d376a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nj2j6" Dec 12 17:38:45.002099 kubelet[2690]: E1212 17:38:45.001996 2690 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75e076927c0471f8c9829c00df65d6b0ab73c476e1bfb25c6e88a257368d376a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-nj2j6" Dec 12 17:38:45.002222 kubelet[2690]: E1212 17:38:45.002058 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-nj2j6_kube-system(f5115889-44d5-4314-a9ff-4cb71017c01f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-nj2j6_kube-system(f5115889-44d5-4314-a9ff-4cb71017c01f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75e076927c0471f8c9829c00df65d6b0ab73c476e1bfb25c6e88a257368d376a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-nj2j6" podUID="f5115889-44d5-4314-a9ff-4cb71017c01f" Dec 12 17:38:45.002400 containerd[1525]: time="2025-12-12T17:38:45.002370784Z" level=error msg="Failed to destroy network for sandbox \"383cf4b18d86c24c679134e9aa9dbba3924d813295403b5f2f5c6c2ca95b6e0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:45.003685 containerd[1525]: time="2025-12-12T17:38:45.003641248Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f84df79cc-drm9h,Uid:93c57314-3f67-4215-bf04-8f4e3f7c0b74,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"81387c3c05ae804c434452a5857715f08df372269738b1dd968bce6365a626c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:45.004245 kubelet[2690]: E1212 17:38:45.003951 2690 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81387c3c05ae804c434452a5857715f08df372269738b1dd968bce6365a626c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:45.004245 kubelet[2690]: E1212 17:38:45.004006 2690 kuberuntime_sandbox.go:71] "Failed 
to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81387c3c05ae804c434452a5857715f08df372269738b1dd968bce6365a626c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f84df79cc-drm9h" Dec 12 17:38:45.004245 kubelet[2690]: E1212 17:38:45.004035 2690 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81387c3c05ae804c434452a5857715f08df372269738b1dd968bce6365a626c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f84df79cc-drm9h" Dec 12 17:38:45.004429 kubelet[2690]: E1212 17:38:45.004092 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f84df79cc-drm9h_calico-system(93c57314-3f67-4215-bf04-8f4e3f7c0b74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f84df79cc-drm9h_calico-system(93c57314-3f67-4215-bf04-8f4e3f7c0b74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81387c3c05ae804c434452a5857715f08df372269738b1dd968bce6365a626c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f84df79cc-drm9h" podUID="93c57314-3f67-4215-bf04-8f4e3f7c0b74" Dec 12 17:38:45.005240 containerd[1525]: time="2025-12-12T17:38:45.005104436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zq448,Uid:865fadd1-ae3c-4183-8b3d-bbdec49a000f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"383cf4b18d86c24c679134e9aa9dbba3924d813295403b5f2f5c6c2ca95b6e0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:45.005575 kubelet[2690]: E1212 17:38:45.005544 2690 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"383cf4b18d86c24c679134e9aa9dbba3924d813295403b5f2f5c6c2ca95b6e0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:45.005626 kubelet[2690]: E1212 17:38:45.005582 2690 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"383cf4b18d86c24c679134e9aa9dbba3924d813295403b5f2f5c6c2ca95b6e0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zq448" Dec 12 17:38:45.005626 kubelet[2690]: E1212 17:38:45.005600 2690 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"383cf4b18d86c24c679134e9aa9dbba3924d813295403b5f2f5c6c2ca95b6e0f\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zq448" Dec 12 17:38:45.005761 kubelet[2690]: E1212 17:38:45.005725 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zq448_kube-system(865fadd1-ae3c-4183-8b3d-bbdec49a000f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-zq448_kube-system(865fadd1-ae3c-4183-8b3d-bbdec49a000f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"383cf4b18d86c24c679134e9aa9dbba3924d813295403b5f2f5c6c2ca95b6e0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zq448" podUID="865fadd1-ae3c-4183-8b3d-bbdec49a000f" Dec 12 17:38:45.020428 containerd[1525]: time="2025-12-12T17:38:45.020383331Z" level=error msg="Failed to destroy network for sandbox \"9ebed67bea621029f10bdf12e1f7d8b635e999b8a3f3812d469e7db7dd80ccef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:45.020859 containerd[1525]: time="2025-12-12T17:38:45.020820539Z" level=error msg="Failed to destroy network for sandbox \"c2ddc2d19677c6e2755a7ac18be115d94787c9dc0a0a22a09589659687bd151e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:45.021476 containerd[1525]: time="2025-12-12T17:38:45.021445431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c89f84888-48bx8,Uid:228dc218-a1ca-4ef4-8bc7-2ef72ef3f04f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ebed67bea621029f10bdf12e1f7d8b635e999b8a3f3812d469e7db7dd80ccef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:45.021759 kubelet[2690]: E1212 17:38:45.021724 2690 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ebed67bea621029f10bdf12e1f7d8b635e999b8a3f3812d469e7db7dd80ccef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:45.021812 kubelet[2690]: E1212 17:38:45.021779 2690 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ebed67bea621029f10bdf12e1f7d8b635e999b8a3f3812d469e7db7dd80ccef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c89f84888-48bx8" Dec 12 17:38:45.021835 kubelet[2690]: E1212 17:38:45.021800 2690 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9ebed67bea621029f10bdf12e1f7d8b635e999b8a3f3812d469e7db7dd80ccef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c89f84888-48bx8" Dec 12 17:38:45.021902 kubelet[2690]: E1212 17:38:45.021879 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c89f84888-48bx8_calico-apiserver(228dc218-a1ca-4ef4-8bc7-2ef72ef3f04f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c89f84888-48bx8_calico-apiserver(228dc218-a1ca-4ef4-8bc7-2ef72ef3f04f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ebed67bea621029f10bdf12e1f7d8b635e999b8a3f3812d469e7db7dd80ccef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c89f84888-48bx8" podUID="228dc218-a1ca-4ef4-8bc7-2ef72ef3f04f" Dec 12 17:38:45.022441 containerd[1525]: time="2025-12-12T17:38:45.022388770Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-f6g67,Uid:36c49735-9da0-46fd-8634-a3bd00152ea8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2ddc2d19677c6e2755a7ac18be115d94787c9dc0a0a22a09589659687bd151e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:45.022814 kubelet[2690]: E1212 17:38:45.022789 2690 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2ddc2d19677c6e2755a7ac18be115d94787c9dc0a0a22a09589659687bd151e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:45.022865 kubelet[2690]: E1212 17:38:45.022825 2690 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2ddc2d19677c6e2755a7ac18be115d94787c9dc0a0a22a09589659687bd151e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-f6g67" Dec 12 17:38:45.022865 kubelet[2690]: E1212 17:38:45.022846 2690 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2ddc2d19677c6e2755a7ac18be115d94787c9dc0a0a22a09589659687bd151e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-f6g67" Dec 12 17:38:45.022912 kubelet[2690]: E1212 17:38:45.022890 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-f6g67_calico-system(36c49735-9da0-46fd-8634-a3bd00152ea8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-f6g67_calico-system(36c49735-9da0-46fd-8634-a3bd00152ea8)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"c2ddc2d19677c6e2755a7ac18be115d94787c9dc0a0a22a09589659687bd151e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-f6g67" podUID="36c49735-9da0-46fd-8634-a3bd00152ea8" Dec 12 17:38:45.480384 containerd[1525]: time="2025-12-12T17:38:45.480131159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 17:38:46.368987 systemd[1]: Created slice kubepods-besteffort-pod41d383b7_79ce_4986_93ed_9df24d00cb6a.slice - libcontainer container kubepods-besteffort-pod41d383b7_79ce_4986_93ed_9df24d00cb6a.slice. Dec 12 17:38:46.410609 containerd[1525]: time="2025-12-12T17:38:46.410572866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-clxcs,Uid:41d383b7-79ce-4986-93ed-9df24d00cb6a,Namespace:calico-system,Attempt:0,}" Dec 12 17:38:46.526427 containerd[1525]: time="2025-12-12T17:38:46.526381100Z" level=error msg="Failed to destroy network for sandbox \"cb2ed776c8dbb3e6dd4c8c61f4baf79dfb6365e39fe234bd137dcd4a243cfaf0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:46.528375 containerd[1525]: time="2025-12-12T17:38:46.528338057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-clxcs,Uid:41d383b7-79ce-4986-93ed-9df24d00cb6a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb2ed776c8dbb3e6dd4c8c61f4baf79dfb6365e39fe234bd137dcd4a243cfaf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:46.528650 kubelet[2690]: E1212 17:38:46.528608 2690 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb2ed776c8dbb3e6dd4c8c61f4baf79dfb6365e39fe234bd137dcd4a243cfaf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:38:46.529188 kubelet[2690]: E1212 17:38:46.528998 2690 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb2ed776c8dbb3e6dd4c8c61f4baf79dfb6365e39fe234bd137dcd4a243cfaf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-clxcs" Dec 12 17:38:46.529188 kubelet[2690]: E1212 17:38:46.529029 2690 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb2ed776c8dbb3e6dd4c8c61f4baf79dfb6365e39fe234bd137dcd4a243cfaf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-clxcs" Dec 12 17:38:46.529188 kubelet[2690]: E1212 17:38:46.529105 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-clxcs_calico-system(41d383b7-79ce-4986-93ed-9df24d00cb6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-clxcs_calico-system(41d383b7-79ce-4986-93ed-9df24d00cb6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb2ed776c8dbb3e6dd4c8c61f4baf79dfb6365e39fe234bd137dcd4a243cfaf0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-clxcs" podUID="41d383b7-79ce-4986-93ed-9df24d00cb6a" Dec 12 17:38:46.530653 systemd[1]: run-netns-cni\x2deecf8a8a\x2d00cc\x2d1856\x2d8f79\x2de910a9124f10.mount: Deactivated successfully. Dec 12 17:38:48.506498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2119005085.mount: Deactivated successfully. Dec 12 17:38:48.819430 containerd[1525]: time="2025-12-12T17:38:48.797033393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Dec 12 17:38:48.819430 containerd[1525]: time="2025-12-12T17:38:48.801245106Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.321072147s" Dec 12 17:38:48.819430 containerd[1525]: time="2025-12-12T17:38:48.819374381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 12 17:38:48.819430 containerd[1525]: time="2025-12-12T17:38:48.818244481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:48.820566 containerd[1525]: time="2025-12-12T17:38:48.820456919Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:48.821725 containerd[1525]: time="2025-12-12T17:38:48.821679821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:38:48.835640 containerd[1525]: time="2025-12-12T17:38:48.835594902Z" level=info msg="CreateContainer within sandbox \"effcd48876b6ecc406fe959d645e82744b733341843f78287ad3360caf4ca120\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 17:38:48.844805 containerd[1525]: time="2025-12-12T17:38:48.843182314Z" level=info msg="Container b5a5dd9b93cb48bfd18017838c1b39c5751bbd7271396986e76f3633d6c03f9c: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:38:48.853379 containerd[1525]: time="2025-12-12T17:38:48.853325370Z" level=info msg="CreateContainer within sandbox \"effcd48876b6ecc406fe959d645e82744b733341843f78287ad3360caf4ca120\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b5a5dd9b93cb48bfd18017838c1b39c5751bbd7271396986e76f3633d6c03f9c\"" Dec 12 17:38:48.854300 containerd[1525]: time="2025-12-12T17:38:48.853837059Z" level=info msg="StartContainer for \"b5a5dd9b93cb48bfd18017838c1b39c5751bbd7271396986e76f3633d6c03f9c\"" Dec 12 17:38:48.855436 
containerd[1525]: time="2025-12-12T17:38:48.855412406Z" level=info msg="connecting to shim b5a5dd9b93cb48bfd18017838c1b39c5751bbd7271396986e76f3633d6c03f9c" address="unix:///run/containerd/s/282cd40d7905d89fffdca861f0d8c5f2fe530b73de300de9d909146ca4683191" protocol=ttrpc version=3 Dec 12 17:38:48.903482 systemd[1]: Started cri-containerd-b5a5dd9b93cb48bfd18017838c1b39c5751bbd7271396986e76f3633d6c03f9c.scope - libcontainer container b5a5dd9b93cb48bfd18017838c1b39c5751bbd7271396986e76f3633d6c03f9c. Dec 12 17:38:48.998599 containerd[1525]: time="2025-12-12T17:38:48.998558851Z" level=info msg="StartContainer for \"b5a5dd9b93cb48bfd18017838c1b39c5751bbd7271396986e76f3633d6c03f9c\" returns successfully" Dec 12 17:38:49.120124 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 17:38:49.120242 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 12 17:38:49.373300 kubelet[2690]: I1212 17:38:49.373186 2690 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3830e22e-ffb4-4a0f-a01d-f92ac3a7b160-whisker-backend-key-pair\") pod \"3830e22e-ffb4-4a0f-a01d-f92ac3a7b160\" (UID: \"3830e22e-ffb4-4a0f-a01d-f92ac3a7b160\") " Dec 12 17:38:49.373300 kubelet[2690]: I1212 17:38:49.373228 2690 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6hp9\" (UniqueName: \"kubernetes.io/projected/3830e22e-ffb4-4a0f-a01d-f92ac3a7b160-kube-api-access-j6hp9\") pod \"3830e22e-ffb4-4a0f-a01d-f92ac3a7b160\" (UID: \"3830e22e-ffb4-4a0f-a01d-f92ac3a7b160\") " Dec 12 17:38:49.373300 kubelet[2690]: I1212 17:38:49.373251 2690 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3830e22e-ffb4-4a0f-a01d-f92ac3a7b160-whisker-ca-bundle\") pod \"3830e22e-ffb4-4a0f-a01d-f92ac3a7b160\" (UID: \"3830e22e-ffb4-4a0f-a01d-f92ac3a7b160\") " Dec 12 17:38:49.380524 kubelet[2690]: I1212 17:38:49.380476 2690 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3830e22e-ffb4-4a0f-a01d-f92ac3a7b160-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3830e22e-ffb4-4a0f-a01d-f92ac3a7b160" (UID: "3830e22e-ffb4-4a0f-a01d-f92ac3a7b160"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:38:49.381451 kubelet[2690]: I1212 17:38:49.380676 2690 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3830e22e-ffb4-4a0f-a01d-f92ac3a7b160-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3830e22e-ffb4-4a0f-a01d-f92ac3a7b160" (UID: "3830e22e-ffb4-4a0f-a01d-f92ac3a7b160"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:38:49.382513 kubelet[2690]: I1212 17:38:49.382466 2690 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3830e22e-ffb4-4a0f-a01d-f92ac3a7b160-kube-api-access-j6hp9" (OuterVolumeSpecName: "kube-api-access-j6hp9") pod "3830e22e-ffb4-4a0f-a01d-f92ac3a7b160" (UID: "3830e22e-ffb4-4a0f-a01d-f92ac3a7b160"). InnerVolumeSpecName "kube-api-access-j6hp9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:38:49.474440 kubelet[2690]: I1212 17:38:49.474394 2690 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3830e22e-ffb4-4a0f-a01d-f92ac3a7b160-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Dec 12 17:38:49.474440 kubelet[2690]: I1212 17:38:49.474428 2690 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j6hp9\" (UniqueName: \"kubernetes.io/projected/3830e22e-ffb4-4a0f-a01d-f92ac3a7b160-kube-api-access-j6hp9\") on node \"localhost\" DevicePath \"\"" Dec 12 17:38:49.474440 kubelet[2690]: I1212 17:38:49.474439 2690 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3830e22e-ffb4-4a0f-a01d-f92ac3a7b160-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Dec 12 17:38:49.496802 systemd[1]: Removed slice kubepods-besteffort-pod3830e22e_ffb4_4a0f_a01d_f92ac3a7b160.slice - libcontainer container kubepods-besteffort-pod3830e22e_ffb4_4a0f_a01d_f92ac3a7b160.slice. Dec 12 17:38:49.508119 systemd[1]: var-lib-kubelet-pods-3830e22e\x2dffb4\x2d4a0f\x2da01d\x2df92ac3a7b160-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj6hp9.mount: Deactivated successfully. Dec 12 17:38:49.508210 systemd[1]: var-lib-kubelet-pods-3830e22e\x2dffb4\x2d4a0f\x2da01d\x2df92ac3a7b160-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 17:38:49.519658 kubelet[2690]: I1212 17:38:49.519591 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ph66z" podStartSLOduration=1.6095237070000001 podStartE2EDuration="11.512845932s" podCreationTimestamp="2025-12-12 17:38:38 +0000 UTC" firstStartedPulling="2025-12-12 17:38:38.917330738 +0000 UTC m=+22.677849379" lastFinishedPulling="2025-12-12 17:38:48.820652963 +0000 UTC m=+32.581171604" observedRunningTime="2025-12-12 17:38:49.511489629 +0000 UTC m=+33.272008270" watchObservedRunningTime="2025-12-12 17:38:49.512845932 +0000 UTC m=+33.273364573" Dec 12 17:38:49.567043 systemd[1]: Created slice kubepods-besteffort-pod82dd8051_1568_452c_81da_df375ca13b0f.slice - libcontainer container kubepods-besteffort-pod82dd8051_1568_452c_81da_df375ca13b0f.slice. 
Dec 12 17:38:49.676429 kubelet[2690]: I1212 17:38:49.675955 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/82dd8051-1568-452c-81da-df375ca13b0f-whisker-backend-key-pair\") pod \"whisker-66f8d7bb8d-b4hvj\" (UID: \"82dd8051-1568-452c-81da-df375ca13b0f\") " pod="calico-system/whisker-66f8d7bb8d-b4hvj" Dec 12 17:38:49.676429 kubelet[2690]: I1212 17:38:49.676041 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82dd8051-1568-452c-81da-df375ca13b0f-whisker-ca-bundle\") pod \"whisker-66f8d7bb8d-b4hvj\" (UID: \"82dd8051-1568-452c-81da-df375ca13b0f\") " pod="calico-system/whisker-66f8d7bb8d-b4hvj" Dec 12 17:38:49.676429 kubelet[2690]: I1212 17:38:49.676059 2690 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g92z\" (UniqueName: \"kubernetes.io/projected/82dd8051-1568-452c-81da-df375ca13b0f-kube-api-access-4g92z\") pod \"whisker-66f8d7bb8d-b4hvj\" (UID: \"82dd8051-1568-452c-81da-df375ca13b0f\") " pod="calico-system/whisker-66f8d7bb8d-b4hvj" Dec 12 17:38:49.874236 containerd[1525]: time="2025-12-12T17:38:49.874173881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66f8d7bb8d-b4hvj,Uid:82dd8051-1568-452c-81da-df375ca13b0f,Namespace:calico-system,Attempt:0,}" Dec 12 17:38:50.031094 systemd-networkd[1436]: cali58924d8e0bc: Link UP Dec 12 17:38:50.031739 systemd-networkd[1436]: cali58924d8e0bc: Gained carrier Dec 12 17:38:50.049287 containerd[1525]: 2025-12-12 17:38:49.895 [INFO][3830] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:38:50.049287 containerd[1525]: 2025-12-12 17:38:49.927 [INFO][3830] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--66f8d7bb8d--b4hvj-eth0 whisker-66f8d7bb8d- calico-system 82dd8051-1568-452c-81da-df375ca13b0f 895 0 2025-12-12 17:38:49 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66f8d7bb8d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-66f8d7bb8d-b4hvj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali58924d8e0bc [] [] }} ContainerID="b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0" Namespace="calico-system" Pod="whisker-66f8d7bb8d-b4hvj" WorkloadEndpoint="localhost-k8s-whisker--66f8d7bb8d--b4hvj-" Dec 12 17:38:50.049287 containerd[1525]: 2025-12-12 17:38:49.927 [INFO][3830] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0" Namespace="calico-system" Pod="whisker-66f8d7bb8d-b4hvj" WorkloadEndpoint="localhost-k8s-whisker--66f8d7bb8d--b4hvj-eth0" Dec 12 17:38:50.049287 containerd[1525]: 2025-12-12 17:38:49.986 [INFO][3843] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0" HandleID="k8s-pod-network.b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0" Workload="localhost-k8s-whisker--66f8d7bb8d--b4hvj-eth0" Dec 12 17:38:50.049504 containerd[1525]: 2025-12-12 17:38:49.986 [INFO][3843] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0" HandleID="k8s-pod-network.b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0" Workload="localhost-k8s-whisker--66f8d7bb8d--b4hvj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000514ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-66f8d7bb8d-b4hvj", "timestamp":"2025-12-12 17:38:49.986100321 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:38:50.049504 containerd[1525]: 2025-12-12 17:38:49.986 [INFO][3843] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:38:50.049504 containerd[1525]: 2025-12-12 17:38:49.986 [INFO][3843] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:38:50.049504 containerd[1525]: 2025-12-12 17:38:49.986 [INFO][3843] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:38:50.049504 containerd[1525]: 2025-12-12 17:38:49.997 [INFO][3843] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0" host="localhost" Dec 12 17:38:50.049504 containerd[1525]: 2025-12-12 17:38:50.002 [INFO][3843] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:38:50.049504 containerd[1525]: 2025-12-12 17:38:50.007 [INFO][3843] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:38:50.049504 containerd[1525]: 2025-12-12 17:38:50.009 [INFO][3843] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:38:50.049504 containerd[1525]: 2025-12-12 17:38:50.011 [INFO][3843] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:38:50.049504 containerd[1525]: 2025-12-12 17:38:50.011 [INFO][3843] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0" host="localhost" Dec 12 17:38:50.049744 containerd[1525]: 2025-12-12 17:38:50.012 [INFO][3843] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0 Dec 12 17:38:50.049744 containerd[1525]: 2025-12-12 17:38:50.016 [INFO][3843] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0" host="localhost" Dec 12 17:38:50.049744 containerd[1525]: 2025-12-12 17:38:50.022 [INFO][3843] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0" host="localhost" Dec 12 17:38:50.049744 containerd[1525]: 2025-12-12 17:38:50.022 [INFO][3843] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0" host="localhost" Dec 12 17:38:50.049744 containerd[1525]: 2025-12-12 17:38:50.022 [INFO][3843] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:38:50.049744 containerd[1525]: 2025-12-12 17:38:50.022 [INFO][3843] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0" HandleID="k8s-pod-network.b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0" Workload="localhost-k8s-whisker--66f8d7bb8d--b4hvj-eth0" Dec 12 17:38:50.049859 containerd[1525]: 2025-12-12 17:38:50.024 [INFO][3830] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0" Namespace="calico-system" Pod="whisker-66f8d7bb8d-b4hvj" WorkloadEndpoint="localhost-k8s-whisker--66f8d7bb8d--b4hvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--66f8d7bb8d--b4hvj-eth0", GenerateName:"whisker-66f8d7bb8d-", Namespace:"calico-system", SelfLink:"", UID:"82dd8051-1568-452c-81da-df375ca13b0f", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66f8d7bb8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-66f8d7bb8d-b4hvj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali58924d8e0bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:38:50.049859 containerd[1525]: 2025-12-12 17:38:50.024 [INFO][3830] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0" Namespace="calico-system" Pod="whisker-66f8d7bb8d-b4hvj" WorkloadEndpoint="localhost-k8s-whisker--66f8d7bb8d--b4hvj-eth0" Dec 12 17:38:50.049940 containerd[1525]: 2025-12-12 17:38:50.024 [INFO][3830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58924d8e0bc ContainerID="b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0" Namespace="calico-system" Pod="whisker-66f8d7bb8d-b4hvj" WorkloadEndpoint="localhost-k8s-whisker--66f8d7bb8d--b4hvj-eth0" Dec 12 17:38:50.049940 containerd[1525]: 2025-12-12 17:38:50.032 [INFO][3830] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0" Namespace="calico-system" Pod="whisker-66f8d7bb8d-b4hvj" WorkloadEndpoint="localhost-k8s-whisker--66f8d7bb8d--b4hvj-eth0" Dec 12 17:38:50.049983 containerd[1525]: 2025-12-12 17:38:50.033 [INFO][3830] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0" Namespace="calico-system" Pod="whisker-66f8d7bb8d-b4hvj" WorkloadEndpoint="localhost-k8s-whisker--66f8d7bb8d--b4hvj-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--66f8d7bb8d--b4hvj-eth0", GenerateName:"whisker-66f8d7bb8d-", Namespace:"calico-system", SelfLink:"", UID:"82dd8051-1568-452c-81da-df375ca13b0f", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66f8d7bb8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0", Pod:"whisker-66f8d7bb8d-b4hvj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali58924d8e0bc", MAC:"d2:e0:d2:7c:72:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:38:50.050034 containerd[1525]: 2025-12-12 17:38:50.046 [INFO][3830] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0" Namespace="calico-system" Pod="whisker-66f8d7bb8d-b4hvj" WorkloadEndpoint="localhost-k8s-whisker--66f8d7bb8d--b4hvj-eth0" Dec 12 17:38:50.192442 containerd[1525]: time="2025-12-12T17:38:50.192399085Z" level=info msg="connecting to shim b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0" address="unix:///run/containerd/s/d9af7f99a01b6405abdd4d1438a23f97c54ecc6e132466f3aad74f11e42145f2" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:38:50.225450 systemd[1]: Started cri-containerd-b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0.scope - libcontainer container b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0. 
Dec 12 17:38:50.236699 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:38:50.256624 containerd[1525]: time="2025-12-12T17:38:50.256379846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66f8d7bb8d-b4hvj,Uid:82dd8051-1568-452c-81da-df375ca13b0f,Namespace:calico-system,Attempt:0,} returns sandbox id \"b8e948e759a49ad7c8cb5b2609e7ff945970e8990e9a09090af18958f48922d0\"" Dec 12 17:38:50.258963 containerd[1525]: time="2025-12-12T17:38:50.258918447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:38:50.355805 kubelet[2690]: I1212 17:38:50.355694 2690 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3830e22e-ffb4-4a0f-a01d-f92ac3a7b160" path="/var/lib/kubelet/pods/3830e22e-ffb4-4a0f-a01d-f92ac3a7b160/volumes" Dec 12 17:38:50.458790 containerd[1525]: time="2025-12-12T17:38:50.458732138Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:38:50.459933 containerd[1525]: time="2025-12-12T17:38:50.459861836Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:38:50.459994 containerd[1525]: time="2025-12-12T17:38:50.459958518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 17:38:50.460172 kubelet[2690]: E1212 17:38:50.460135 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:38:50.460585 kubelet[2690]: E1212 17:38:50.460548 2690 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:38:50.464360 kubelet[2690]: E1212 17:38:50.464313 2690 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-66f8d7bb8d-b4hvj_calico-system(82dd8051-1568-452c-81da-df375ca13b0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:38:50.465488 containerd[1525]: time="2025-12-12T17:38:50.465451527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:38:50.471386 kubelet[2690]: I1212 17:38:50.471363 2690 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:38:50.496396 kubelet[2690]: I1212 17:38:50.496352 2690 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:38:50.681956 containerd[1525]: time="2025-12-12T17:38:50.681909249Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:38:50.683199 containerd[1525]: 
time="2025-12-12T17:38:50.683128428Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:38:50.683441 containerd[1525]: time="2025-12-12T17:38:50.683131189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 17:38:50.683576 kubelet[2690]: E1212 17:38:50.683532 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:38:50.683679 kubelet[2690]: E1212 17:38:50.683580 2690 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:38:50.683679 kubelet[2690]: E1212 17:38:50.683649 2690 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66f8d7bb8d-b4hvj_calico-system(82dd8051-1568-452c-81da-df375ca13b0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:38:50.683732 kubelet[2690]: E1212 17:38:50.683688 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f8d7bb8d-b4hvj" podUID="82dd8051-1568-452c-81da-df375ca13b0f" Dec 12 17:38:50.884405 systemd-networkd[1436]: vxlan.calico: Link UP Dec 12 17:38:50.887327 systemd-networkd[1436]: vxlan.calico: Gained carrier Dec 12 17:38:51.229437 systemd-networkd[1436]: cali58924d8e0bc: Gained IPv6LL Dec 12 17:38:51.504704 kubelet[2690]: E1212 17:38:51.504485 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f8d7bb8d-b4hvj" podUID="82dd8051-1568-452c-81da-df375ca13b0f" Dec 12 17:38:52.125429 systemd-networkd[1436]: vxlan.calico: Gained IPv6LL Dec 12 17:38:55.355195 containerd[1525]: time="2025-12-12T17:38:55.355154366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f84df79cc-drm9h,Uid:93c57314-3f67-4215-bf04-8f4e3f7c0b74,Namespace:calico-system,Attempt:0,}" Dec 12 17:38:55.474676 systemd-networkd[1436]: cali7e8f4840638: Link UP Dec 12 17:38:55.475341 systemd-networkd[1436]: cali7e8f4840638: Gained carrier Dec 12 17:38:55.490294 containerd[1525]: 2025-12-12 17:38:55.396 [INFO][4114] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7f84df79cc--drm9h-eth0 calico-kube-controllers-7f84df79cc- calico-system 93c57314-3f67-4215-bf04-8f4e3f7c0b74 832 0 2025-12-12 17:38:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f84df79cc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7f84df79cc-drm9h eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7e8f4840638 [] [] }} ContainerID="6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6" Namespace="calico-system" Pod="calico-kube-controllers-7f84df79cc-drm9h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f84df79cc--drm9h-" Dec 12 17:38:55.490294 containerd[1525]: 2025-12-12 17:38:55.396 [INFO][4114] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6" Namespace="calico-system" Pod="calico-kube-controllers-7f84df79cc-drm9h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f84df79cc--drm9h-eth0" Dec 12 17:38:55.490294 containerd[1525]: 2025-12-12 17:38:55.429 [INFO][4129] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6" HandleID="k8s-pod-network.6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6" Workload="localhost-k8s-calico--kube--controllers--7f84df79cc--drm9h-eth0" Dec 12 17:38:55.490559 containerd[1525]: 2025-12-12 17:38:55.429 [INFO][4129] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6" HandleID="k8s-pod-network.6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6" Workload="localhost-k8s-calico--kube--controllers--7f84df79cc--drm9h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001361a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7f84df79cc-drm9h", "timestamp":"2025-12-12 17:38:55.429580934 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:38:55.490559 containerd[1525]: 2025-12-12 17:38:55.429 [INFO][4129] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:38:55.490559 containerd[1525]: 2025-12-12 17:38:55.429 [INFO][4129] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:38:55.490559 containerd[1525]: 2025-12-12 17:38:55.429 [INFO][4129] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:38:55.490559 containerd[1525]: 2025-12-12 17:38:55.441 [INFO][4129] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6" host="localhost" Dec 12 17:38:55.490559 containerd[1525]: 2025-12-12 17:38:55.447 [INFO][4129] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:38:55.490559 containerd[1525]: 2025-12-12 17:38:55.452 [INFO][4129] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:38:55.490559 containerd[1525]: 2025-12-12 17:38:55.454 [INFO][4129] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:38:55.490559 containerd[1525]: 2025-12-12 17:38:55.457 [INFO][4129] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:38:55.490559 containerd[1525]: 2025-12-12 17:38:55.457 [INFO][4129] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6" host="localhost" Dec 12 17:38:55.490818 containerd[1525]: 2025-12-12 17:38:55.458 [INFO][4129] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6 Dec 12 17:38:55.490818 containerd[1525]: 2025-12-12 17:38:55.462 [INFO][4129] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6" host="localhost" Dec 12 17:38:55.490818 containerd[1525]: 2025-12-12 17:38:55.468 [INFO][4129] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6" host="localhost" Dec 12 17:38:55.490818 containerd[1525]: 2025-12-12 17:38:55.469 [INFO][4129] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6" host="localhost" Dec 12 17:38:55.490818 containerd[1525]: 2025-12-12 17:38:55.469 [INFO][4129] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:38:55.490818 containerd[1525]: 2025-12-12 17:38:55.469 [INFO][4129] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6" HandleID="k8s-pod-network.6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6" Workload="localhost-k8s-calico--kube--controllers--7f84df79cc--drm9h-eth0" Dec 12 17:38:55.490951 containerd[1525]: 2025-12-12 17:38:55.472 [INFO][4114] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6" Namespace="calico-system" Pod="calico-kube-controllers-7f84df79cc-drm9h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f84df79cc--drm9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f84df79cc--drm9h-eth0", GenerateName:"calico-kube-controllers-7f84df79cc-", Namespace:"calico-system", SelfLink:"", UID:"93c57314-3f67-4215-bf04-8f4e3f7c0b74", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f84df79cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7f84df79cc-drm9h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7e8f4840638", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:38:55.491023 containerd[1525]: 2025-12-12 17:38:55.472 [INFO][4114] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6" Namespace="calico-system" Pod="calico-kube-controllers-7f84df79cc-drm9h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f84df79cc--drm9h-eth0" Dec 12 17:38:55.491023 containerd[1525]: 2025-12-12 17:38:55.472 [INFO][4114] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e8f4840638 ContainerID="6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6" Namespace="calico-system" Pod="calico-kube-controllers-7f84df79cc-drm9h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f84df79cc--drm9h-eth0" Dec 12 17:38:55.491023 containerd[1525]: 2025-12-12 17:38:55.475 [INFO][4114] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6" Namespace="calico-system" Pod="calico-kube-controllers-7f84df79cc-drm9h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f84df79cc--drm9h-eth0" Dec 12 17:38:55.491121 containerd[1525]: 2025-12-12 17:38:55.476 [INFO][4114] cni-plugin/k8s.go 446: Added Mac,
interface name, and active container ID to endpoint ContainerID="6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6" Namespace="calico-system" Pod="calico-kube-controllers-7f84df79cc-drm9h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f84df79cc--drm9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f84df79cc--drm9h-eth0", GenerateName:"calico-kube-controllers-7f84df79cc-", Namespace:"calico-system", SelfLink:"", UID:"93c57314-3f67-4215-bf04-8f4e3f7c0b74", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f84df79cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6", Pod:"calico-kube-controllers-7f84df79cc-drm9h", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7e8f4840638", MAC:"36:f3:31:c2:14:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:38:55.491173 containerd[1525]: 2025-12-12 17:38:55.486 [INFO][4114] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6" Namespace="calico-system" Pod="calico-kube-controllers-7f84df79cc-drm9h" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f84df79cc--drm9h-eth0" Dec 12 17:38:55.518934 containerd[1525]: time="2025-12-12T17:38:55.518811590Z" level=info msg="connecting to shim 6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6" address="unix:///run/containerd/s/4b4a7e9f108fda60a69bb9d4ce196b246c7c2660541b05f98aab95d50d70c81e" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:38:55.557495 systemd[1]: Started cri-containerd-6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6.scope - libcontainer container 6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6.
Dec 12 17:38:55.569046 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:38:55.606864 containerd[1525]: time="2025-12-12T17:38:55.606748269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f84df79cc-drm9h,Uid:93c57314-3f67-4215-bf04-8f4e3f7c0b74,Namespace:calico-system,Attempt:0,} returns sandbox id \"6297dba6a1bfc7646da80bc4837a9dd0f32efd001838cc0a9d95bb732acf00e6\"" Dec 12 17:38:55.608838 containerd[1525]: time="2025-12-12T17:38:55.608808378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:38:55.794427 containerd[1525]: time="2025-12-12T17:38:55.794368871Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:38:55.795330 containerd[1525]: time="2025-12-12T17:38:55.795296124Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:38:55.795408 containerd[1525]: time="2025-12-12T17:38:55.795378565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 17:38:55.795529 kubelet[2690]: E1212 17:38:55.795495 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:38:55.795812 kubelet[2690]: E1212 17:38:55.795542 2690 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:38:55.795812 kubelet[2690]: E1212 17:38:55.795620 2690 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7f84df79cc-drm9h_calico-system(93c57314-3f67-4215-bf04-8f4e3f7c0b74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:38:55.795812 kubelet[2690]: E1212 17:38:55.795650 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f84df79cc-drm9h" podUID="93c57314-3f67-4215-bf04-8f4e3f7c0b74" Dec 12 17:38:56.006132 systemd[1]: Started sshd@7-10.0.0.93:22-10.0.0.1:47072.service - OpenSSH per-connection 
server daemon (10.0.0.1:47072). Dec 12 17:38:56.071594 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 47072 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:38:56.073056 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:38:56.077243 systemd-logind[1509]: New session 8 of user core. Dec 12 17:38:56.090433 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 17:38:56.286509 sshd[4198]: Connection closed by 10.0.0.1 port 47072 Dec 12 17:38:56.285950 sshd-session[4192]: pam_unix(sshd:session): session closed for user core Dec 12 17:38:56.289717 systemd[1]: sshd@7-10.0.0.93:22-10.0.0.1:47072.service: Deactivated successfully. Dec 12 17:38:56.291797 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 17:38:56.292914 systemd-logind[1509]: Session 8 logged out. Waiting for processes to exit. Dec 12 17:38:56.294344 systemd-logind[1509]: Removed session 8. Dec 12 17:38:56.356697 containerd[1525]: time="2025-12-12T17:38:56.356646541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-f6g67,Uid:36c49735-9da0-46fd-8634-a3bd00152ea8,Namespace:calico-system,Attempt:0,}" Dec 12 17:38:56.487044 systemd-networkd[1436]: cali6e8b3fdb799: Link UP Dec 12 17:38:56.488103 systemd-networkd[1436]: cali6e8b3fdb799: Gained carrier Dec 12 17:38:56.502545 containerd[1525]: 2025-12-12 17:38:56.412 [INFO][4219] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--f6g67-eth0 goldmane-7c778bb748- calico-system 36c49735-9da0-46fd-8634-a3bd00152ea8 833 0 2025-12-12 17:38:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-f6g67 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6e8b3fdb799 [] [] }} ContainerID="4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2" Namespace="calico-system" Pod="goldmane-7c778bb748-f6g67" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--f6g67-" Dec 12 17:38:56.502545 containerd[1525]: 2025-12-12 17:38:56.413 [INFO][4219] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2" Namespace="calico-system" Pod="goldmane-7c778bb748-f6g67" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--f6g67-eth0" Dec 12 17:38:56.502545 containerd[1525]: 2025-12-12 17:38:56.444 [INFO][4234] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2" HandleID="k8s-pod-network.4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2" Workload="localhost-k8s-goldmane--7c778bb748--f6g67-eth0" Dec 12 17:38:56.502780 containerd[1525]: 2025-12-12 17:38:56.444 [INFO][4234] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2" HandleID="k8s-pod-network.4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2" Workload="localhost-k8s-goldmane--7c778bb748--f6g67-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd000), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-f6g67", "timestamp":"2025-12-12 17:38:56.444556627 
+0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:38:56.502780 containerd[1525]: 2025-12-12 17:38:56.444 [INFO][4234] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:38:56.502780 containerd[1525]: 2025-12-12 17:38:56.444 [INFO][4234] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:38:56.502780 containerd[1525]: 2025-12-12 17:38:56.444 [INFO][4234] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:38:56.502780 containerd[1525]: 2025-12-12 17:38:56.454 [INFO][4234] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2" host="localhost" Dec 12 17:38:56.502780 containerd[1525]: 2025-12-12 17:38:56.460 [INFO][4234] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:38:56.502780 containerd[1525]: 2025-12-12 17:38:56.464 [INFO][4234] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:38:56.502780 containerd[1525]: 2025-12-12 17:38:56.467 [INFO][4234] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:38:56.502780 containerd[1525]: 2025-12-12 17:38:56.469 [INFO][4234] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:38:56.502780 containerd[1525]: 2025-12-12 17:38:56.469 [INFO][4234] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2" host="localhost" Dec 12 17:38:56.503958 containerd[1525]: 2025-12-12 17:38:56.470 [INFO][4234] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2 Dec 12 17:38:56.503958 containerd[1525]: 2025-12-12 17:38:56.474 [INFO][4234] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2" host="localhost" Dec 12 17:38:56.503958 containerd[1525]: 2025-12-12 17:38:56.482 [INFO][4234] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2" host="localhost" Dec 12 17:38:56.503958 containerd[1525]: 2025-12-12 17:38:56.482 [INFO][4234] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2" host="localhost" Dec 12 17:38:56.503958 containerd[1525]: 2025-12-12 17:38:56.482 [INFO][4234] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:38:56.503958 containerd[1525]: 2025-12-12 17:38:56.482 [INFO][4234] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2" HandleID="k8s-pod-network.4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2" Workload="localhost-k8s-goldmane--7c778bb748--f6g67-eth0" Dec 12 17:38:56.504101 containerd[1525]: 2025-12-12 17:38:56.484 [INFO][4219] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2" Namespace="calico-system" Pod="goldmane-7c778bb748-f6g67" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--f6g67-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--f6g67-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"36c49735-9da0-46fd-8634-a3bd00152ea8", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-f6g67", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6e8b3fdb799", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:38:56.504101 containerd[1525]: 2025-12-12 17:38:56.484 [INFO][4219] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2" Namespace="calico-system" Pod="goldmane-7c778bb748-f6g67" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--f6g67-eth0" Dec 12 17:38:56.504172 containerd[1525]: 2025-12-12 17:38:56.484 [INFO][4219] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e8b3fdb799 ContainerID="4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2" Namespace="calico-system" Pod="goldmane-7c778bb748-f6g67" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--f6g67-eth0" Dec 12 17:38:56.504172 containerd[1525]: 2025-12-12 17:38:56.488 [INFO][4219] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2" Namespace="calico-system" Pod="goldmane-7c778bb748-f6g67" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--f6g67-eth0" Dec 12 17:38:56.504215 containerd[1525]: 2025-12-12 17:38:56.488 [INFO][4219] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2" Namespace="calico-system" Pod="goldmane-7c778bb748-f6g67" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--f6g67-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--f6g67-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"36c49735-9da0-46fd-8634-a3bd00152ea8", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2", Pod:"goldmane-7c778bb748-f6g67", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6e8b3fdb799", MAC:"22:b7:85:d7:05:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:38:56.505367 containerd[1525]: 2025-12-12 17:38:56.498 [INFO][4219] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2" Namespace="calico-system" Pod="goldmane-7c778bb748-f6g67" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--f6g67-eth0" Dec 12 17:38:56.517350 kubelet[2690]: E1212 17:38:56.517304 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f84df79cc-drm9h" podUID="93c57314-3f67-4215-bf04-8f4e3f7c0b74" Dec 12 17:38:56.572853 containerd[1525]: time="2025-12-12T17:38:56.572731786Z" level=info msg="connecting to shim 4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2" address="unix:///run/containerd/s/93f0a8b93aa7ba1ac8653d23e47a2a256418a97cdb2856267281f3fd3a9c8fbe" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:38:56.608717 systemd[1]: Started cri-containerd-4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2.scope - libcontainer container 4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2. 
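The ImagePullBackOff errors recur throughout this log because the ghcr.io/flatcar/calico images at tag v3.30.4 resolve to 404; kubelet keeps retrying each pull on an exponential backoff rather than failing the pod permanently. A sketch of that retry cadence, assuming kubelet's commonly documented defaults (10s initial delay, doubling, capped at 5 minutes); the exact constants are a kubelet implementation detail, not something read from this system.

```go
package main

import (
	"fmt"
	"time"
)

// nextBackoff models the doubling-with-cap cadence behind
// "Back-off pulling image ...": each failed pull roughly doubles the
// wait, up to a ceiling. The 10s/5m constants are assumptions based on
// kubelet's documented defaults.
func nextBackoff(cur time.Duration) time.Duration {
	const (
		initial = 10 * time.Second
		ceiling = 5 * time.Minute
	)
	if cur == 0 {
		return initial
	}
	if next := cur * 2; next < ceiling {
		return next
	}
	return ceiling
}

func main() {
	var d time.Duration
	for i := 0; i < 7; i++ {
		d = nextBackoff(d)
		fmt.Printf("retry %d after %v\n", i+1, d) // 10s, 20s, 40s, ... 5m0s
	}
}
```

This is why the same "not found" error reappears at 17:38:56, 17:38:57, and 17:38:58 for different pods: each is an independent backoff loop.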
Dec 12 17:38:56.620446 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:38:56.660035 containerd[1525]: time="2025-12-12T17:38:56.659984983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-f6g67,Uid:36c49735-9da0-46fd-8634-a3bd00152ea8,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d7aaf40296053ec2f70fc55a1ab81a88ece0dd196da2aba70d2e5152940ced2\"" Dec 12 17:38:56.661912 containerd[1525]: time="2025-12-12T17:38:56.661869329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:38:56.866593 containerd[1525]: time="2025-12-12T17:38:56.866553938Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:38:56.869042 containerd[1525]: time="2025-12-12T17:38:56.868960131Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:38:56.869042 containerd[1525]: time="2025-12-12T17:38:56.869010052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Dec 12 17:38:56.872490 kubelet[2690]: E1212 17:38:56.872413 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:38:56.872777 kubelet[2690]: E1212 17:38:56.872500 2690 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:38:56.872777 kubelet[2690]: E1212 17:38:56.872574 2690 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-f6g67_calico-system(36c49735-9da0-46fd-8634-a3bd00152ea8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:38:56.872777 kubelet[2690]: E1212 17:38:56.872604 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f6g67" podUID="36c49735-9da0-46fd-8634-a3bd00152ea8" Dec 12 17:38:57.375606 containerd[1525]: time="2025-12-12T17:38:57.375568557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nj2j6,Uid:f5115889-44d5-4314-a9ff-4cb71017c01f,Namespace:kube-system,Attempt:0,}" Dec 12 17:38:57.483994 systemd-networkd[1436]: cali658195653b0: Link UP Dec 12 17:38:57.484516 
systemd-networkd[1436]: cali658195653b0: Gained carrier Dec 12 17:38:57.500910 containerd[1525]: 2025-12-12 17:38:57.413 [INFO][4302] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--nj2j6-eth0 coredns-66bc5c9577- kube-system f5115889-44d5-4314-a9ff-4cb71017c01f 831 0 2025-12-12 17:38:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-nj2j6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali658195653b0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f" Namespace="kube-system" Pod="coredns-66bc5c9577-nj2j6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nj2j6-" Dec 12 17:38:57.500910 containerd[1525]: 2025-12-12 17:38:57.413 [INFO][4302] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f" Namespace="kube-system" Pod="coredns-66bc5c9577-nj2j6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nj2j6-eth0" Dec 12 17:38:57.500910 containerd[1525]: 2025-12-12 17:38:57.442 [INFO][4311] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f" HandleID="k8s-pod-network.bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f" Workload="localhost-k8s-coredns--66bc5c9577--nj2j6-eth0" Dec 12 17:38:57.501158 containerd[1525]: 2025-12-12 17:38:57.442 [INFO][4311] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f" HandleID="k8s-pod-network.bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f" Workload="localhost-k8s-coredns--66bc5c9577--nj2j6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c710), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-nj2j6", "timestamp":"2025-12-12 17:38:57.442065887 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:38:57.501158 containerd[1525]: 2025-12-12 17:38:57.442 [INFO][4311] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:38:57.501158 containerd[1525]: 2025-12-12 17:38:57.442 [INFO][4311] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
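The plugin.go/k8s.go records above show Calico's CNI binary handling an ADD for the coredns pod. Per the CNI spec, the runtime (containerd here) invokes the plugin with the verb and workload identity in environment variables and the network config as JSON on stdin. A minimal sketch of the plugin side of that contract, assuming only the standard CNI environment variables; Calico's real entry point is built on the github.com/containernetworking/cni library rather than raw env reads.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

// netConf holds the handful of standard fields every CNI network config
// carries; real configs (including Calico's) have many more.
type netConf struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Type       string `json:"type"`
}

func main() {
	// Per the CNI spec, the runtime passes the operation and workload
	// identity in the environment...
	cmd := os.Getenv("CNI_COMMAND") // "ADD", "DEL", "CHECK"
	containerID := os.Getenv("CNI_CONTAINERID")
	netns := os.Getenv("CNI_NETNS")
	ifName := os.Getenv("CNI_IFNAME") // "eth0" in the records above

	// ...and the network configuration as JSON on stdin.
	raw, err := io.ReadAll(os.Stdin)
	if err != nil {
		os.Exit(1)
	}
	var conf netConf
	if err := json.Unmarshal(raw, &conf); err != nil {
		os.Exit(1)
	}
	fmt.Fprintf(os.Stderr, "%s %s in %s as %s (net %q)\n",
		cmd, containerID, netns, ifName, conf.Name)
	// A real plugin would now do what the log shows Calico doing: call
	// IPAM, create the veth pair, and print a CNI result JSON on stdout.
}
```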
Dec 12 17:38:57.501158 containerd[1525]: 2025-12-12 17:38:57.442 [INFO][4311] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:38:57.501158 containerd[1525]: 2025-12-12 17:38:57.453 [INFO][4311] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f" host="localhost" Dec 12 17:38:57.501158 containerd[1525]: 2025-12-12 17:38:57.457 [INFO][4311] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:38:57.501158 containerd[1525]: 2025-12-12 17:38:57.462 [INFO][4311] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:38:57.501158 containerd[1525]: 2025-12-12 17:38:57.464 [INFO][4311] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:38:57.501158 containerd[1525]: 2025-12-12 17:38:57.466 [INFO][4311] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:38:57.501158 containerd[1525]: 2025-12-12 17:38:57.466 [INFO][4311] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f" host="localhost" Dec 12 17:38:57.501393 containerd[1525]: 2025-12-12 17:38:57.468 [INFO][4311] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f Dec 12 17:38:57.501393 containerd[1525]: 2025-12-12 17:38:57.471 [INFO][4311] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f" host="localhost" Dec 12 17:38:57.501393 containerd[1525]: 2025-12-12 17:38:57.478 [INFO][4311] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f" host="localhost" Dec 12 17:38:57.501393 containerd[1525]: 2025-12-12 17:38:57.478 [INFO][4311] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f" host="localhost" Dec 12 17:38:57.501393 containerd[1525]: 2025-12-12 17:38:57.478 [INFO][4311] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
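Note the two prefix lengths in play: IPAM assigns 192.168.88.132/26 (the address plus the block it came from), while the WorkloadEndpoint below pins the pod to 192.168.88.132/32 (a single host-routed address). A short demonstration of that relationship using the values from the log and only the standard library:

```go
package main

import (
	"fmt"
	"net/netip"
)

// The IPAM records speak in terms of a /26 block; the endpoint spec pins
// the pod to a single /32 inside it.
func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	pod := netip.MustParsePrefix("192.168.88.132/32")

	fmt.Println(block.Contains(pod.Addr()))                        // true: .132 is in the block
	fmt.Println(pod.IsSingleIP())                                  // true: one pod, one address
	fmt.Printf("block holds %d addresses\n", 1<<(32-block.Bits())) // 64
}
```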
Dec 12 17:38:57.501393 containerd[1525]: 2025-12-12 17:38:57.478 [INFO][4311] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f" HandleID="k8s-pod-network.bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f" Workload="localhost-k8s-coredns--66bc5c9577--nj2j6-eth0" Dec 12 17:38:57.501700 systemd-networkd[1436]: cali7e8f4840638: Gained IPv6LL Dec 12 17:38:57.503016 containerd[1525]: 2025-12-12 17:38:57.480 [INFO][4302] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f" Namespace="kube-system" Pod="coredns-66bc5c9577-nj2j6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nj2j6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--nj2j6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f5115889-44d5-4314-a9ff-4cb71017c01f", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-nj2j6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali658195653b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:38:57.503016 containerd[1525]: 2025-12-12 17:38:57.481 [INFO][4302] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f" Namespace="kube-system" Pod="coredns-66bc5c9577-nj2j6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nj2j6-eth0" Dec 12 17:38:57.503016 containerd[1525]: 2025-12-12 17:38:57.481 [INFO][4302] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali658195653b0 ContainerID="bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f" Namespace="kube-system" Pod="coredns-66bc5c9577-nj2j6" 
WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nj2j6-eth0" Dec 12 17:38:57.503016 containerd[1525]: 2025-12-12 17:38:57.484 [INFO][4302] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f" Namespace="kube-system" Pod="coredns-66bc5c9577-nj2j6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nj2j6-eth0" Dec 12 17:38:57.503016 containerd[1525]: 2025-12-12 17:38:57.488 [INFO][4302] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f" Namespace="kube-system" Pod="coredns-66bc5c9577-nj2j6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nj2j6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--nj2j6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f5115889-44d5-4314-a9ff-4cb71017c01f", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f", Pod:"coredns-66bc5c9577-nj2j6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali658195653b0", MAC:"da:73:3b:0f:62:23", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:38:57.503016 containerd[1525]: 2025-12-12 17:38:57.498 [INFO][4302] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f" Namespace="kube-system" Pod="coredns-66bc5c9577-nj2j6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--nj2j6-eth0" Dec 12 17:38:57.519494 kubelet[2690]: E1212 17:38:57.519354 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f6g67" podUID="36c49735-9da0-46fd-8634-a3bd00152ea8" Dec 12 17:38:57.519494 kubelet[2690]: E1212 17:38:57.519409 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f84df79cc-drm9h" podUID="93c57314-3f67-4215-bf04-8f4e3f7c0b74" Dec 12 17:38:57.546341 containerd[1525]: time="2025-12-12T17:38:57.545563473Z" level=info msg="connecting to shim bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f" address="unix:///run/containerd/s/6ef7e673f2e7039fbe51ae6a698d0c030e28e6e978bb8e72f74713c46e60384a" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:38:57.584457 systemd[1]: Started cri-containerd-bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f.scope - libcontainer container bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f. Dec 12 17:38:57.602249 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:38:57.633413 containerd[1525]: time="2025-12-12T17:38:57.633246407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-nj2j6,Uid:f5115889-44d5-4314-a9ff-4cb71017c01f,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f\"" Dec 12 17:38:57.642739 containerd[1525]: time="2025-12-12T17:38:57.642695173Z" level=info msg="CreateContainer within sandbox \"bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:38:57.655079 containerd[1525]: time="2025-12-12T17:38:57.654746494Z" level=info msg="Container 99f59b8135f8aef33bfb314fcaca65e3c76be9a71689beafcd53023d23d58942: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:38:57.657135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3004145210.mount: Deactivated successfully. 
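The records here and just below trace the CRI call sequence kubelet drives for the coredns pod: RunPodSandbox returns a sandbox ID, CreateContainer places the coredns container into that sandbox, and StartContainer launches it. An outline of that sequence against the CRI gRPC API, assuming a client of k8s.io/cri-api; the method and message names follow that API, but treat this as a sketch of the call shape, not a working kubelet. The socket path is containerd's default, and the image name is a placeholder since the log never states which coredns image was used.

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// Outline of the CRI sequence visible in the log:
// RunPodSandbox -> CreateContainer -> StartContainer.
func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Names and UID taken from the RunPodSandbox record above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "coredns-66bc5c9577-nj2j6",
			Uid:       "f5115889-44d5-4314-a9ff-4cb71017c01f",
			Namespace: "kube-system",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "coredns"},
			// Placeholder: the log does not name the coredns image.
			Image: &runtimeapi.ImageSpec{Image: "registry.example/coredns:placeholder"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```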
Dec 12 17:38:57.664400 containerd[1525]: time="2025-12-12T17:38:57.664288182Z" level=info msg="CreateContainer within sandbox \"bfe414c3a9a430d7608283ea7a293d9f0fee1a627c1793b622ed708695153b3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"99f59b8135f8aef33bfb314fcaca65e3c76be9a71689beafcd53023d23d58942\"" Dec 12 17:38:57.665106 containerd[1525]: time="2025-12-12T17:38:57.665071593Z" level=info msg="StartContainer for \"99f59b8135f8aef33bfb314fcaca65e3c76be9a71689beafcd53023d23d58942\"" Dec 12 17:38:57.667074 containerd[1525]: time="2025-12-12T17:38:57.667032059Z" level=info msg="connecting to shim 99f59b8135f8aef33bfb314fcaca65e3c76be9a71689beafcd53023d23d58942" address="unix:///run/containerd/s/6ef7e673f2e7039fbe51ae6a698d0c030e28e6e978bb8e72f74713c46e60384a" protocol=ttrpc version=3 Dec 12 17:38:57.688498 systemd[1]: Started cri-containerd-99f59b8135f8aef33bfb314fcaca65e3c76be9a71689beafcd53023d23d58942.scope - libcontainer container 99f59b8135f8aef33bfb314fcaca65e3c76be9a71689beafcd53023d23d58942. Dec 12 17:38:57.717593 containerd[1525]: time="2025-12-12T17:38:57.717537975Z" level=info msg="StartContainer for \"99f59b8135f8aef33bfb314fcaca65e3c76be9a71689beafcd53023d23d58942\" returns successfully" Dec 12 17:38:57.949407 systemd-networkd[1436]: cali6e8b3fdb799: Gained IPv6LL Dec 12 17:38:58.375487 containerd[1525]: time="2025-12-12T17:38:58.375433824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c89f84888-48bx8,Uid:228dc218-a1ca-4ef4-8bc7-2ef72ef3f04f,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:38:58.528463 kubelet[2690]: E1212 17:38:58.528413 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f6g67" podUID="36c49735-9da0-46fd-8634-a3bd00152ea8" Dec 12 17:38:58.596687 systemd-networkd[1436]: cali8ea5e5f901c: Link UP Dec 12 17:38:58.597526 systemd-networkd[1436]: cali8ea5e5f901c: Gained carrier Dec 12 17:38:58.609534 kubelet[2690]: I1212 17:38:58.609461 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nj2j6" podStartSLOduration=36.609443482 podStartE2EDuration="36.609443482s" podCreationTimestamp="2025-12-12 17:38:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:38:58.570479533 +0000 UTC m=+42.330998174" watchObservedRunningTime="2025-12-12 17:38:58.609443482 +0000 UTC m=+42.369962123" Dec 12 17:38:58.614090 containerd[1525]: 2025-12-12 17:38:58.495 [INFO][4410] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c89f84888--48bx8-eth0 calico-apiserver-c89f84888- calico-apiserver 228dc218-a1ca-4ef4-8bc7-2ef72ef3f04f 835 0 2025-12-12 17:38:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c89f84888 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost 
calico-apiserver-c89f84888-48bx8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8ea5e5f901c [] [] }} ContainerID="b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207" Namespace="calico-apiserver" Pod="calico-apiserver-c89f84888-48bx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--c89f84888--48bx8-" Dec 12 17:38:58.614090 containerd[1525]: 2025-12-12 17:38:58.495 [INFO][4410] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207" Namespace="calico-apiserver" Pod="calico-apiserver-c89f84888-48bx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--c89f84888--48bx8-eth0" Dec 12 17:38:58.614090 containerd[1525]: 2025-12-12 17:38:58.521 [INFO][4424] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207" HandleID="k8s-pod-network.b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207" Workload="localhost-k8s-calico--apiserver--c89f84888--48bx8-eth0" Dec 12 17:38:58.614090 containerd[1525]: 2025-12-12 17:38:58.522 [INFO][4424] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207" HandleID="k8s-pod-network.b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207" Workload="localhost-k8s-calico--apiserver--c89f84888--48bx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000511d00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c89f84888-48bx8", "timestamp":"2025-12-12 17:38:58.521891818 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:38:58.614090 containerd[1525]: 2025-12-12 17:38:58.522 [INFO][4424] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:38:58.614090 containerd[1525]: 2025-12-12 17:38:58.522 [INFO][4424] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:38:58.614090 containerd[1525]: 2025-12-12 17:38:58.522 [INFO][4424] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:38:58.614090 containerd[1525]: 2025-12-12 17:38:58.533 [INFO][4424] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207" host="localhost" Dec 12 17:38:58.614090 containerd[1525]: 2025-12-12 17:38:58.553 [INFO][4424] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:38:58.614090 containerd[1525]: 2025-12-12 17:38:58.559 [INFO][4424] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:38:58.614090 containerd[1525]: 2025-12-12 17:38:58.564 [INFO][4424] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:38:58.614090 containerd[1525]: 2025-12-12 17:38:58.571 [INFO][4424] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:38:58.614090 containerd[1525]: 2025-12-12 17:38:58.571 [INFO][4424] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207" host="localhost" Dec 12 17:38:58.614090 containerd[1525]: 2025-12-12 17:38:58.574 [INFO][4424] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207 Dec 12 17:38:58.614090 containerd[1525]: 2025-12-12 17:38:58.580 [INFO][4424] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207" host="localhost" Dec 12 17:38:58.614090 containerd[1525]: 2025-12-12 17:38:58.590 [INFO][4424] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207" host="localhost" Dec 12 17:38:58.614090 containerd[1525]: 2025-12-12 17:38:58.590 [INFO][4424] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207" host="localhost" Dec 12 17:38:58.614090 containerd[1525]: 2025-12-12 17:38:58.590 [INFO][4424] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
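The pod_startup_latency_tracker record a little earlier reports podStartSLOduration=36.609443482s for coredns-66bc5c9577-nj2j6. With both pull timestamps at the zero value (no image pull contributed), that figure is just the gap between the pod's creation timestamp and the moment kubelet observed it running, which can be reproduced from the values in the record:

```go
package main

import (
	"fmt"
	"time"
)

// Recompute podStartSLOduration from the two timestamps in the record.
// Go's time.Parse accepts a fractional-seconds field on input even when
// the layout omits it.
func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, _ := time.Parse(layout, "2025-12-12 17:38:22 +0000 UTC")
	running, _ := time.Parse(layout, "2025-12-12 17:38:58.609443482 +0000 UTC")
	fmt.Println(running.Sub(created).Seconds()) // 36.609443482
}
```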
Dec 12 17:38:58.614090 containerd[1525]: 2025-12-12 17:38:58.591 [INFO][4424] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207" HandleID="k8s-pod-network.b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207" Workload="localhost-k8s-calico--apiserver--c89f84888--48bx8-eth0" Dec 12 17:38:58.614959 containerd[1525]: 2025-12-12 17:38:58.594 [INFO][4410] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207" Namespace="calico-apiserver" Pod="calico-apiserver-c89f84888-48bx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--c89f84888--48bx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c89f84888--48bx8-eth0", GenerateName:"calico-apiserver-c89f84888-", Namespace:"calico-apiserver", SelfLink:"", UID:"228dc218-a1ca-4ef4-8bc7-2ef72ef3f04f", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c89f84888", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c89f84888-48bx8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ea5e5f901c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:38:58.614959 containerd[1525]: 2025-12-12 17:38:58.594 [INFO][4410] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207" Namespace="calico-apiserver" Pod="calico-apiserver-c89f84888-48bx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--c89f84888--48bx8-eth0" Dec 12 17:38:58.614959 containerd[1525]: 2025-12-12 17:38:58.594 [INFO][4410] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ea5e5f901c ContainerID="b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207" Namespace="calico-apiserver" Pod="calico-apiserver-c89f84888-48bx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--c89f84888--48bx8-eth0" Dec 12 17:38:58.614959 containerd[1525]: 2025-12-12 17:38:58.597 [INFO][4410] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207" Namespace="calico-apiserver" Pod="calico-apiserver-c89f84888-48bx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--c89f84888--48bx8-eth0" Dec 12 17:38:58.614959 containerd[1525]: 2025-12-12 17:38:58.598 [INFO][4410] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207" Namespace="calico-apiserver" Pod="calico-apiserver-c89f84888-48bx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--c89f84888--48bx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c89f84888--48bx8-eth0", GenerateName:"calico-apiserver-c89f84888-", Namespace:"calico-apiserver", SelfLink:"", UID:"228dc218-a1ca-4ef4-8bc7-2ef72ef3f04f", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c89f84888", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207", Pod:"calico-apiserver-c89f84888-48bx8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8ea5e5f901c", MAC:"5e:ab:44:06:08:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:38:58.614959 containerd[1525]: 2025-12-12 17:38:58.611 [INFO][4410] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207" Namespace="calico-apiserver" Pod="calico-apiserver-c89f84888-48bx8" WorkloadEndpoint="localhost-k8s-calico--apiserver--c89f84888--48bx8-eth0" Dec 12 17:38:58.647577 containerd[1525]: time="2025-12-12T17:38:58.646812291Z" level=info msg="connecting to shim b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207" address="unix:///run/containerd/s/f9fcc7d1132159876e3935b443ea42f20030e39554c4ceea2d1bef86936a2f11" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:38:58.672528 systemd[1]: Started cri-containerd-b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207.scope - libcontainer container b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207. 
Dec 12 17:38:58.684192 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:38:58.723147 containerd[1525]: time="2025-12-12T17:38:58.723106528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c89f84888-48bx8,Uid:228dc218-a1ca-4ef4-8bc7-2ef72ef3f04f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b7a0ac7b1ad6350ebb5825e076438a27e3dc0133cd6b5e51af7bee9d5cfb0207\"" Dec 12 17:38:58.726147 containerd[1525]: time="2025-12-12T17:38:58.725647881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:38:58.939456 containerd[1525]: time="2025-12-12T17:38:58.939326794Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:38:58.940554 containerd[1525]: time="2025-12-12T17:38:58.940513890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:38:58.940554 containerd[1525]: time="2025-12-12T17:38:58.940579130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:38:58.940876 kubelet[2690]: E1212 17:38:58.940835 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:38:58.940951 kubelet[2690]: E1212 17:38:58.940885 2690 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:38:58.940978 kubelet[2690]: E1212 17:38:58.940956 2690 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c89f84888-48bx8_calico-apiserver(228dc218-a1ca-4ef4-8bc7-2ef72ef3f04f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:38:58.941027 kubelet[2690]: E1212 17:38:58.940988 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c89f84888-48bx8" podUID="228dc218-a1ca-4ef4-8bc7-2ef72ef3f04f" Dec 12 17:38:59.356615 containerd[1525]: time="2025-12-12T17:38:59.356500902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c89f84888-gmbxz,Uid:3b93b28f-eb51-435f-8d8f-9893e27d0902,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:38:59.357398 
systemd-networkd[1436]: cali658195653b0: Gained IPv6LL Dec 12 17:38:59.359955 containerd[1525]: time="2025-12-12T17:38:59.359792424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zq448,Uid:865fadd1-ae3c-4183-8b3d-bbdec49a000f,Namespace:kube-system,Attempt:0,}" Dec 12 17:38:59.518106 systemd-networkd[1436]: cali7d2e9b772d0: Link UP Dec 12 17:38:59.518254 systemd-networkd[1436]: cali7d2e9b772d0: Gained carrier Dec 12 17:38:59.538749 containerd[1525]: 2025-12-12 17:38:59.427 [INFO][4496] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--zq448-eth0 coredns-66bc5c9577- kube-system 865fadd1-ae3c-4183-8b3d-bbdec49a000f 824 0 2025-12-12 17:38:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-zq448 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7d2e9b772d0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f" Namespace="kube-system" Pod="coredns-66bc5c9577-zq448" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zq448-" Dec 12 17:38:59.538749 containerd[1525]: 2025-12-12 17:38:59.427 [INFO][4496] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f" Namespace="kube-system" Pod="coredns-66bc5c9577-zq448" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zq448-eth0" Dec 12 17:38:59.538749 containerd[1525]: 2025-12-12 17:38:59.458 [INFO][4519] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f" HandleID="k8s-pod-network.3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f" Workload="localhost-k8s-coredns--66bc5c9577--zq448-eth0" Dec 12 17:38:59.538749 containerd[1525]: 2025-12-12 17:38:59.458 [INFO][4519] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f" HandleID="k8s-pod-network.3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f" Workload="localhost-k8s-coredns--66bc5c9577--zq448-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000132890), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-zq448", "timestamp":"2025-12-12 17:38:59.458335963 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:38:59.538749 containerd[1525]: 2025-12-12 17:38:59.458 [INFO][4519] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:38:59.538749 containerd[1525]: 2025-12-12 17:38:59.458 [INFO][4519] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
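The systemd-networkd "Gained carrier" / "Gained IPv6LL" pairs scattered through this section mark each new cali* veth coming up: carrier first, then the kernel finishing duplicate-address detection on the interface's fe80::/64 link-local address. Pod traffic doesn't route over those addresses, but their appearance is a convenient signal that the host side of the veth is live. A small standard-library check that lists them:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// Print the IPv6 link-local address of each cali* interface, i.e. the
// addresses whose arrival systemd-networkd logs as "Gained IPv6LL".
func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, "cali") {
			continue
		}
		addrs, err := ifc.Addrs()
		if err != nil {
			continue
		}
		for _, a := range addrs {
			ipnet, ok := a.(*net.IPNet)
			if !ok {
				continue
			}
			if ip := ipnet.IP; ip.To4() == nil && ip.IsLinkLocalUnicast() {
				fmt.Printf("%s has IPv6LL %s\n", ifc.Name, ip) // fe80::...
			}
		}
	}
}
```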
Dec 12 17:38:59.538749 containerd[1525]: 2025-12-12 17:38:59.458 [INFO][4519] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:38:59.538749 containerd[1525]: 2025-12-12 17:38:59.468 [INFO][4519] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f" host="localhost" Dec 12 17:38:59.538749 containerd[1525]: 2025-12-12 17:38:59.473 [INFO][4519] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:38:59.538749 containerd[1525]: 2025-12-12 17:38:59.479 [INFO][4519] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:38:59.538749 containerd[1525]: 2025-12-12 17:38:59.481 [INFO][4519] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:38:59.538749 containerd[1525]: 2025-12-12 17:38:59.484 [INFO][4519] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:38:59.538749 containerd[1525]: 2025-12-12 17:38:59.484 [INFO][4519] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f" host="localhost" Dec 12 17:38:59.538749 containerd[1525]: 2025-12-12 17:38:59.485 [INFO][4519] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f Dec 12 17:38:59.538749 containerd[1525]: 2025-12-12 17:38:59.489 [INFO][4519] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f" host="localhost" Dec 12 17:38:59.538749 containerd[1525]: 2025-12-12 17:38:59.499 [INFO][4519] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f" host="localhost" Dec 12 17:38:59.538749 containerd[1525]: 2025-12-12 17:38:59.499 [INFO][4519] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f" host="localhost" Dec 12 17:38:59.538749 containerd[1525]: 2025-12-12 17:38:59.499 [INFO][4519] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:38:59.538749 containerd[1525]: 2025-12-12 17:38:59.499 [INFO][4519] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f" HandleID="k8s-pod-network.3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f" Workload="localhost-k8s-coredns--66bc5c9577--zq448-eth0" Dec 12 17:38:59.540718 containerd[1525]: 2025-12-12 17:38:59.507 [INFO][4496] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f" Namespace="kube-system" Pod="coredns-66bc5c9577-zq448" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zq448-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--zq448-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"865fadd1-ae3c-4183-8b3d-bbdec49a000f", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-zq448", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d2e9b772d0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:38:59.540718 containerd[1525]: 2025-12-12 17:38:59.507 [INFO][4496] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f" Namespace="kube-system" Pod="coredns-66bc5c9577-zq448" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zq448-eth0" Dec 12 17:38:59.540718 containerd[1525]: 2025-12-12 17:38:59.508 [INFO][4496] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d2e9b772d0 ContainerID="3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f" Namespace="kube-system" Pod="coredns-66bc5c9577-zq448" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zq448-eth0" Dec 12 17:38:59.540718 containerd[1525]: 2025-12-12 
17:38:59.515 [INFO][4496] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f" Namespace="kube-system" Pod="coredns-66bc5c9577-zq448" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zq448-eth0" Dec 12 17:38:59.540718 containerd[1525]: 2025-12-12 17:38:59.516 [INFO][4496] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f" Namespace="kube-system" Pod="coredns-66bc5c9577-zq448" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zq448-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--zq448-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"865fadd1-ae3c-4183-8b3d-bbdec49a000f", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f", Pod:"coredns-66bc5c9577-zq448", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7d2e9b772d0", MAC:"be:86:23:81:2d:1d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:38:59.540718 containerd[1525]: 2025-12-12 17:38:59.529 [INFO][4496] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f" Namespace="kube-system" Pod="coredns-66bc5c9577-zq448" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--zq448-eth0" Dec 12 17:38:59.543748 kubelet[2690]: E1212 17:38:59.543520 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c89f84888-48bx8" podUID="228dc218-a1ca-4ef4-8bc7-2ef72ef3f04f" Dec 12 17:38:59.574791 containerd[1525]: time="2025-12-12T17:38:59.574745010Z" level=info msg="connecting to shim 3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f" address="unix:///run/containerd/s/266b4ff02c1416ab14fb024592756ea802fcf703070f6d577ff79c5e43b87a86" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:38:59.604455 systemd[1]: Started cri-containerd-3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f.scope - libcontainer container 3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f. Dec 12 17:38:59.616291 systemd-networkd[1436]: calic98e75ef12d: Link UP Dec 12 17:38:59.616724 systemd-networkd[1436]: calic98e75ef12d: Gained carrier Dec 12 17:38:59.623773 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:38:59.636612 containerd[1525]: 2025-12-12 17:38:59.427 [INFO][4489] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c89f84888--gmbxz-eth0 calico-apiserver-c89f84888- calico-apiserver 3b93b28f-eb51-435f-8d8f-9893e27d0902 828 0 2025-12-12 17:38:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c89f84888 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c89f84888-gmbxz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic98e75ef12d [] [] }} ContainerID="80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" Namespace="calico-apiserver" Pod="calico-apiserver-c89f84888-gmbxz" WorkloadEndpoint="localhost-k8s-calico--apiserver--c89f84888--gmbxz-" Dec 12 17:38:59.636612 containerd[1525]: 2025-12-12 17:38:59.427 [INFO][4489] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" Namespace="calico-apiserver" Pod="calico-apiserver-c89f84888-gmbxz" WorkloadEndpoint="localhost-k8s-calico--apiserver--c89f84888--gmbxz-eth0" Dec 12 17:38:59.636612 containerd[1525]: 2025-12-12 17:38:59.460 [INFO][4520] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" HandleID="k8s-pod-network.80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" Workload="localhost-k8s-calico--apiserver--c89f84888--gmbxz-eth0" Dec 12 17:38:59.636612 containerd[1525]: 2025-12-12 17:38:59.460 [INFO][4520] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" HandleID="k8s-pod-network.80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" Workload="localhost-k8s-calico--apiserver--c89f84888--gmbxz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd5c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c89f84888-gmbxz", "timestamp":"2025-12-12 17:38:59.460122505 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:38:59.636612 containerd[1525]: 2025-12-12 17:38:59.460 [INFO][4520] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:38:59.636612 containerd[1525]: 2025-12-12 17:38:59.500 [INFO][4520] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:38:59.636612 containerd[1525]: 2025-12-12 17:38:59.500 [INFO][4520] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:38:59.636612 containerd[1525]: 2025-12-12 17:38:59.571 [INFO][4520] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" host="localhost" Dec 12 17:38:59.636612 containerd[1525]: 2025-12-12 17:38:59.576 [INFO][4520] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:38:59.636612 containerd[1525]: 2025-12-12 17:38:59.584 [INFO][4520] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:38:59.636612 containerd[1525]: 2025-12-12 17:38:59.589 [INFO][4520] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:38:59.636612 containerd[1525]: 2025-12-12 17:38:59.594 [INFO][4520] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:38:59.636612 containerd[1525]: 2025-12-12 17:38:59.594 [INFO][4520] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" host="localhost" Dec 12 17:38:59.636612 containerd[1525]: 2025-12-12 17:38:59.596 [INFO][4520] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742 Dec 12 17:38:59.636612 containerd[1525]: 2025-12-12 17:38:59.601 [INFO][4520] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" host="localhost" Dec 12 17:38:59.636612 containerd[1525]: 2025-12-12 17:38:59.607 [INFO][4520] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" host="localhost" Dec 12 17:38:59.636612 containerd[1525]: 2025-12-12 17:38:59.607 [INFO][4520] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" host="localhost" Dec 12 17:38:59.636612 containerd[1525]: 2025-12-12 17:38:59.607 [INFO][4520] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
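These Calico records carry machine-readable key=value fields after the human-readable message (handle=, host=, block=, cidr=), which makes auditing an allocation trail like the one above for 192.168.88.135 scriptable rather than an eyeballing exercise. A small extractor, assuming the convention these lines follow: values are either double-quoted (host="localhost") or bare tokens (block=192.168.88.128/26).

```go
package main

import (
	"fmt"
	"regexp"
)

// fieldRe matches the trailing key=value fields in the Calico records:
// the value is either a double-quoted string or a bare token.
var fieldRe = regexp.MustCompile(`(\w+)=("([^"]*)"|\S+)`)

func fields(line string) map[string]string {
	out := map[string]string{}
	for _, m := range fieldRe.FindAllStringSubmatch(line, -1) {
		if m[3] != "" {
			out[m[1]] = m[3] // quoted form: keep the inner text
		} else {
			out[m[1]] = m[2] // bare token
		}
	}
	return out
}

func main() {
	line := `ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] ` +
		`block=192.168.88.128/26 handle="k8s-pod-network.80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" host="localhost"`
	f := fields(line)
	fmt.Println(f["block"], f["host"])
	fmt.Println(f["handle"])
}
```

Fed the records above, this recovers that every allocation in the section (.131 through .135) came from the same 192.168.88.128/26 block on host "localhost", each under a handle named for its sandbox ID.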
Dec 12 17:38:59.636612 containerd[1525]: 2025-12-12 17:38:59.608 [INFO][4520] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" HandleID="k8s-pod-network.80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" Workload="localhost-k8s-calico--apiserver--c89f84888--gmbxz-eth0" Dec 12 17:38:59.637931 containerd[1525]: 2025-12-12 17:38:59.611 [INFO][4489] cni-plugin/k8s.go 418: Populated endpoint ContainerID="80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" Namespace="calico-apiserver" Pod="calico-apiserver-c89f84888-gmbxz" WorkloadEndpoint="localhost-k8s-calico--apiserver--c89f84888--gmbxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c89f84888--gmbxz-eth0", GenerateName:"calico-apiserver-c89f84888-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b93b28f-eb51-435f-8d8f-9893e27d0902", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c89f84888", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c89f84888-gmbxz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic98e75ef12d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:38:59.637931 containerd[1525]: 2025-12-12 17:38:59.612 [INFO][4489] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" Namespace="calico-apiserver" Pod="calico-apiserver-c89f84888-gmbxz" WorkloadEndpoint="localhost-k8s-calico--apiserver--c89f84888--gmbxz-eth0" Dec 12 17:38:59.637931 containerd[1525]: 2025-12-12 17:38:59.612 [INFO][4489] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic98e75ef12d ContainerID="80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" Namespace="calico-apiserver" Pod="calico-apiserver-c89f84888-gmbxz" WorkloadEndpoint="localhost-k8s-calico--apiserver--c89f84888--gmbxz-eth0" Dec 12 17:38:59.637931 containerd[1525]: 2025-12-12 17:38:59.616 [INFO][4489] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" Namespace="calico-apiserver" Pod="calico-apiserver-c89f84888-gmbxz" WorkloadEndpoint="localhost-k8s-calico--apiserver--c89f84888--gmbxz-eth0" Dec 12 17:38:59.637931 containerd[1525]: 2025-12-12 17:38:59.617 [INFO][4489] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" Namespace="calico-apiserver" Pod="calico-apiserver-c89f84888-gmbxz" WorkloadEndpoint="localhost-k8s-calico--apiserver--c89f84888--gmbxz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c89f84888--gmbxz-eth0", GenerateName:"calico-apiserver-c89f84888-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b93b28f-eb51-435f-8d8f-9893e27d0902", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c89f84888", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742", Pod:"calico-apiserver-c89f84888-gmbxz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic98e75ef12d", MAC:"6e:71:a1:16:92:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:38:59.637931 containerd[1525]: 2025-12-12 17:38:59.632 [INFO][4489] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" Namespace="calico-apiserver" Pod="calico-apiserver-c89f84888-gmbxz" WorkloadEndpoint="localhost-k8s-calico--apiserver--c89f84888--gmbxz-eth0" Dec 12 17:38:59.652329 containerd[1525]: time="2025-12-12T17:38:59.652246160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zq448,Uid:865fadd1-ae3c-4183-8b3d-bbdec49a000f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f\"" Dec 12 17:38:59.659634 containerd[1525]: time="2025-12-12T17:38:59.659598574Z" level=info msg="CreateContainer within sandbox \"3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:38:59.665865 containerd[1525]: time="2025-12-12T17:38:59.665815453Z" level=info msg="connecting to shim 80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742" address="unix:///run/containerd/s/13d3d1b0e530803459ca9df8998b61e0edb04aff079a063fb2a1341061fd0577" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:38:59.669670 containerd[1525]: time="2025-12-12T17:38:59.669624622Z" level=info msg="Container b078f71b4377a55d1a21a80151390ddebfbc271bcd8beda744bc1d4ce0b11533: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:38:59.678586 systemd-networkd[1436]: cali8ea5e5f901c: Gained IPv6LL Dec 12 17:38:59.680803 containerd[1525]: time="2025-12-12T17:38:59.680762444Z" level=info msg="CreateContainer within sandbox \"3a76b9f1be6bc2b0fcb3bf4722bf9b01e288948c0e0d8241230accce1fa6da8f\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b078f71b4377a55d1a21a80151390ddebfbc271bcd8beda744bc1d4ce0b11533\"" Dec 12 17:38:59.681590 containerd[1525]: time="2025-12-12T17:38:59.681562294Z" level=info msg="StartContainer for \"b078f71b4377a55d1a21a80151390ddebfbc271bcd8beda744bc1d4ce0b11533\"" Dec 12 17:38:59.682951 containerd[1525]: time="2025-12-12T17:38:59.682887791Z" level=info msg="connecting to shim b078f71b4377a55d1a21a80151390ddebfbc271bcd8beda744bc1d4ce0b11533" address="unix:///run/containerd/s/266b4ff02c1416ab14fb024592756ea802fcf703070f6d577ff79c5e43b87a86" protocol=ttrpc version=3 Dec 12 17:38:59.694464 systemd[1]: Started cri-containerd-80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742.scope - libcontainer container 80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742. Dec 12 17:38:59.699866 systemd[1]: Started cri-containerd-b078f71b4377a55d1a21a80151390ddebfbc271bcd8beda744bc1d4ce0b11533.scope - libcontainer container b078f71b4377a55d1a21a80151390ddebfbc271bcd8beda744bc1d4ce0b11533. Dec 12 17:38:59.706856 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:38:59.748823 containerd[1525]: time="2025-12-12T17:38:59.748776233Z" level=info msg="StartContainer for \"b078f71b4377a55d1a21a80151390ddebfbc271bcd8beda744bc1d4ce0b11533\" returns successfully" Dec 12 17:38:59.749056 containerd[1525]: time="2025-12-12T17:38:59.748999476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c89f84888-gmbxz,Uid:3b93b28f-eb51-435f-8d8f-9893e27d0902,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"80d772ae53be788371689e5b410160f899465ccacd2a322ef5ef0435dfe96742\"" Dec 12 17:38:59.751178 containerd[1525]: time="2025-12-12T17:38:59.751153983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:38:59.960282 containerd[1525]: time="2025-12-12T17:38:59.960132413Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:38:59.961457 containerd[1525]: time="2025-12-12T17:38:59.961411669Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:38:59.961513 containerd[1525]: time="2025-12-12T17:38:59.961485190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:38:59.961669 kubelet[2690]: E1212 17:38:59.961630 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:38:59.961733 kubelet[2690]: E1212 17:38:59.961682 2690 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:38:59.961782 kubelet[2690]: E1212 17:38:59.961762 2690 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c89f84888-gmbxz_calico-apiserver(3b93b28f-eb51-435f-8d8f-9893e27d0902): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:38:59.961819 kubelet[2690]: E1212 17:38:59.961798 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c89f84888-gmbxz" podUID="3b93b28f-eb51-435f-8d8f-9893e27d0902" Dec 12 17:39:00.381931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount809671040.mount: Deactivated successfully. Dec 12 17:39:00.547322 kubelet[2690]: E1212 17:39:00.546047 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c89f84888-gmbxz" podUID="3b93b28f-eb51-435f-8d8f-9893e27d0902" Dec 12 17:39:00.549238 kubelet[2690]: E1212 17:39:00.549124 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c89f84888-48bx8" podUID="228dc218-a1ca-4ef4-8bc7-2ef72ef3f04f" Dec 12 17:39:00.587246 kubelet[2690]: I1212 17:39:00.587181 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zq448" podStartSLOduration=38.587166221 podStartE2EDuration="38.587166221s" podCreationTimestamp="2025-12-12 17:38:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:39:00.586966259 +0000 UTC m=+44.347484900" watchObservedRunningTime="2025-12-12 17:39:00.587166221 +0000 UTC m=+44.347684862" Dec 12 17:39:00.957487 systemd-networkd[1436]: calic98e75ef12d: Gained IPv6LL Dec 12 17:39:00.958172 systemd-networkd[1436]: cali7d2e9b772d0: Gained IPv6LL Dec 12 17:39:01.311607 systemd[1]: Started sshd@8-10.0.0.93:22-10.0.0.1:60584.service - OpenSSH per-connection server daemon (10.0.0.1:60584). 
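The kubelet lines show the standard escalation: the first failed pull surfaces as ErrImagePull, and every subsequent pod sync reports ImagePullBackOff while the kubelet waits out an exponentially growing delay before retrying. A rough sketch of that backoff shape; the base and cap values (10s doubling to a 5-minute ceiling) are the commonly cited kubelet defaults and are an assumption here, not read from this system's configuration:

```go
package main

import (
	"fmt"
	"time"
)

// nextBackoff models kubelet-style image-pull backoff: each consecutive
// failure doubles the wait, capped at max. The 10s/5m values below are
// assumed defaults for illustration.
func nextBackoff(failures int, base, max time.Duration) time.Duration {
	d := base << failures // base * 2^failures
	if d > max || d <= 0 {
		return max
	}
	return d
}

func main() {
	for f := 0; f < 7; f++ {
		fmt.Printf("failure %d -> wait %v before next pull\n",
			f+1, nextBackoff(f, 10*time.Second, 5*time.Minute))
	}
	// failure 1 -> 10s, 2 -> 20s, 3 -> 40s, ... then pinned at 5m0s,
	// which is why the same ImagePullBackOff line keeps recurring.
}
```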
Dec 12 17:39:01.356188 containerd[1525]: time="2025-12-12T17:39:01.355845416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-clxcs,Uid:41d383b7-79ce-4986-93ed-9df24d00cb6a,Namespace:calico-system,Attempt:0,}" Dec 12 17:39:01.392233 sshd[4692]: Accepted publickey for core from 10.0.0.1 port 60584 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:39:01.394192 sshd-session[4692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:39:01.403661 systemd-logind[1509]: New session 9 of user core. Dec 12 17:39:01.408769 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 12 17:39:01.491246 systemd-networkd[1436]: caliddd61b5727a: Link UP Dec 12 17:39:01.491889 systemd-networkd[1436]: caliddd61b5727a: Gained carrier Dec 12 17:39:01.510432 containerd[1525]: 2025-12-12 17:39:01.408 [INFO][4696] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--clxcs-eth0 csi-node-driver- calico-system 41d383b7-79ce-4986-93ed-9df24d00cb6a 744 0 2025-12-12 17:38:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-clxcs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliddd61b5727a [] [] }} ContainerID="9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984" Namespace="calico-system" Pod="csi-node-driver-clxcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--clxcs-" Dec 12 17:39:01.510432 containerd[1525]: 2025-12-12 17:39:01.408 [INFO][4696] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984" Namespace="calico-system" Pod="csi-node-driver-clxcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--clxcs-eth0" Dec 12 17:39:01.510432 containerd[1525]: 2025-12-12 17:39:01.433 [INFO][4713] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984" HandleID="k8s-pod-network.9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984" Workload="localhost-k8s-csi--node--driver--clxcs-eth0" Dec 12 17:39:01.510432 containerd[1525]: 2025-12-12 17:39:01.434 [INFO][4713] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984" HandleID="k8s-pod-network.9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984" Workload="localhost-k8s-csi--node--driver--clxcs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d670), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-clxcs", "timestamp":"2025-12-12 17:39:01.433898011 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:39:01.510432 containerd[1525]: 2025-12-12 17:39:01.434 [INFO][4713] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Dec 12 17:39:01.510432 containerd[1525]: 2025-12-12 17:39:01.434 [INFO][4713] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:39:01.510432 containerd[1525]: 2025-12-12 17:39:01.434 [INFO][4713] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 12 17:39:01.510432 containerd[1525]: 2025-12-12 17:39:01.444 [INFO][4713] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984" host="localhost" Dec 12 17:39:01.510432 containerd[1525]: 2025-12-12 17:39:01.450 [INFO][4713] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Dec 12 17:39:01.510432 containerd[1525]: 2025-12-12 17:39:01.456 [INFO][4713] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Dec 12 17:39:01.510432 containerd[1525]: 2025-12-12 17:39:01.458 [INFO][4713] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 12 17:39:01.510432 containerd[1525]: 2025-12-12 17:39:01.464 [INFO][4713] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 12 17:39:01.510432 containerd[1525]: 2025-12-12 17:39:01.464 [INFO][4713] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984" host="localhost" Dec 12 17:39:01.510432 containerd[1525]: 2025-12-12 17:39:01.468 [INFO][4713] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984 Dec 12 17:39:01.510432 containerd[1525]: 2025-12-12 17:39:01.474 [INFO][4713] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984" host="localhost" Dec 12 17:39:01.510432 containerd[1525]: 2025-12-12 17:39:01.485 [INFO][4713] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984" host="localhost" Dec 12 17:39:01.510432 containerd[1525]: 2025-12-12 17:39:01.485 [INFO][4713] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984" host="localhost" Dec 12 17:39:01.510432 containerd[1525]: 2025-12-12 17:39:01.486 [INFO][4713] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
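The same IPAM flow now repeats for csi-node-driver-clxcs and yields the next free ordinal in the shared block, .136. The host-wide lock bracketing each run is what keeps concurrent CNI ADDs on one node from double-allocating; a toy demonstration of that serialization (which pod wins the race, and hence which gets the lower address, depends on scheduling):

```go
package main

import (
	"fmt"
	"sync"
)

// Two CNI ADDs racing on one node: the host-wide lock serializes them,
// so each claims a distinct ordinal from the shared 192.168.88.128/26
// block — consecutive addresses, as with .135/.136 in the log.
func main() {
	var (
		mu      sync.Mutex
		next    = 7 // ordinals 0-6 already claimed on this host
		claimed = map[string]int{}
		wg      sync.WaitGroup
	)
	for _, pod := range []string{"calico-apiserver-c89f84888-gmbxz", "csi-node-driver-clxcs"} {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			mu.Lock() // "About to acquire host-wide IPAM lock."
			claimed[pod] = next
			next++
			mu.Unlock() // "Released host-wide IPAM lock."
		}(pod)
	}
	wg.Wait()
	for pod, ord := range claimed {
		fmt.Printf("%s -> 192.168.88.%d\n", pod, 128+ord)
	}
}
```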
Dec 12 17:39:01.510432 containerd[1525]: 2025-12-12 17:39:01.486 [INFO][4713] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984" HandleID="k8s-pod-network.9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984" Workload="localhost-k8s-csi--node--driver--clxcs-eth0" Dec 12 17:39:01.510954 containerd[1525]: 2025-12-12 17:39:01.488 [INFO][4696] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984" Namespace="calico-system" Pod="csi-node-driver-clxcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--clxcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--clxcs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"41d383b7-79ce-4986-93ed-9df24d00cb6a", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-clxcs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliddd61b5727a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:39:01.510954 containerd[1525]: 2025-12-12 17:39:01.488 [INFO][4696] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984" Namespace="calico-system" Pod="csi-node-driver-clxcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--clxcs-eth0" Dec 12 17:39:01.510954 containerd[1525]: 2025-12-12 17:39:01.488 [INFO][4696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliddd61b5727a ContainerID="9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984" Namespace="calico-system" Pod="csi-node-driver-clxcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--clxcs-eth0" Dec 12 17:39:01.510954 containerd[1525]: 2025-12-12 17:39:01.492 [INFO][4696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984" Namespace="calico-system" Pod="csi-node-driver-clxcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--clxcs-eth0" Dec 12 17:39:01.510954 containerd[1525]: 2025-12-12 17:39:01.494 [INFO][4696] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984" Namespace="calico-system" Pod="csi-node-driver-clxcs" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--clxcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--clxcs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"41d383b7-79ce-4986-93ed-9df24d00cb6a", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 38, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984", Pod:"csi-node-driver-clxcs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliddd61b5727a", MAC:"ee:2d:be:0a:1d:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:39:01.510954 containerd[1525]: 2025-12-12 17:39:01.506 [INFO][4696] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984" Namespace="calico-system" Pod="csi-node-driver-clxcs" WorkloadEndpoint="localhost-k8s-csi--node--driver--clxcs-eth0" Dec 12 17:39:01.539845 containerd[1525]: time="2025-12-12T17:39:01.539469383Z" level=info msg="connecting to shim 9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984" address="unix:///run/containerd/s/a5bdb287664808ec799e8d9261ed97647a904ba62dc2adae1922c27583bdc9bd" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:39:01.553937 kubelet[2690]: E1212 17:39:01.553899 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c89f84888-gmbxz" podUID="3b93b28f-eb51-435f-8d8f-9893e27d0902" Dec 12 17:39:01.567427 systemd[1]: Started cri-containerd-9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984.scope - libcontainer container 9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984. 
Dec 12 17:39:01.579672 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 12 17:39:01.598640 containerd[1525]: time="2025-12-12T17:39:01.598483945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-clxcs,Uid:41d383b7-79ce-4986-93ed-9df24d00cb6a,Namespace:calico-system,Attempt:0,} returns sandbox id \"9bdf74057f16bbfb643d46045fa27abc4139635afe0ed50589da817683ebe984\"" Dec 12 17:39:01.603836 containerd[1525]: time="2025-12-12T17:39:01.603794210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:39:01.677045 sshd[4711]: Connection closed by 10.0.0.1 port 60584 Dec 12 17:39:01.677483 sshd-session[4692]: pam_unix(sshd:session): session closed for user core Dec 12 17:39:01.681945 systemd[1]: sshd@8-10.0.0.93:22-10.0.0.1:60584.service: Deactivated successfully. Dec 12 17:39:01.683665 systemd[1]: session-9.scope: Deactivated successfully. Dec 12 17:39:01.684857 systemd-logind[1509]: Session 9 logged out. Waiting for processes to exit. Dec 12 17:39:01.685893 systemd-logind[1509]: Removed session 9. Dec 12 17:39:01.825396 containerd[1525]: time="2025-12-12T17:39:01.825231080Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:01.826469 containerd[1525]: time="2025-12-12T17:39:01.826410814Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:39:01.826555 containerd[1525]: time="2025-12-12T17:39:01.826434455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Dec 12 17:39:01.826802 kubelet[2690]: E1212 17:39:01.826709 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:39:01.826802 kubelet[2690]: E1212 17:39:01.826780 2690 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:39:01.827038 kubelet[2690]: E1212 17:39:01.826994 2690 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-clxcs_calico-system(41d383b7-79ce-4986-93ed-9df24d00cb6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:01.829625 containerd[1525]: time="2025-12-12T17:39:01.829593973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:39:02.046840 containerd[1525]: time="2025-12-12T17:39:02.046794060Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:02.048775 containerd[1525]: time="2025-12-12T17:39:02.048725203Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:39:02.048898 containerd[1525]: time="2025-12-12T17:39:02.048809804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Dec 12 17:39:02.049085 kubelet[2690]: E1212 17:39:02.049029 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:39:02.049281 kubelet[2690]: E1212 17:39:02.049168 2690 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:39:02.049366 kubelet[2690]: E1212 17:39:02.049346 2690 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-clxcs_calico-system(41d383b7-79ce-4986-93ed-9df24d00cb6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:02.049496 kubelet[2690]: E1212 17:39:02.049467 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-clxcs" podUID="41d383b7-79ce-4986-93ed-9df24d00cb6a" Dec 12 17:39:02.560488 kubelet[2690]: E1212 17:39:02.559720 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-clxcs" podUID="41d383b7-79ce-4986-93ed-9df24d00cb6a" Dec 12 17:39:02.752171 systemd-networkd[1436]: caliddd61b5727a: Gained IPv6LL Dec 12 17:39:03.145907 kubelet[2690]: I1212 17:39:03.145732 2690 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:39:03.560587 kubelet[2690]: E1212 17:39:03.560473 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-clxcs" podUID="41d383b7-79ce-4986-93ed-9df24d00cb6a" Dec 12 17:39:06.362189 containerd[1525]: time="2025-12-12T17:39:06.355069365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:39:06.562465 containerd[1525]: time="2025-12-12T17:39:06.562393759Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:06.574515 containerd[1525]: time="2025-12-12T17:39:06.574454494Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:39:06.575306 containerd[1525]: time="2025-12-12T17:39:06.574528175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Dec 12 17:39:06.575361 kubelet[2690]: E1212 17:39:06.575206 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:39:06.575361 kubelet[2690]: E1212 17:39:06.575249 2690 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:39:06.575361 kubelet[2690]: E1212 17:39:06.575338 2690 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker 
start failed in pod whisker-66f8d7bb8d-b4hvj_calico-system(82dd8051-1568-452c-81da-df375ca13b0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:06.579828 containerd[1525]: time="2025-12-12T17:39:06.579797034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:39:06.699657 systemd[1]: Started sshd@9-10.0.0.93:22-10.0.0.1:60594.service - OpenSSH per-connection server daemon (10.0.0.1:60594). Dec 12 17:39:06.758076 sshd[4845]: Accepted publickey for core from 10.0.0.1 port 60594 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:39:06.759546 sshd-session[4845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:39:06.764366 systemd-logind[1509]: New session 10 of user core. Dec 12 17:39:06.770465 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 17:39:06.803769 containerd[1525]: time="2025-12-12T17:39:06.803721653Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:06.817672 containerd[1525]: time="2025-12-12T17:39:06.817605928Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:39:06.817806 containerd[1525]: time="2025-12-12T17:39:06.817705169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Dec 12 17:39:06.817992 kubelet[2690]: E1212 17:39:06.817953 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:39:06.818163 kubelet[2690]: E1212 17:39:06.818053 2690 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:39:06.818342 kubelet[2690]: E1212 17:39:06.818253 2690 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-66f8d7bb8d-b4hvj_calico-system(82dd8051-1568-452c-81da-df375ca13b0f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:06.818538 kubelet[2690]: E1212 17:39:06.818477 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f8d7bb8d-b4hvj" podUID="82dd8051-1568-452c-81da-df375ca13b0f" Dec 12 17:39:06.963978 sshd[4848]: Connection closed by 10.0.0.1 port 60594 Dec 12 17:39:06.964492 sshd-session[4845]: pam_unix(sshd:session): session closed for user core Dec 12 17:39:06.976429 systemd[1]: sshd@9-10.0.0.93:22-10.0.0.1:60594.service: Deactivated successfully. Dec 12 17:39:06.980003 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 17:39:06.984499 systemd-logind[1509]: Session 10 logged out. Waiting for processes to exit. Dec 12 17:39:06.988652 systemd[1]: Started sshd@10-10.0.0.93:22-10.0.0.1:60600.service - OpenSSH per-connection server daemon (10.0.0.1:60600). Dec 12 17:39:06.989998 systemd-logind[1509]: Removed session 10. Dec 12 17:39:07.054014 sshd[4863]: Accepted publickey for core from 10.0.0.1 port 60600 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:39:07.055517 sshd-session[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:39:07.061373 systemd-logind[1509]: New session 11 of user core. Dec 12 17:39:07.070485 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 12 17:39:07.260598 sshd[4866]: Connection closed by 10.0.0.1 port 60600 Dec 12 17:39:07.260889 sshd-session[4863]: pam_unix(sshd:session): session closed for user core Dec 12 17:39:07.273301 systemd[1]: sshd@10-10.0.0.93:22-10.0.0.1:60600.service: Deactivated successfully. Dec 12 17:39:07.277691 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 17:39:07.279678 systemd-logind[1509]: Session 11 logged out. Waiting for processes to exit. Dec 12 17:39:07.287601 systemd[1]: Started sshd@11-10.0.0.93:22-10.0.0.1:60614.service - OpenSSH per-connection server daemon (10.0.0.1:60614). Dec 12 17:39:07.288526 systemd-logind[1509]: Removed session 11. Dec 12 17:39:07.350019 sshd[4879]: Accepted publickey for core from 10.0.0.1 port 60614 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:39:07.351583 sshd-session[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:39:07.359994 systemd-logind[1509]: New session 12 of user core. Dec 12 17:39:07.365488 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 17:39:07.546065 sshd[4882]: Connection closed by 10.0.0.1 port 60614 Dec 12 17:39:07.546108 sshd-session[4879]: pam_unix(sshd:session): session closed for user core Dec 12 17:39:07.549704 systemd[1]: sshd@11-10.0.0.93:22-10.0.0.1:60614.service: Deactivated successfully. Dec 12 17:39:07.551506 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 17:39:07.553251 systemd-logind[1509]: Session 12 logged out. Waiting for processes to exit. Dec 12 17:39:07.554991 systemd-logind[1509]: Removed session 12. 
Dec 12 17:39:11.354758 containerd[1525]: time="2025-12-12T17:39:11.354711000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:39:11.566583 containerd[1525]: time="2025-12-12T17:39:11.566519079Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:11.567941 containerd[1525]: time="2025-12-12T17:39:11.567811133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:39:11.567941 containerd[1525]: time="2025-12-12T17:39:11.567867053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Dec 12 17:39:11.568157 kubelet[2690]: E1212 17:39:11.568095 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:39:11.569876 kubelet[2690]: E1212 17:39:11.568155 2690 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:39:11.569876 kubelet[2690]: E1212 17:39:11.568323 2690 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7f84df79cc-drm9h_calico-system(93c57314-3f67-4215-bf04-8f4e3f7c0b74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:11.569876 kubelet[2690]: E1212 17:39:11.568371 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f84df79cc-drm9h" podUID="93c57314-3f67-4215-bf04-8f4e3f7c0b74" Dec 12 17:39:11.570000 containerd[1525]: time="2025-12-12T17:39:11.568885104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:39:11.764863 containerd[1525]: time="2025-12-12T17:39:11.764658057Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:11.766675 containerd[1525]: time="2025-12-12T17:39:11.765953870Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:39:11.766675 containerd[1525]: time="2025-12-12T17:39:11.766088752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:39:11.767062 kubelet[2690]: E1212 17:39:11.766361 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:39:11.767062 kubelet[2690]: E1212 17:39:11.766645 2690 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:39:11.767062 kubelet[2690]: E1212 17:39:11.766896 2690 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c89f84888-48bx8_calico-apiserver(228dc218-a1ca-4ef4-8bc7-2ef72ef3f04f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:11.767062 kubelet[2690]: E1212 17:39:11.767023 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c89f84888-48bx8" podUID="228dc218-a1ca-4ef4-8bc7-2ef72ef3f04f" Dec 12 17:39:12.354007 containerd[1525]: time="2025-12-12T17:39:12.353416565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:39:12.565288 systemd[1]: Started sshd@12-10.0.0.93:22-10.0.0.1:41898.service - OpenSSH per-connection server daemon (10.0.0.1:41898). 
Dec 12 17:39:12.595397 containerd[1525]: time="2025-12-12T17:39:12.595304406Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:39:12.596418 containerd[1525]: time="2025-12-12T17:39:12.596359857Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:39:12.596480 containerd[1525]: time="2025-12-12T17:39:12.596450738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Dec 12 17:39:12.598294 kubelet[2690]: E1212 17:39:12.597329 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:39:12.598294 kubelet[2690]: E1212 17:39:12.597378 2690 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:39:12.598294 kubelet[2690]: E1212 17:39:12.597448 2690 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-c89f84888-gmbxz_calico-apiserver(3b93b28f-eb51-435f-8d8f-9893e27d0902): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:39:12.598294 kubelet[2690]: E1212 17:39:12.597481 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c89f84888-gmbxz" podUID="3b93b28f-eb51-435f-8d8f-9893e27d0902" Dec 12 17:39:12.635611 sshd[4909]: Accepted publickey for core from 10.0.0.1 port 41898 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg Dec 12 17:39:12.640176 sshd-session[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:39:12.644498 systemd-logind[1509]: New session 13 of user core. Dec 12 17:39:12.656502 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 12 17:39:12.785240 sshd[4912]: Connection closed by 10.0.0.1 port 41898 Dec 12 17:39:12.785709 sshd-session[4909]: pam_unix(sshd:session): session closed for user core Dec 12 17:39:12.798486 systemd[1]: sshd@12-10.0.0.93:22-10.0.0.1:41898.service: Deactivated successfully. Dec 12 17:39:12.802228 systemd[1]: session-13.scope: Deactivated successfully. Dec 12 17:39:12.803422 systemd-logind[1509]: Session 13 logged out. Waiting for processes to exit. 
Dec 12 17:39:12.807040 systemd[1]: Started sshd@13-10.0.0.93:22-10.0.0.1:41906.service - OpenSSH per-connection server daemon (10.0.0.1:41906).
Dec 12 17:39:12.811455 systemd-logind[1509]: Removed session 13.
Dec 12 17:39:12.876459 sshd[4926]: Accepted publickey for core from 10.0.0.1 port 41906 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:39:12.877987 sshd-session[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:39:12.882399 systemd-logind[1509]: New session 14 of user core.
Dec 12 17:39:12.899479 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 12 17:39:13.124795 sshd[4929]: Connection closed by 10.0.0.1 port 41906
Dec 12 17:39:13.125248 sshd-session[4926]: pam_unix(sshd:session): session closed for user core
Dec 12 17:39:13.139090 systemd[1]: sshd@13-10.0.0.93:22-10.0.0.1:41906.service: Deactivated successfully.
Dec 12 17:39:13.140892 systemd[1]: session-14.scope: Deactivated successfully.
Dec 12 17:39:13.141626 systemd-logind[1509]: Session 14 logged out. Waiting for processes to exit.
Dec 12 17:39:13.144953 systemd[1]: Started sshd@14-10.0.0.93:22-10.0.0.1:41908.service - OpenSSH per-connection server daemon (10.0.0.1:41908).
Dec 12 17:39:13.145882 systemd-logind[1509]: Removed session 14.
Dec 12 17:39:13.215497 sshd[4940]: Accepted publickey for core from 10.0.0.1 port 41908 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:39:13.217055 sshd-session[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:39:13.222004 systemd-logind[1509]: New session 15 of user core.
Dec 12 17:39:13.231467 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 12 17:39:13.353403 containerd[1525]: time="2025-12-12T17:39:13.353363778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Dec 12 17:39:13.603486 containerd[1525]: time="2025-12-12T17:39:13.603366633Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:39:13.620315 containerd[1525]: time="2025-12-12T17:39:13.620238244Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Dec 12 17:39:13.620431 containerd[1525]: time="2025-12-12T17:39:13.620289564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Dec 12 17:39:13.620678 kubelet[2690]: E1212 17:39:13.620563 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 17:39:13.620678 kubelet[2690]: E1212 17:39:13.620616 2690 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Dec 12 17:39:13.621925 kubelet[2690]: E1212 17:39:13.620703 2690 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-f6g67_calico-system(36c49735-9da0-46fd-8634-a3bd00152ea8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:39:13.621925 kubelet[2690]: E1212 17:39:13.620769 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f6g67" podUID="36c49735-9da0-46fd-8634-a3bd00152ea8"
Dec 12 17:39:13.921875 sshd[4943]: Connection closed by 10.0.0.1 port 41908
Dec 12 17:39:13.923193 sshd-session[4940]: pam_unix(sshd:session): session closed for user core
Dec 12 17:39:13.937906 systemd[1]: sshd@14-10.0.0.93:22-10.0.0.1:41908.service: Deactivated successfully.
Dec 12 17:39:13.942078 systemd[1]: session-15.scope: Deactivated successfully.
Dec 12 17:39:13.943017 systemd-logind[1509]: Session 15 logged out. Waiting for processes to exit.
Dec 12 17:39:13.947599 systemd[1]: Started sshd@15-10.0.0.93:22-10.0.0.1:41914.service - OpenSSH per-connection server daemon (10.0.0.1:41914).
Dec 12 17:39:13.951672 systemd-logind[1509]: Removed session 15.
Dec 12 17:39:14.005352 sshd[4971]: Accepted publickey for core from 10.0.0.1 port 41914 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:39:14.007203 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:39:14.011590 systemd-logind[1509]: New session 16 of user core.
Dec 12 17:39:14.022478 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 12 17:39:14.355962 sshd[4974]: Connection closed by 10.0.0.1 port 41914
Dec 12 17:39:14.356216 sshd-session[4971]: pam_unix(sshd:session): session closed for user core
Dec 12 17:39:14.370088 systemd[1]: sshd@15-10.0.0.93:22-10.0.0.1:41914.service: Deactivated successfully.
Dec 12 17:39:14.374170 systemd[1]: session-16.scope: Deactivated successfully.
Dec 12 17:39:14.377703 systemd-logind[1509]: Session 16 logged out. Waiting for processes to exit.
Dec 12 17:39:14.383933 systemd[1]: Started sshd@16-10.0.0.93:22-10.0.0.1:41924.service - OpenSSH per-connection server daemon (10.0.0.1:41924).
Dec 12 17:39:14.385306 systemd-logind[1509]: Removed session 16.
Dec 12 17:39:14.444003 sshd[4986]: Accepted publickey for core from 10.0.0.1 port 41924 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:39:14.445640 sshd-session[4986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:39:14.449760 systemd-logind[1509]: New session 17 of user core.
Dec 12 17:39:14.458485 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 12 17:39:14.589327 sshd[4989]: Connection closed by 10.0.0.1 port 41924
Dec 12 17:39:14.589893 sshd-session[4986]: pam_unix(sshd:session): session closed for user core
Dec 12 17:39:14.593583 systemd[1]: sshd@16-10.0.0.93:22-10.0.0.1:41924.service: Deactivated successfully.
Dec 12 17:39:14.596020 systemd[1]: session-17.scope: Deactivated successfully.
Dec 12 17:39:14.598194 systemd-logind[1509]: Session 17 logged out. Waiting for processes to exit.
Dec 12 17:39:14.599745 systemd-logind[1509]: Removed session 17.
Dec 12 17:39:15.354609 containerd[1525]: time="2025-12-12T17:39:15.354562314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Dec 12 17:39:15.567077 containerd[1525]: time="2025-12-12T17:39:15.567005341Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:39:15.577661 containerd[1525]: time="2025-12-12T17:39:15.577585006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Dec 12 17:39:15.577801 containerd[1525]: time="2025-12-12T17:39:15.577688527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Dec 12 17:39:15.577886 kubelet[2690]: E1212 17:39:15.577848 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 17:39:15.578243 kubelet[2690]: E1212 17:39:15.577896 2690 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Dec 12 17:39:15.578243 kubelet[2690]: E1212 17:39:15.577969 2690 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-clxcs_calico-system(41d383b7-79ce-4986-93ed-9df24d00cb6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:39:15.579027 containerd[1525]: time="2025-12-12T17:39:15.578974819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Dec 12 17:39:15.779890 containerd[1525]: time="2025-12-12T17:39:15.779835852Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Dec 12 17:39:15.781212 containerd[1525]: time="2025-12-12T17:39:15.781170865Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Dec 12 17:39:15.781327 containerd[1525]: time="2025-12-12T17:39:15.781276946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Dec 12 17:39:15.781474 kubelet[2690]: E1212 17:39:15.781433 2690 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 17:39:15.781547 kubelet[2690]: E1212 17:39:15.781487 2690 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Dec 12 17:39:15.781669 kubelet[2690]: E1212 17:39:15.781563 2690 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-clxcs_calico-system(41d383b7-79ce-4986-93ed-9df24d00cb6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Dec 12 17:39:15.781669 kubelet[2690]: E1212 17:39:15.781607 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-clxcs" podUID="41d383b7-79ce-4986-93ed-9df24d00cb6a"
Dec 12 17:39:18.354535 kubelet[2690]: E1212 17:39:18.354336 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66f8d7bb8d-b4hvj" podUID="82dd8051-1568-452c-81da-df375ca13b0f"
Dec 12 17:39:19.602815 systemd[1]: Started sshd@17-10.0.0.93:22-10.0.0.1:41956.service - OpenSSH per-connection server daemon (10.0.0.1:41956).
Dec 12 17:39:19.682348 sshd[5008]: Accepted publickey for core from 10.0.0.1 port 41956 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:39:19.683750 sshd-session[5008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:39:19.688229 systemd-logind[1509]: New session 18 of user core.
Dec 12 17:39:19.696494 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 12 17:39:19.820572 sshd[5011]: Connection closed by 10.0.0.1 port 41956
Dec 12 17:39:19.820894 sshd-session[5008]: pam_unix(sshd:session): session closed for user core
Dec 12 17:39:19.825064 systemd[1]: sshd@17-10.0.0.93:22-10.0.0.1:41956.service: Deactivated successfully.
Dec 12 17:39:19.827156 systemd[1]: session-18.scope: Deactivated successfully.
Dec 12 17:39:19.828861 systemd-logind[1509]: Session 18 logged out. Waiting for processes to exit.
Dec 12 17:39:19.830101 systemd-logind[1509]: Removed session 18.
Dec 12 17:39:24.834746 systemd[1]: Started sshd@18-10.0.0.93:22-10.0.0.1:56000.service - OpenSSH per-connection server daemon (10.0.0.1:56000).
Dec 12 17:39:24.903063 sshd[5030]: Accepted publickey for core from 10.0.0.1 port 56000 ssh2: RSA SHA256:Fz/phd4oNW2GPuRhgfxzCU2cCuIqkc+QOLezvK8vTLg
Dec 12 17:39:24.904572 sshd-session[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 12 17:39:24.909903 systemd-logind[1509]: New session 19 of user core.
Dec 12 17:39:24.920445 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 12 17:39:25.134573 sshd[5033]: Connection closed by 10.0.0.1 port 56000
Dec 12 17:39:25.134906 sshd-session[5030]: pam_unix(sshd:session): session closed for user core
Dec 12 17:39:25.140338 systemd[1]: sshd@18-10.0.0.93:22-10.0.0.1:56000.service: Deactivated successfully.
Dec 12 17:39:25.142176 systemd[1]: session-19.scope: Deactivated successfully.
Dec 12 17:39:25.143330 systemd-logind[1509]: Session 19 logged out. Waiting for processes to exit.
Dec 12 17:39:25.144910 systemd-logind[1509]: Removed session 19.
Dec 12 17:39:26.353526 kubelet[2690]: E1212 17:39:26.353443 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f84df79cc-drm9h" podUID="93c57314-3f67-4215-bf04-8f4e3f7c0b74"
Dec 12 17:39:26.355084 kubelet[2690]: E1212 17:39:26.354875 2690 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f6g67" podUID="36c49735-9da0-46fd-8634-a3bd00152ea8"